diff --git "a/abstractive/test.jsonl" "b/abstractive/test.jsonl" new file mode 100644--- /dev/null +++ "b/abstractive/test.jsonl" @@ -0,0 +1,100 @@ +{"id": "E09-1056", "document": "Handling terminology is an important matter in a translation workflow . However , current Machine Translation ( MT ) systems do not yet propose anything proactive upon tools which assist in managing terminological databases . In this work , we investigate several enhancements to analogical learning and test our implementation on translating medical terms . We show that the analogical engine works equally well when translating from and into a morphologically rich language , or when dealing with language pairs written in different scripts . Combining it with a phrasebased statistical engine leads to significant improvements . If machine translation is to meet commercial needs , it must offer a sensible approach to translating terms . Currently , MT systems offer at best database management tools which allow a human ( typically a translator , a terminologist or even the vendor of the system ) to specify bilingual terminological entries . More advanced tools are meant to identify inconsistencies in terminological translations and might prove useful in controlledlanguage situations ( Itagaki et al . , 2007 ) . One approach to translate terms consists in using a domain-specific parallel corpus with standard alignment techniques ( Brown et al . , 1993 ) to mine new translations . Massive amounts of parallel data are certainly available in several pairs of languages for domains such as parliament debates or the like . However , having at our disposal a domain-specific ( e.g. computer science ) bitext with an adequate coverage is another issue . One might argue that domain-specific comparable ( or perhaps unrelated ) corpora are easier to acquire , in which case context-vector techniques ( Rapp , 1995 ; Fung and McKeown , 1997 ) can be used to identify the translation of terms . We certainly agree with that point of view to a certain extent , but as discussed by Morin et al . ( 2007 ) , for many specific domains and pairs of languages , such resources simply do not exist . Furthermore , the task of translation identification is more difficult and error-prone . Analogical learning has recently regained some interest in the NLP community . Lepage and Denoual ( 2005 ) proposed a machine translation system entirely based on the concept of formal analogy , that is , analogy on forms . Stroppa and Yvon ( 2005 ) applied analogical learning to several morphological tasks also involving analogies on words . Langlais and Patry ( 2007 ) applied it to the task of translating unknown words in several European languages , an idea investigated as well by Denoual ( 2007 ) for a Japanese to English translation task . In this study , we improve the state-of-the-art of analogical learning by ( i ) proposing a simple yet effective implementation of an analogical solver ; ( ii ) proposing an efficient solution to the search issue embedded in analogical learning , ( iii ) investigating whether a classifier can be trained to recognize bad candidates produced by analogical learning . We evaluate our analogical engine on the task of translating terms of the medical domain ; a domain well-known for its tendency to create new words , many of which being complex lexical constructions . Our experiments involve five language pairs , including languages with very different morphological systems . 
In the remainder of this paper , we first present in Section 2 the principle of analogical learning . Practical issues in analogical learning are discussed in Section 3 along with our solutions . In Section 4 , we report on experiments we conducted with our analogical device . We conclude this study and discuss future work in Section 5 . In this study , we proposed solutions to practical issues involved in analogical learning . A simple yet effective implementation of a solver is described . A search strategy is proposed which outperforms the one described in ( Langlais and Patry , 2007 ) . Also , we showed that a classifier trained to select good candidate translations outperforms the most-frequently-generated heuristic used in several works on analogical learning . Our analogical device was used to translate medical terms in different language pairs . The approach rates comparably across the 10 translation directions we considered . In particular , we do not see a drop in performance when translating into a morphology rich language ( such as Finnish ) , or when translating into languages with different scripts . Averaged over all translation directions , the best variant could translate in first position 21 % of the terms with a precision of 57 % , while at best , one could translate 30 % of the terms with a perfect precision . We show that the analogical translations are of better quality than those produced by a phrase-based engine trained at the character level , albeit with much lower recall . A straightforward combination of both approaches led an improvement of 5.3 BLEU points over the SMT alone . Better SMT performance could be obtained with a system based on morphemes , see for instance ( Toutanova et al . , 2008 ) . However , since lists of morphemes specific to the medical domain do not exist for all the languages pairs we considered here , unsupervised methods for acquiring morphemes would be necessary , which is left as a future work . In any case , this comparison is meaningful , since both the SMT and the analogical device work at the character level . This work opens up several avenues . First , we will test our approach on terminologies from different domains , varying the size of the training material . Second , analyzing the segmentation induced by analogical learning would be interesting . Third , we need to address the problem of combining the translations produced by analogy into a front-end statistical translation engine . Last , there is no reason to constrain ourselves to translating terminology only . We targeted this task in the first place , because terminology typically plugs translation systems , but we think that analogical learning could be useful for translating infrequent entities .", "challenge": "Existing machine translation systems do not manage domain-specific terms mainly due to the expensiveness for certain language pairs and domains.", "approach": "They propose simple methods to improve existing analogy solvers and apply them to the translation of medial terms.", "outcome": "The proposed analogy solvers significantly improve machine translation of the medical domain for morphologically rich languages and language pairs in different scripts."} +{"id": "N19-1362", "document": "Reasoning about implied relationships ( e.g. paraphrastic , common sense , encyclopedic ) between pairs of words is crucial for many cross-sentence inference problems . 
This paper proposes new methods for learning and using embeddings of word pairs that implicitly represent background knowledge about such relationships . Our pairwise embeddings are computed as a compositional function on word representations , which is learned by maximizing the pointwise mutual information ( PMI ) with the contexts in which the two words cooccur . We add these representations to the cross-sentence attention layer of existing inference models ( e.g. BiDAF for QA , ESIM for NLI ) , instead of extending or replacing existing word embeddings . Experiments show a gain of 2.7 % on the recently released SQuAD 2.0 and 1.3 % on MultiNLI . Our representations also aid in better generalization with gains of around 6 - 7 % on adversarial SQuAD datasets , and 8.8 % on the adversarial entailment test set by Glockner et al . ( 2018 ) . Reasoning about relationships between pairs of words is crucial for cross sentence inference problems such as question answering ( QA ) and natural language inference ( NLI ) . In NLI , for example , given the premise \" golf is prohibitively expensive \" , inferring that the hypothesis \" golf is a cheap pastime \" is a contradiction requires one to know that expensive and cheap are antonyms . Recent work ( Glockner et al . , 2018 ) has shown that current models , which rely heavily on unsupervised single-word embeddings , struggle to learn such relationships . In this paper , we show that they can be learned with word pair vectors ( pair2vec 1 ) , which are trained unsupervised , and which significantly improve performance when added to existing cross-sentence attention mechanisms . Unlike single-word representations , which typically model the co-occurrence of a target word x with its context c , our word-pair representations are learned by modeling the three-way cooccurrence between words ( x , y ) and the context c that ties them together , as seen in Table 1 . While similar training signals have been used to learn models for ontology construction ( Hearst , 1992 ; Snow et al . , 2005 ; Turney , 2005 ; Shwartz et al . , 2016 ) and knowledge base completion ( Riedel et al . , 2013 ) , this paper shows , for the first time , that large scale learning of pairwise embeddings can be used to directly improve the performance of neural cross-sentence inference models . More specifically , we train a feedforward network R(x , y ) that learns representations for the individual words x and y , as well as how to compose them into a single vector . Training is done by maximizing a generalized notion of the pointwise mutual information ( PMI ) among x , y , and their context c using a variant of negative sampling ( Mikolov et al . , 2013a ) . Making R(x , y ) a compositional function on individual words alleviates the sparsity that necessarily comes with embedding pairs of words , even at a very large scale . We show that our embeddings can be added to existing cross-sentence inference models , such as BiDAF++ ( Seo et al . , 2017 ; Clark and Gardner , 2018 ) for QA and ESIM ( Chen et al . , 2017 ) for NLI . Instead of changing the word embeddings that are fed into the encoder , we add the pretrained pair representations to higher layers in the network where cross sentence attention mechanisms are used . This allows the model to use the background knowledge that the pair embeddings implicitly encode to reason about the likely relationships between the pairs of words it aligns . 
Experiments show that simply adding our wordpair embeddings to existing high-performing models , which already use ELMo ( Peters et al . , 2018 ) , results in sizable gains . We show 2.72 F1 points over the BiDAF++ model ( Clark and Gardner , 2018 ) on SQuAD 2.0 ( Rajpurkar et al . , 2018 ) , as well as a 1.3 point gain over ESIM ( Chen et al . , 2017 ) on MultiNLI ( Williams et al . , 2018 ) . Additionally , our approach generalizes well to adversarial examples , with a 6 - 7 % F1 increase on adversarial SQuAD ( Jia and Liang , 2017 ) and a 8.8 % gain on the Glockner et al . ( 2018 ) NLI benchmark . An analysis of pair2vec on word analogies suggests that it complements the information in single-word representations , especially for encyclopedic and lexicographic relations . We presented new methods for training and using word pair embeddings that implicitly represent background knowledge . Our pair embeddings are computed as a compositional function of the individual word representations , which is learned by maximizing a variant of the PMI with the contexts in which the the two words co-occur . Experiments on cross-sentence inference benchmarks demonstrated that adding these representations to existing models results in sizable improvements for both in-domain and adversarial settings . Published concurrently with this paper , BERT ( Devlin et al . , 2018 ) , which uses a masked language model objective , has reported dramatic gains on multiple semantic benchmarks including question-answering , natural language inference , and named entity recognition . Potential avenues for future work include multitasking BERT with pair2vec in order to more directly incorporate reasoning about word pair relations into the BERT objective . Bishan Yang and Tom Mitchell . 2017 . Leveraging knowledge bases in LSTMs for improving machine reading . In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics ( Volume 1 : Long Papers ) , pages 1436 - 1446 . Association for Computational Linguistics .", "challenge": "Existing methods for word pair relationship reasoning suffer from learning such relationships because they heavily rely on unsupervised single-word embeddings.", "approach": "They propose a compositional function on word representations which learns maximizing the pointwise mutual information with contexts of two words to compute pairwise embeddings.", "outcome": "The proposed representations coupled with neural cross-sentence inference models achieve gains on SQuAD 2.0 and MultiNLI and generalize better on adversarial datasets."} +{"id": "P01-1040", "document": "It is widely recognized that the proliferation of annotation schemes runs counter to the need to re-use language resources , and that standards for linguistic annotation are becoming increasingly mandatory . To answer this need , we have developed a representation framework comprised of an abstract model for a variety of different annotation types ( e.g. , morpho-syntactic tagging , syntactic annotation , co-reference annotation , etc . ) , which can be instantiated in different ways depending on the annotator s approach and goals . In this paper we provide an overview of our representation framework and demonstrate its applicability to syntactic annotation . We show how the framework can contribute to comparative evaluation and merging of parser output and diverse syntactic annotation schemes . 
It is widely recognized that the proliferation of annotation schemes runs counter to the need to re-use language resources , and that standards for linguistic annotation are becoming increasingly mandatory . In particular , there is a need for a general framework for linguistic annotation that is flexible and extensible enough to accommodate different annotation types and different theoretical and practical approaches , while at the same time enabling their representation in a pivot format that can serve as the basis for comparative evaluation of parser output , such as PARSEVAL ( Harrison , et al . , 1991 ) , as well as the development of reusable editing and processing tools . To answer this need , we have developed a representation framework comprised of an abstract model for a variety of different annotation types ( e.g. , morpho-syntactic tagging , syntactic annotation , co-reference annotation , etc . ) , which can be instantiated in different ways depending on the annotator s approach and goals . We have implemented both the abstract model and various instantiations using XML schemas ( Thompson , et al . , 2000 ) , the Resource Definition Framework ( RDF ) ( Lassila and Swick , 2000 ) and RDF schemas ( Brickley and Guha , 2000 ) , which enable description and definition of abstract data models together with means to interpret , via the model , information encoded according to different conventions . The results have been incorporated into XCES ( Ide , et al . , 2000a ) , part of the EAGLES Guidelines developed by the Expert Advisory Group on Language Engineering Standards ( EAGLES ) 1 . The XCES provides a ready-made , standard encoding format together with a data architecture designed specifically for linguistically annotated corpora . In this paper we provide an overview of our representation framework and demonstrate its applicability to syntactic annotation . The framework has been applied to the representation of terminology ( Terminological Markup Framework2 , ISO project n.16642 ) and computational lexicons ( Ide , et al . , 2000b ) , thus demonstrating its general applicability for a variety of linguistic annotation types . We also show how the framework can contribute to comparison and merging of diverse syntactic annotation schemes . The XCES framework for linguistic annotation is built around some relatively straightforward ideas : separation of information conveyed by means of structure and information conveyed directly by specification of content categories ; development of an abstract format that puts a layer of abstraction between site-specific annotation schemes and standard specifications ; and creation of a Data Category Registry to provide a reference set of annotation categories . The emergence of XML and related standards such as RDF provides the enabling technology . We are , therefore , at a point where the creation and use of annotated data and concerns about the way it is represented can be treated separately that is , researchers can focus on the question of what to encode , independent of the question of how to encode it . 
The end result should be greater coherence , consistency , and ease of use and access for annotated data .", "challenge": "The proliferation of annotation schemes calls for a need for flexible and extensive general annotation schemes which can accommodate different annotation types and approaches.", "approach": "They propose a representation framework with an abstract model for different annotation types which can be instantiated in different ways depending on needs.", "outcome": "They apply the proposed framework to representations of terminology and computational lexicons and show its applicability and how it can compare and merge annotation schemes."} +{"id": "2020.acl-main.642", "document": "Visual question answering aims to answer the natural language question about a given image . Existing graph-based methods only focus on the relations between objects in an image and neglect the importance of the syntactic dependency relations between words in a question . To simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question , we propose a novel dual channel graph convolutional network ( DC-GCN ) for better combining visual and textual advantages . The DC-GCN model consists of three parts : an I-GCN module to capture the relations between objects in an image , a Q-GCN module to capture the syntactic dependency relations between words in a question , and an attention alignment module to align image representations and question representations . Experimental results show that our model achieves comparable performance with the state-of-theart approaches . As a form of visual Turing test , visual question answering ( VQA ) has drawn much attention . The goal of VQA ( Antol et al . , 2015 ; Goyal et al . , 2017 ) is to answer a natural language question related to the contents of a given image . Attention mechanisms are served as the backbone of the previous mainstream approaches ( Lu et al . , 2016 ; Yang et al . , 2016 ; Yu et al . , 2017 ) , however , they tend to catch only the most discriminative information , ignoring other rich complementary clues ( Liu et al . , 2019 ) . Recent VQA studies have been exploring higher level semantic representation of images , notably using graph-based structures for better image understanding , such as scene graph generation ( Xu et al . , 2017 ; Yang et al . , 2018 ) , visual relationship detection ( Yao et al . , 2018 ) , object counting ( Zhang et al . , 2018a ) , and relation reasoning ( Cao et al . , 2018 ; Li et al . , 2019 ; Cadene et al . , 2019a ) . Representing images as graphs allows one to explicitly model interactions between two objects in an image , so as to seamlessly transfer information between graph nodes ( e.g. , objects in an image ) . Very recent research methods ( Li et al . , 2019 ; Cadene et al . , 2019a ; Yu et al . , 2019 ) have achieved remarkable performances , but there is still a big gap between them and human . As shown in Figure 1 ( a ) , given an image of a group of persons and the corresponding question , a VQA system needs to not only recognize the objects in an image ( e.g. , batter , umpire and catcher ) , but also grasp the textual information in the question \" what color is the umpire 's shirt \" . However , even many competitive VQA models struggle to process them accurately , and as a result predict the incorrect answer ( black ) rather than the correct answer ( blue ) , including the state-of-the-art methods . 
Although the relations between two objects in an image have been considered , the attention-based VQA models lack building blocks to explicitly capture the syntactic dependency relations between words in a question . As shown in Figure 1 ( c ) , these dependency relations can reflect which object is being asked ( e.g. , the word umpire 's modifies the word shirt ) and which aspect of the object is being asked ( e.g. , the word color is the direct object of the word is ) . If a VQA model only knows the word shirt rather than the relation between words umpire 's and shirt in a question , it is difficult to distinguish which object is being asked . In fact , we do need the modified relations to discriminate the correct object from multiple similar objects . Therefore , we consider that it is necessary to explore the relations between words at linguistic level in addition to constructing the relations between objects at visual level . Motivated by this , we propose a dual channel graph convolutional network ( DC-GCN ) to simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question . Our proposed DC-GCN model consists of an Image-GCN ( I-GCN ) module , a Question GCN ( Q-GCN ) module , and an attention alignment module . The I-GCN module captures the relations between objects in an image , the Q-GCN module captures the syntactic dependency relations between words in a question , and the attention alignment module is used to align two representations of image and question . The contributions of this work are summarized as follows : 1 ) We propose a dual channel graph convolutional network ( DC-GCN ) to simultaneously capture the visual and textual relations , and design the attention alignment module to align the multimodal representations , thus reducing the semantic gaps between vision and language . 2 ) We explore how to construct the syntactic dependency relations between words at linguistic level via graph convolutional networks as well as the relations between objects at visual level . 3 ) We conduct extensive experiments and ablation studies on VQA-v2 and VQA-CP-v2 datasets to examine the effectiveness of our DC-GCN model . Experimental results show that the DC-GCN model achieves competitive performance with the state-of-the-art approaches . In this paper , we propose a dual channel graph convolutional network to explore the relations between objects in an image and the syntactic dependency relations between words in a question . Furthermore , we explicitly construct the relations between words by dependency tree and align the image and question representations by an attention alignment module to reduce the gaps between vision and language . Extensive experiments on the VQA-v2 and VQA-CP-v2 datasets demonstrate that our model achieves comparable performance with the stateof-the-art approaches . 
We will explore more complicated object relation modeling in future work .", "challenge": "Existing methods for visual question answering only focus on relations between objects in an image neglecting syntactic dependency relations between words in a question.", "approach": "They propose a dual channel graph convolutional network which consists of three modules to capture relations within an image, a syntactic dependency, and between modalities.", "outcome": "The proposed model achieves comparable performance with the state-of-the-art approaches on VQA-v2 and VQA-CP-v2 datasets."} +{"id": "W96-0206", "document": "This paper investigates model merging , a technique for deriving Markov models from text or speech corpora . Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models . We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task . The experiments show the advantage of model merging over the standard bigram approach . The merged model assigns a lower perplexity to the test set and uses considerably fewer states . Hidden Markov Models are commonly used for statistical language models , e.g. in part-of-speech tagging and speech recognition ( Rabiner , 1989 ) . The models need a large set of parameters which are induced from a ( text- ) corpus . The parameters should be optimal in the sense that the resulting models assign high probabilities to seen training data as well as new data that arises in an application . There are several methods to estimate model parameters . The first one is to use each word ( type ) as a state and estimate the transition probabilities between two or three words by using the relative frequencies of a corpus . This method is commonly used in speech recognition and known as word-bigram or word-trigram model . The relative frequencies have to be smoothed to handle the sparse data problem and to avoid zero probabilities . The second method is a variation of the first method . Words are automatically grouped , e.g. by similarity of distribution in the corpus ( Pereira et al . , 1993 ) . The relative frequencies of pairs or triples of groups ( categories , clusters ) are used as model parameters , each group is represented by a state in the model . The second method has the advantage of drastically reducing the number of model parameters and thereby reducing the sparse data problem ; there is more data per group than per word , thus estimates are more precise . The third method uses manually defined categories . They are linguistically motivated and usually called parts-of-speech . An important difference to the second method with automatically derived categories is that with the manual definition a word can belong to more than one category . A corpus is ( manually ) tagged with the categories and transition probabilities between two or three categories are estimated from their relative frequencies . This method is commonly used for part-of-speech tagging ( Church , 1988 ) . The fourth method is a variation of the third method and is also used for part-of-speech tagging . This method does not need a pre-annotated corpus for parameter estimation . Instead it uses a lexicon stating the possible parts-of-speech for each word , a raw text corpus , and an initial bias for the transition and output probabilities . The parameters are estimated by using the Baum-Welch algorithm ( Baum et al . 
, 1970 ) . The accuracy of the derived model depends heavily on the initial bias , but with a good choice results are comparable to those of method three ( Cutting et al . , 1992 ) . This paper investigates a fifth method for estimating natural language models , combining the advantages of the methods mentioned above . It is suitable for both speech recognition and partof-speech tagging , has the advantage of automatically deriving word categories from a corpus and is capable of recognizing the fact that a word belongs to more than one category . Unlike other techniques it not only induces transition and output probabilities , but also the model topology , i.e. , the number of states , and for each state the outputs that have a non-zero probability . The method is called model merging and was introduced by ( Omohundro , 1992 ) . The rest of the paper is structured as follows . We first give a short introduction to Markov mo-dels and present the model merging technique . Then , techniques for reducing the time complexity are presented and we report two experiments using these techniques . We investigated model merging , a technique to induce Markov models from corpora .. The original procedure is improved by introducing constraints and a different initial model . The procedures are shown to be applicable to a transliterated speech corpus . The derived models assign lower perplexities to test data than the standard bigram model derived from the same training corpus . Additionally , the merged model was much smaller than the bigram model . The experiments revealed a feature of model merging that allows for improvement of the method 's time complexity . There is a large initial part of merges that do not change the model 's perplexity w.r.t , the test part , and that do not influence the final optimal model . The time needed to derive a model is drastically reduced by abbreviating these initial merges . Instead of starting with the trivial model , one can start with a smaller , easy-to-produce model , but one has to ensure that its size is still larger than the optimal model .", "challenge": "Statistical language models based on the Hidden Markov Model need a large set of parameters that is optimal to assign high probabilities to seen data.", "approach": "They propose to use model merging to combine the existing methods by starting with a large and specific model with smaller and more general models.", "outcome": "The proposed approach has the advantage over the bigram approach which is assigning a lower perplexity with fewer states for a speech recognition task."} +{"id": "P13-1116", "document": "This paper presents a novel deterministic algorithm for implicit Semantic Role Labeling . The system exploits a very simple but relevant discursive property , the argument coherence over different instances of a predicate . The algorithm solves the implicit arguments sequentially , exploiting not only explicit but also the implicit arguments previously solved . In addition , we empirically demonstrate that the algorithm obtains very competitive and robust performances with respect to supervised approaches that require large amounts of costly training data . Traditionally , Semantic Role Labeling ( SRL ) systems have focused in searching the fillers of those explicit roles appearing within sentence boundaries ( Gildea and Jurafsky , 2000 , 2002 ; Carreras and M\u00e0rquez , 2005 ; Surdeanu et al . , 2008 ; Haji\u010d et al . , 2009 ) . 
These systems limited their searchspace to the elements that share a syntactical relation with the predicate . However , when the participants of a predicate are implicit this approach obtains incomplete predicative structures with null arguments . The following example includes the gold-standard annotations for a traditional SRL process : ( 1 ) [ arg0 The network ] had been expected to have The previous analysis includes annotations for the nominal predicate loss based on the NomBank structure ( Meyers et al . , 2004 ) . In this case the annotator identifies , in the first sentence , the arguments arg0 , the entity losing something , arg1 , the thing lost , and arg3 , the source of that loss . However , in the second sentence there is another instance of the same predicate , loss , but in this case no argument has been associated with it . Traditional SRL systems facing this type of examples are not able to fill the arguments of a predicate because their fillers are not in the same sentence of the predicate . Moreover , these systems also let unfilled arguments occurring in the same sentence , like in the following example : ( 2 ) Quest Medical Inc said it adopted [ arg1 a shareholders ' rights ] [ np plan ] in which rights to purchase shares of common stock will be distributed as a dividend to shareholders of record as of Oct 23 . For the predicate plan in the previous sentence , a traditional SRL process only returns the filler for the argument arg1 , the theme of the plan . However , in both examples , a reader could easily infer the missing arguments from the surrounding context of the predicate , and determine that in ( 1 ) both instances of the predicate share the same arguments and in ( 2 ) the missing argument corresponds to the subject of the verb that dominates the predicate , Quest Medical Inc. Obviously , this additional annotations could contribute positively to its semantic analysis . In fact , Gerber and Chai ( 2010 ) pointed out that implicit arguments can increase the coverage of argument structures in NomBank by 71 % . However , current automatic systems require large amounts of manually annotated training data for each predicate . The effort required for this manual annotation explains the absence of generally applicable tools . This problem has become a main concern for many NLP tasks . This fact explains a new trend to develop accurate unsupervised systems that exploit simple but robust linguistic principles ( Raghunathan et al . , 2010 ) . In this work , we study the coherence of the predicate and argument realization in discourse . In particular , we have followed a similar approach to the one proposed by Dahl et al . ( 1987 ) who filled the arguments of anaphoric mentions of nominal predicates using previous mentions of the same predicate . We present an extension of this idea assuming that in a coherent document the different ocurrences of a predicate , including both verbal and nominal forms , tend to be mentions of the same event , and thus , they share the same argument fillers . Following this approach , we have developed a deterministic algorithm that obtains competitive results with respect to supervised methods . That is , our system can be applied to any predicate without training data . The main contributions of this work are the following : \u2022 We empirically prove that there exists a strong discourse relationship between the implicit and explicit argument fillers of the same predicates . 
\u2022 We propose a deterministic approach that exploits this discoursive property in order to obtain the fillers of implicit arguments . \u2022 We adapt to the implicit SRL problem a classic algorithm for pronoun resolution . \u2022 We develop a robust algorithm , ImpAr , that obtains very competitive results with respect to existing supervised systems . We release an open source prototype implementing this algorithm 1 . The paper is structured as follows . Section 2 discusses the related work . Section 3 presents in detail the data used in our experiments . Section 4 describes our algorithm for implicit argument resolution . Section 5 presents some experiments we have carried out to test the algorithm . Section 6 discusses the results obtained . Finally , section 7 offers some concluding remarks and presents some future research lines . In this work we have presented a robust deterministic approach for implicit Semantic Role Labeling . The method exploits a very simple but relevant discoursive coherence property that holds over explicit and implicit arguments of closely related nominal and verbal predicates . This property states that if several instances of the same predicate appear in a well-written discourse , it is very likely that they maintain the same argument fillers . We have shown the importance of this phenomenon for recovering the implicit information about semantic roles . To our knowledge , this is the first empirical study that proves this phenomenon . Based on these observations , we have developed a new deterministic algorithm , ImpAr , that obtains very competitive and robust performances with respect to supervised approaches . That is , it can be applied where there is no available manual annotations to train . The code of this algorithm is publicly available and can be applied to any document . As input it only needs the document with explicit semantic role labeling and Super-Sense annotations . These annotations can be easily obtained from plain text using available tools7 , what makes this algorithm the first effective tool available for implicit SRL . As it can be easily seen , ImpAr has a large margin for improvement . For instance , providing more accurate spans for the fillers . We also plan to test alternative approaches to solve the arguments without explicit antecedents . For instance , our system can also profit from additional annotations like coreference , that has proved its utility in previous works . Finally , we also plan to study our approach on different languages and datasets ( for instance , the SemEval-2010 dataset ) .", "challenge": "Existing semantic role labelling methods obtain predicative structures with null arguments when the participants of a predicate are implicit, without large training data.", "approach": "They propose a deterministic algorithm which exploits a relevant discursive property and solves implicit arguments sequentially without any training data.", "outcome": "The proposed method performs competitively with existing supervised methods and proves that there is a strong discourse relationship between the implicit and explicit argument filters."} +{"id": "W96-0208", "document": "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context . The algorithms tested include statistical , neural-network , decision-tree , rule-based , and case-based classification techniques . 
The specific problem tested involves disambiguating six senses of the word \" line \" using the words in the current and proceeding sentence as context . The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference . We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems . Recent research in empirical ( corpus-based ) natural language processing has explored a number of different methods for learning from data . Three general approaches are statistical , neural-network , and symbolic machine learning and numerous specific methods have been developed under each of these paradigms ( Wermter , Riloff , & Scheler , 1996 ; Charniak , 1993 ; Reilly & Sharkey , 1992 ) . An important question is whether some methods perform significantly better than others on particular types of problems . Unfortunately , there have been very few direct comparisons of alternative methods on identical test data . A somewhat indirect comparison of applying stochastic context-free grammars ( Periera & Shabes , 1992 ) , a transformation-based method ( Brill , 1993 ) , and inductive logic programming ( Zelle & Mooney , 1994 ) to parsing the ATIS ( Airline Travel Information Service ) corpus from the Penn Treebank ( Marcus , Santorini , & Marcinkiewicz , 1993 ) indicates fairly similar performance for these three very different methods . Also , comparisons of Bayesian , information-retrieval , neural-network , and case-based methods on word-sense disambiguation have also demonstrated similar performance ( Leacock , Towell , & Voorhees , 1993b ; Lehman , 1994 ) . However , in a comparison of neural-network and decision-tree methods on learning to generate the past tense of an English verb , decision trees performed significantly better ( Ling & Marinov , 1993 ; Ling , 1994 ) . Subsequent experiments on this problem have demonstrated that an inductive logic programming method produces even better results than decision trees ( Mooney & Califf , 1995 ) . In this paper , we present direct comparisons of a fairly wide range of general learning algorithms on the problem of discriminating six senses of the word \" line \" from context , using data assembled by Leacock et al . ( 1993b ) . We compare a naive Bayesian classifier ( Duda & Hart , 1973 ) , a perceptron ( Rosenblatt , 1962 ) , a decision-tree learner ( Quinlan , 1993 ) , a k nearest-neighbor classifier ( Cover & Hart , 1967 ) , logic-based DNF ( disjunctive normal form ) and CNF ( conjunctive normal form ) learners ( Mooney , 1995 ) and a decisionlist learner ( Rivest , 1987 ) . Tests on all methods used identical training and test sets , and ten separate random trials were run in order to measure average performance and allow statistical testing of the significance of any observed differences . On this particular task , we found that the Bayesian and perceptron methods perform significantly better than the remaining methods and discuss a potential reason for this observed difference . We also discuss the role of bias in machine learning and its importance in explaining the observed differences in the performance of alternative methods on specific problems . This paper has presented fairly comprehensive experiments comparing seven quite different empirical methods on learning to disambiguate words in context . 
Methods that employ a weighted combination of a large set of features , such as simple Bayesian and neural-network methods , were shown to perform better than alternative methods such as decision-tree , rule-based , and instancebased techniques on the problem of disambiguating the word \" line \" into one of six possible senses given the words that appear in the current and previous sentence as context . Although different learning algorithms can frequently perform quite similarly , they all have specific biases in their representation of concepts and therefore can illustrate both strengths and weaknesses in particular applications . Only rigorous experimental comparisons together with a qualitative analysis and explanation of their results can help determine the appropriate methods for particular problems in natural language processing .", "challenge": "Statistical, neural-network, and symbolic machine learning and numerous specific methods have been proposed but only a few works compare them with identical test data.", "approach": "They compare statistical, neural-network, decision-tree, rule-based, and case-based classification techniques on the disambiguation of meanings of words from contexts using identical training and test sets.", "outcome": "They show that the methods that employ a weighted combination of a large set of features such as statistical and neural-network methods perform the best."} +{"id": "2022.naacl-main.154", "document": "The development of over-parameterized pretrained language models has made a significant contribution toward the success of natural language processing . While over-parameterization of these models is the key to their generalization power , it makes them unsuitable for deployment on low-capacity devices . We push the limits of state-of-the-art Transformer-based pre-trained language model compression using Kronecker decomposition . We present our KroneckerBERT , a compressed version of the BERT BASE model obtained by compressing the embedding layer and the linear mappings in the multi-head attention , and the feed-forward network modules in the Transformer layers . Our KroneckerBERT is trained via a very efficient two-stage knowledge distillation scheme using far fewer data samples than state-of-the-art models like MobileBERT and TinyBERT . We evaluate the performance of KroneckerBERT on well-known NLP benchmarks . We show that our KroneckerBERT with compression factors of 7.7\u00d7 and 21\u00d7 outperforms state-of-theart compression methods on the GLUE and SQuAD benchmarks . In particular , using only 13 % of the teacher model parameters , it retain more than 99 % of the accuracy on the majority of GLUE tasks . In recent years , the emergence of Pre-trained Language Models ( PLMs ) has led to a significant breakthrough in Natural Language Processing ( NLP ) . The introduction of Transformers and unsupervised pre-training on enormous unlabeled data are the two main factors that contribute to this success . Transformer-based models ( Devlin et al . , 2018 ; Radford et al . , 2019 ; Yang et al . , 2019 ; Shoeybi et al . , 2019 ) are powerful yet highly overparameterized . The enormous size of these models does not meet the constraints imposed by edge devices on memory , latency , and energy consumption . Therefore there has been a growing interest in developing new methodologies and frameworks for the compression of these large PLMs . 
Similar to other deep learning models , the main directions for the compression of these models include low-bit quantization ( Gong et al . , 2014 ; Prato et al . , 2019 ) , network pruning ( Han et al . , 2015 ) , matrix decomposition ( Yu et al . , 2017 ; Lioutas et al . , 2020 ) and Knowledge distillation ( KD ) ( Hinton et al . , 2015 ) . These methods are either used in isolation or in combination to improve compression-performance trade-off . Recent works have been relatively successful in compressing Transformer-based PLMs to a certain degree ( Sanh et al . , 2019 ; Sun et al . , 2019 ; Jiao et al . , 2019 ; Sun et al . , 2020 ; Xu et al . , 2020 ; Wang et al . , 2020 ; Kim et al . , 2021 ) ; however , moderate and extreme compression of these models ( compression factors > 5 and 10 resepctively ) is still quite challenging . In particular , several works ( Mao et al . , 2020 ; Zhao et al . , 2019a Zhao et al . , , 2021 ) ) that have tried to go beyond the compression factor of 10 , have done so at the expense of a significant drop in performance . Following the classical assumption that matrices often follow a low-rank structure , low-rank decomposition methods have been used for compression of weight matrices in deep learning models ( Yu et al . , 2017 ; Swaminathan et al . , 2020 ; Winata et al . , 2019 ) and especially Transformer-based models ( Noach and Goldberg , 2020 ; Mao et al . , 2020 ) . However , low-rank decomposition methods only exploit redundancies of the weight matrix in the horizontal and vertical dimensions and thus limit the flexibility of the compressed model . Kronecker decomposition on the other hand exploits redun- dancies in predefined patches and hence allows for more flexibility in their representation . Recent works prove Kronecker product to be more effective in retaining accuracy after compression than SVD ( Thakker et al . , 2019 ) . This work proposes a novel framework that uses Kronecker decomposition for compression of Transformer-based PLMs and provides a very promising compression-performance trade-off for medium and high compression levels , with 13 % and 5 % of the original model parameters respectively . We use Kronecker decomposition for the compression of both Transformer layers and the embedding layer . For Transformer layers , the compression is achieved by representing every weight matrix both in the multi-head attention ( MHA ) and the feed-forward neural network ( FFN ) as a Kronecker product of two smaller matrices . We also propose a Kronecker decomposition for compression of the embedding layer . Previous works have tried different techniques to reduce the enormous memory consumption of this layer ( Khrulkov et al . , 2019 ; Li et al . , 2018 ) . Our Kronecker decomposition method can substantially reduce the amount of required memory while maintaining low computation . Using Kronecker decomposition for large compression factors leads to a reduction in the model expressiveness . This is due to the nature of the Kronecker product and the fact that elements in this representation are tied together . To address this issue , we propose to distill knowledge from the intermediate layers of the original uncompressed network to the Kronecker network during training . Training of the state-of-the art BERT compression models ( Zhao et al . , 2019a , b ; Sun et al . , 2020 Sun et al . , , 2019 ) ) involve an extensive training which requires vast computational resources . For example in ( Sun et al . 
, 2020 ) , first a specially designed teacher , i.e IB-BERT LARGE is trained from scratch on the en-tire English wikipedia and Book Corpus . The student is then pretrained on the same corpus via KD while undergoing an additional progressive KD phase . Another example is TinyBERT ( Jiao et al . , 2019 ) which requires pretraining on the entire English Wikipedia and also uses extensive data augmentation ( 20\u00d7 ) for fine-tuning on the downstream tasks . We show that our Kronecker BERT can out perform state-of-the-art with significantly less training requirements . More precisely , our Kronecker-BERT model undergoes a very light pretraining on only 10 % of the English Wikipedia for 3 epochs followed by finetuning on the original downstream data . Note that , while our evaluations in this work are limited to BERT , this proposed compression method can be directly used to compress other Transformer-based NLP models . The main contributions of this paper are as follows : \u2022 Compression of the embedding layer using the Kronecker decomposition with very low computational overhead . We introduced a novel method for compressing Transformer-based language models that uses Kronecker decomposition for the compression of the embedding layer and the linear mappings within the Transformer blocks . The proposed framework was used to compress the BERT BASE model . We used a very light two-stage KD method to train the compressed model . We show that the proposed framework can significantly reduce the size and the number of computations while outperforming stateof-the-art . The proposed method can be directly applied for compression of other Transformer-based language models . The combination of the proposed method with other compression techniques such layer truncation , pruning and quantization can be an interesting direction for future work .", "challenge": "Moderate or Extreme model compressions with factors of > 5 or 10 for transformer models are challenging without a significant performance drop or massive training.", "approach": "They propose a compression method with Kronecker decomposition with a two-staged knowledge distillation from intermediate layers of the original model to avoid losing the expressiveness.", "outcome": "Their model outperforms state-of-the-art compressed models on GLUE and SQuAD and retains more than 99% of the accuracy while using 13% of the original parameters."} +{"id": "2022.naacl-main.394", "document": "Recent causal probing literature reveals when language models and syntactic probes use similar representations . Such techniques may yield \" false negative \" causality results : models may use representations of syntax , but probes may have learned to use redundant encodings of the same syntactic information . We demonstrate that models do encode syntactic information redundantly and introduce a new probe design that guides probes to consider all syntactic information present in embeddings . Using these probes , we find evidence for the use of syntax in models where prior methods did not , allowing us to boost model performance by injecting syntactic information into representations . Here , we identify a limitation of prior causal probing art in which redundant information in embeddings could lead to probes and models using different representations of the same information , which in turn could lead to uninformative causal analysis results . 
We propose a new probe architecture that addresses this limitation by encouraging probes to use all sources of information in embeddings . Recent large neural models like BERT and GPT-3 exhibit impressive performance on a large variety of linguistic tasks , from sentiment analysis to question-answering ( Devlin et al . , 2019 ; Brown et al . , 2020 ) . Given the models ' impressive performance , but also their complexity , researchers have developed tools to understand what patterns models have learned . In probing literature , researchers develop \" probes : \" models designed to extract information from the representations of trained models ( Linzen et al . , 2016 ; Conneau et al . , 2018 ; Hall Maudslay et al . , 2020 ) . For example , Hewitt and Manning ( 2019 ) demonstrated that one can train accurate linear classifiers to predict syntactic structure from BERT or ELMO embeddings . These probes reveal what information is present in model embeddings but not how or if models use that information ( Belinkov , 2021 ) . To address this gap , new research in causal analysis seeks to understand how aspects of models ' representations affect their behavior ( Elazar et al . , 2020 ; Ravfogel et al . , 2020 ; Giulianelli et al . , 2018 ; Tucker et al . , 2021 ; Feder et al . , 2021 ) . Typically , these techniques create counterfactual representations that differ from the original according to some Figure 1 : In a 2D embedding space , a model might redundantly encode syntactic representations of a sentence like \" the girl saw the boy with the telescope . \" Redundant encodings could cause misalignment between the model 's decision boundary ( blue ) and a probe 's ( red ) . We introduce dropout probes ( green ) to use all informative dimensions . property ( e.g. , syntactic interpretation of the sentence ) . Researchers then compare outputs when using original and counterfactual embeddings to assess whether a property encoded in the representation is causally related to model behavior . Unfortunately , negative results -wherein researchers report that models do not appear to use a property causally -are difficult to interpret . Such failures can be attributed to a model truly not using the property ( true negatives ) , or to a failure of the technique ( false negatives ) . For example , as depicted in Figure 1 , if a language model encodes syntactic information redundantly ( here illustrated in two-dimensions ) , the model and probe may differentiate among parses along orthogonal dimensions . When creating counterfactual representations with such probes , researchers could incorrectly conclude that the model does not use syntactic information . In this work , we present new evidence for the causal use of syntactic representations on task performance in BERT , using newly-designed probes that take into account the potential redundancy in a model 's internal representation . First , we find evidence for representational redundancy in BERTbased models . Based on these findings , we propose a new probe design that encourages the probe to use all relevant representations of syntax in model embeddings . These probes are then used to assess if language models use representations of syntax causally , and , unlike prior art , we find that some fine-tuned models do exhibit signatures of causal use of syntactic information . 
Lastly , having found that these models causally use representations of syntax , we used our probes to boost a questionanswering model 's performance by \" injecting \" syntactic information at test time.1 2 Related Work In this work , we designed and evaluated \" dropout probes , \" a new neural probing architecture for generating useful causal analysis of trained language models . Our technical contribution -adding a dropout layer before probes -was inspired by a theory of redundant syntactic encodings in models . Our results fit within three categories : we showed that 1 ) models encoded syntactic information redundantly , 2 ) dropout probes , unlike standard probes , revealed that QA models used syntactic representations causally , and 3 ) by injecting syntactic information at test time in syntacticallychallenging domains , we could increase model performance without retraining . Despite our step towards better understanding of pretrained models , future work remains . Natural extensions include studying pretrained models beyond those considered in this work , further research into redundancy in embeddings , more investigation into inserting symbolic knowledge into neural representations , and new methods for training models to respond appropriately to interventions .", "challenge": "Redundant encodings of syntactic information can make existing probing techniques for causal use of syntactic knowledge by language models provide misleading false negative results.", "approach": "They first show that models redundantly encode information and propose a dropout-based probing technique that considers all syntactic information in embeddings.", "outcome": "They find evidence of models using syntactic information not identified by previous methods by using the proposed technique and further improve the QA model's performance."} +{"id": "P13-1066", "document": "Online discussion forums are a popular platform for people to voice their opinions on any subject matter and to discuss or debate any issue of interest . In forums where users discuss social , political , or religious issues , there are often heated debates among users or participants . Existing research has studied mining of user stances or camps on certain issues , opposing perspectives , and contention points . In this paper , we focus on identifying the nature of interactions among user pairs . The central questions are : How does each pair of users interact with each other ? Does the pair of users mostly agree or disagree ? What is the lexicon that people often use to express agreement and disagreement ? We present a topic model based approach to answer these questions . Since agreement and disagreement expressions are usually multiword phrases , we propose to employ a ranking method to identify highly relevant phrases prior to topic modeling . After modeling , we use the modeling results to classify the nature of interaction of each user pair . Our evaluation results using real-life discussion / debate posts demonstrate the effectiveness of the proposed techniques . Online discussion / debate forums allow people with common interests to freely ask and answer questions , to express their views and opinions on any subject matter , and to discuss issues of common interest . A large part of such discussions is about social , political , and religious issues . On such issues , there are often heated discussions / debates , i.e. , people agree or disagree and argue with one another . 
Such ideological discussions on a myriad of social and political issues have practical implications in the fields of communication and political science as they give social scientists an opportunity to study real-life discussions / debates of almost any issue and analyze participant behaviors in a large scale . In this paper , we present such an application , which aims to perform fine-grained analysis of user-interactions in online discussions . There have been some related works that focus on discovering the general topics and ideological perspectives in online discussions ( Ahmed and Xing , 2010 ) , placing users in support / oppose camps ( Agarwal et al . , 2003 ) , and classifying user stances ( Somasundaran and Wiebe , 2009 ) . However , these works are at a rather coarser level and have not considered more fine-grained characteristics of debates / discussions where users interact with each other by quoting / replying each other to express agreement or disagreement and argue with one another . In this work , we want to mine the following information : 1 . The nature of interaction of each pair of users or participants who have engaged in the discussion of certain issues , i.e. , whether the two persons mostly agree or disagree with each other in their interactions . 2 . What language expressions are often used to express agreement ( e.g. , \" I agree \" and \" you 're right \" ) and disagreement ( e.g. , \" I disagree \" and \" you speak nonsense \" ) . We note that although agreement and disagreement expressions are distinct from traditional sentiment expressions ( words and phrases ) such as good , excellent , bad , and horrible , agreement and disagreement clearly express a kind of sentiment as well . They are usually emitted during interactive exchanges of arguments in ideological discussions . This idea prompted us to introduce the concept of ADsentiment . We define the polarity of agreement expressions as positive and the polarity of disagreement expressions as negative . We refer agreement and disagreement expressions as ADsentiment expressions , or AD-expressions for short . AD-expressions are crucial for the analysis of interactive discussions and debates just as sentiment expressions are instrumental in sentiment analysis ( Liu , 2012 ) . We thus regard this work as an extension to traditional sentiment analysis ( Pang and Lee , 2008 ; Liu , 2012 ) . In our earlier work ( Mukherjee and Liu , 2012a ) , we proposed three topic models to mine contention points , which also extract ADexpressions . In this paper , we further improve the work by coupling an information retrieval method to rank good candidate phrases with topic modeling in order to discover more accurate ADexpressions . Furthermore , we apply the resulting AD-expressions to the new task of classifying the arguing or interaction nature of each pair of users . Using discovered AD-expressions for classification has an important advantage over traditional classification because they are domain independent . We employ a semi-supervised generative model called JTE-P to jointly model AD-expressions , pair interactions , and discussion topics simultaneously in a single framework . With such complex interactions mined , we can produce many useful summaries of discussions . For example , we can discover the most contentious pairs for each topic and ideological camps of participants , i.e. , people who often agree with each other are likely to belong to the same camp . 
The proposed framework also facilitates tracking users ' ideology shifts and the resulting arguing nature . The proposed methods have been evaluated both qualitatively and quantitatively using a large number of real-life discussion / debate posts from four domains . Experimental results show that the proposed model is highly effective in performing its tasks and outperforms several baselines . This paper studied the problem of modeling user pair interactions in online discussions with the purpose of discovering the interaction or arguing nature of each author pair and various ADexpressions emitted in debates . A novel technique was also proposed to rank n-gram phrases where relevance based ranking was used in conjunction with a semi-supervised generative model . This method enables us to find better ADexpressions . Experiments using real-life online debate data showed the effectiveness of the model . In our future work , we intend to extend the model to account for stances , and issue specific interactions which would pave the way for user profiling and behavioral modeling .", "challenge": "Existing methods that analyze discussions coarsely leave user interactions with quotes or replies expressing agreements, and what lexicon they use in such cases unstudied.", "approach": "They propose to couple a retrieval method with topic modeling to accurately discover ad-expressions to perform a fine-grained analysis of user pair interactions.", "outcome": "They applied the proposed method to real-life discussion posts from four domains and quantitative and qualitative evaluation show its effectiveness."} +{"id": "N09-1041", "document": "We present an exploration of generative probabilistic models for multi-document summarization . Beginning with a simple word frequency based model ( Nenkova and Vanderwende , 2005 ) , we construct a sequence of models each injecting more structure into the representation of document set content and exhibiting ROUGE gains along the way . Our final model , HIERSUM , utilizes a hierarchical LDA-style model ( Blei et al . , 2004 ) to represent content specificity as a hierarchy of topic vocabulary distributions . At the task of producing generic DUC-style summaries , HIERSUM yields state-of-the-art ROUGE performance and in pairwise user evaluation strongly outperforms Toutanova et al . ( 2007 ) 's state-of-the-art discriminative system . We also explore HIERSUM 's capacity to produce multiple ' topical summaries ' in order to facilitate content discovery and navigation . Over the past several years , there has been much interest in the task of multi-document summarization . In the common Document Understanding Conference ( DUC ) formulation of the task , a system takes as input a document set as well as a short description of desired summary focus and outputs a word length limited summary . 1 To avoid the problem of generating cogent sentences , many systems opt for an extractive approach , selecting sentences from the document set which best reflect its core content . 2 There are several approaches to modeling document content : simple word frequency-based methods ( Luhn , 1958 ; Nenkova and Vanderwende , 2005 ) , graph-based approaches ( Radev , 2004 ; Wan and Yang , 2006 ) , as well as more linguistically motivated techniques ( Mckeown et al . , 1999 ; Leskovec et al . , 2005 ; Harabagiu et al . , 2007 ) . 
Another strand of work ( Barzilay and Lee , 2004 ; Daum\u00e9 III and Marcu , 2006 ; Eisenstein and Barzilay , 2008 ) , has explored the use of structured probabilistic topic models to represent document content . However , little has been done to directly compare the benefit of complex content models to simpler surface ones for generic multi-document summarization . In this work we examine a series of content models for multi-document summarization and argue that LDA-style probabilistic topic models ( Blei et al . , 2003 ) can offer state-of-the-art summarization quality as measured by automatic metrics ( see section 5.1 ) and manual user evaluation ( see section 5.2 ) . We also contend that they provide convenient building blocks for adding more structure to a summarization model . In particular , we utilize a variation of the hierarchical LDA topic model ( Blei et al . , 2004 ) to discover multiple specific ' subtopics ' within a document set . The resulting model , HIERSUM ( see section 3.4 ) , can produce general summaries as well as summaries for any of the learned sub-topics . In this paper we have presented an exploration of content models for multi-document summarization and demonstrated that the use of structured topic models can benefit summarization quality as measured by automatic and manual metrics .", "challenge": "There have been only a few works that investigate how complex content models perform on multi-document summarization.", "approach": "They benchmark a series of existing generative probabilistic models and also propose an LDA-style which can model multiple \"sub-topics\".", "outcome": "LDA-style models can perform well on both automatic and manual evaluations and the proposed model can produce generic and also sub-topic-focused summaries."} +{"id": "2022.acl-long.480", "document": "Simultaneous Machine Translation is the task of incrementally translating an input sentence before it is fully available . Currently , simultaneous translation is carried out by translating each sentence independently of the previously translated text . More generally , Streaming MT can be understood as an extension of Simultaneous MT to the incremental translation of a continuous input text stream . In this work , a state-of-the-art simultaneous sentencelevel MT system is extended to the streaming setup by leveraging the streaming history . Extensive empirical results are reported on IWSLT Translation Tasks , showing that leveraging the streaming history leads to significant quality gains . In particular , the proposed system proves to compare favorably to the best performing systems . Simultaneous Machine Translation ( MT ) is the task of incrementally translating an input sentence before it is fully available . Indeed , simultaneous MT can be naturally understood in the scenario of translating a text stream as a result of an upstream Automatic Speech Recognition ( ASR ) process . This setup defines a simultaneous Speech Translation ( ST ) scenario that is gaining momentum due to the vast number of industry applications that could be exploited based on this technology , from person-toperson communication to subtitling of audiovisual content , just to mention two main applications . These real-world streaming applications motivate us to move from simultaneous to streaming MT , understanding streaming MT as the task of simultaneously translating a potentially unbounded and unsegmented text stream . Streaming MT poses two main additional challenges over simultaneous MT . 
First , the MT system must be able to leverage the streaming history beyond the sentence level both at training and inference time . Second , the system must work under latency constraints over the entire stream . With regard to exploiting streaming history , or more generally sentence context , it is worth mentioning the significant amount of previous work in offline MT at sentence level ( Tiedemann and Scherrer , 2017 ; Agrawal et al . , 2018 ) , document level ( Scherrer et al . , 2019 ; Ma et al . , 2020a ; Zheng et al . , 2020b ; Li et al . , 2020 ; Maruf et al . , 2021 ; Zhang et al . , 2021 ) , and in related areas such as language modelling ( Dai et al . , 2019 ) that has proved to lead to quality gains . Also , as reported in ( Li et al . , 2020 ) , more robust ST systems can be trained by taking advantage of the context across sentence boundaries using a data augmentation strategy similar to the prefix training methods proposed in ( Niehues et al . , 2018 ; Ma et al . , 2019 ) . This data augmentation strategy was suspected to boost re-translation performance when compared to conventional simultaneous MT systems ( Arivazhagan et al . , 2020 ) . Nonetheless , with the notable exception of ( Schneider and Waibel , 2020 ) , sentences in simultaneous MT are still translated independently from each other ignoring the streaming history . ( Schneider and Waibel , 2020 ) proposed an end-to-end streaming MT model with a Transformer architecture based on an Adaptive Computation Time method with a monotonic encoderdecoder attention . This model successfully uses the streaming history and a relative attention mechanism inspired by Transformer-XL ( Dai et al . , 2019 ) . Indeed , this is an MT model that sequentially translates the input stream without the need for a segmentation model . However , it is hard to interpret the latency of their streaming MT model because the authors observe that the current sentence-level latency measures , Average Proportion ( AP ) ( Cho and Esipova , 2016 ) , Average Lagging ( AL ) ( Ma et al . , 2019 ) and Differentiable Average Lagging ( DAL ) ( Cherry and Foster , 2019 ) do not perform well on a streaming setup . This fact is closely related to the second challenge mentioned above , which is that the system must work under latency constraints over the entire stream . Indeed , current sentence-level latency measures do not allow us to appropriately gauge the latency of streaming MT systems . To this purpose , ( Iranzo-S\u00e1nchez et al . , 2021 ) recently proposed a stream-level adaptation of the sentence-level latency measures based on the conventional re-segmentation approach applied to the ST output in order to evaluate translation quality ( Matusov et al . , 2005 ) . In this work , the simultaneous MT model based on a unidirectional encoder-decoder and training along multiple wait-k paths proposed by ( Elbayad et al . , 2020a ) is evolved into a streamingready simultaneous MT model . To achieve this , model training is performed following a sentenceboundary sliding-window strategy over the parallel stream that exploits the idea of prefix training , while inference is carried out in a single forward pass on the source stream that is segmented by a Direct Segmentation ( DS ) model ( Iranzo-S\u00e1nchez et al . , 2020 ) . In addition , a refinement of the unidirectional encoder-decoder that takes advantage of longer context for encoding the initial positions of the streaming MT process is proposed . 
This streaming MT system is thoroughly assessed on IWSLT translation tasks to show how leveraging the streaming history provides systematic and significant BLEU improvements over the baseline , while reported stream-adapted latency measures are fully consistent and interpretable . Finally , our system favourably compares in terms of translation quality and latency to the latest state-of-the-art simultaneous MT systems ( Ansari et al . , 2020 ) . This paper is organized as follows . Next section provides a formal framework for streaming MT to accommodate streaming history in simultaneous MT . Section 3 presents the streaming experimental setup whose results are reported and discussed in Section 4 . Finally , conclusions and future work are drawn in Section 5 . In this work , a formalization of streaming MT as a generalization of simultaneous MT has been proposed in order to define a theoretical framework in which our two contributions have been made . On the one hand , we successfully leverage streaming history across sentence boundaries for a simultaneous MT system based on multiple wait-k paths that allows our system to greatly improve the results of the sentence-level baseline . On the other hand , our PBE is able to take into account longer context information than its unidirectional counterpart , while keeping the same training efficiency . Our proposed MT system has been evaluated under a realistic streaming setting being able to reach similar translation quality than a state-of-theart segmentation-free streaming MT system at a fraction of its latency . Additionally , our system has been shown to be competitive when compared with state-of-the-art simultaneous MT systems optimized for sentence-level translation , obtaining excellent results using a single model across a wide range of latency levels , thanks to its flexible inference policy . In terms of future work , additional training and inference procedures that take advantage of the streaming history in streaming MT are still open for research . One important avenue of improvement is to devise more robust training methods , so that simultaneous models can perform as well as their offline counterparts when carrying out inference at higher latencies . The segmentation model , though proved useful in a streaming setup , adds complexity and can greatly affect translation quality . Thus , the development of segmentation-free streaming MT models is another interesting research topic .", "challenge": "The streaming machine translation task posts additional challenges over simultaneous counterparts which are the leverage of the streaming history beyond sentence level and latency constraints.", "approach": "They extend a unidirectional encoder-decoder-based simultaneous sentence-level system to the streaming setup by leveraging the streaming history across sentence boundaries.", "outcome": "The proposed model outperforms the baseline model and is comparable to the state-of-the-art simultaneous systems in quality and latency on IWSLT translation tasks."} +{"id": "W96-0213", "document": "This paper presents a statistical model which trains from a corpus annotated with Part-Of-Speech tags and assigns them to previously unseen text with state-of-the-art accuracy(96.6 % ) . The model can be classified as a Maximum Entropy model and simultaneously uses many contextual \" features \" to predict the POS tag . 
Furthermore , this paper demonstrates the use of specialized features to model difficult tagging decisions , discusses the corpus consistency problems discovered during the implementation of these features , and proposes a training strategy that mitigates these problems . Many natural language tasks require the accurate assignment of Part-Of-Speech ( POS ) tags to previously unseen text . Due to the availability of large corpora which have been manually annotated with POS information , many taggers use annotated text to \" learn \" either probability distributions or rules and use them to automatically assign POS tags to unseen text . The experiments in this paper were conducted on the Wall Street Journal corpus from the Penn Treebank project ( Marcus et al . , 1994 ) , although the model can trai~n from any large corpus annotated with POS tags . Since most realistic natural language applications must process words that were never seen before in training data , all experiments in this paper are conducted on test data that include unknown words . Several recent papers ( Brill , 1994 , Magerman , 1995 ) have reported 96.5 % tagging accuracy on the Wall St. Journal corpus . The experiments in this paper test the hypothesis that better use of context will improve the accuracy . A Maximum Entropy model is well-suited for such experiments since it corn-bines diverse forms of contextual information in a principled manner , and does not impose any distributional assumptions on the training data . Previous uses of this model include language modeling ( Lau et al . , 1993 ) , machine translation ( Berger et al . , 1996 ) , prepositional phrase attachment ( Ratnaparkhi et al . , 1994 ) , and word morphology ( Della Pietra et al . , 1995 ) . This paper briefly describes the maximum entropy and maximum likelihood properties of the model , features used for POS tagging , and the experiments on the Penn Treebank Wall St. Journal corpus . It then discusses the consistency problems discovered during an attempt to use specialized features on the word context . Lastly , the results in this paper are compared to those from previous work on POS tagging . The Maximum Entropy model is an extremely flexible technique for linguistic modelling , since it can use a virtually unrestricted and rich feature set in the framework of a probability model . The implementation in this paper is a state-of-the-art POS tagger , as evidenced by the 96.6 % accuracy on the unseen Test set , shown in Table 11 . The model with specialized features does not perform much better than the baseline model , and further discovery or refinement of word-based features is difficult given the inconsistencies in the training data . A model trained and tested on data from a single annotator performs at .5 % higher accuracy than the baseline model and should produce more consistent input for applications that require tagged text .", "challenge": "Because natural language tasks require the part-of-speech tags to previously unseen texts, taggers need to be tested on test data which contain such unseen words.", "approach": "They present a Maximum Entropy model that simultaneously uses contextual features trained on annotated part-of-speech tags.", "outcome": "They show the proposed model achieves the state-of-the-art on unseen corpus and the model with specialized features does not perform much better than the baseline."} +{"id": "2021.naacl-main.470", "document": "Neural-based summarization models suffer from the length limitation of text encoder . 
Long documents have to been truncated before they are sent to the model , which results in huge loss of summary-relevant contents . To address this issue , we propose the sliding selector network with dynamic memory for extractive summarization of long-form documents , which employs a sliding window to extract summary sentences segment by segment . Moreover , we adopt memory mechanism to preserve and update the history information dynamically , allowing the semantic flow across different windows . Experimental results on two large-scale datasets that consist of scientific papers demonstrate that our model substantially outperforms previous state-of-the-art models . Besides , we perform qualitative and quantitative investigations on how our model works and where the performance gain comes from . 1 Text summarization is an important task of natural language processing which aims to distil salient contents from a textual document . Existing summarization models can be roughly classified into two categories , which are abstractive and extractive . Abstractive summarization usually adopts natural language generation technology to produce a wordby-word summary . In general , these approaches are flexible but may yield disfluent summaries ( Liu and Lapata , 2019a ) . By comparison , extractive approaches aim to select a subset of the sentences in the source document , thereby enjoying better fluency and efficiency ( Cao et al . , 2017 ) . Although many summarization approaches have demonstrated their success on relatively short documents , such as news articles , they usually fail Paragraph 1 : Medical tourism is illustrated as occurrence in which individuals travel abroad to receive healthcare services . It is a multibillion dollar industry and countries like India , Thailand , Israel , Singapore , \u2026 Paragraph 2 : The prime driving factors in medical tourism are increased medical costs , increased insurance premiums , increasing number of uninsured or partially insured individuals in developed countries , \u2026 \u2026 \u2026 Paragraph 5 : It is generally presumed in marketing that products with similar characteristics will be equally preferred by the consumers , however , attributes , which make the product similar to other products , will not \u2026 . to achieve desired performance when directly applied in long-form documents , such as scientific papers . This inferior performance is partly due to the truncation operation , which inevitably leads to information loss , especially for extractive models because parts of gold sentences would be inaccessible . In addition , the accurate modeling of long texts remains a challenge ( Frermann and Klementiev , 2019 ) . A practical solution for this problem is to use a sliding window to process documents separately . This approach is used in other NLP tasks , such as machine reading comprehension ( Wang et al . , 2019b ) . However , such a paradigm is not suitable for summarization task because the concatenation of summaries that are independently extracted from local contexts is usually inconsistent with the gold summary of the entire document . Figure 1 shows an example to illustrate this problem . The core topic of the source document is \" medical tourism , \" which is discussed in Paragraphs 1 and 2 . How-ever , the 5-th paragraph is mainly about \" consumer and product . \" As a consequence , the paragraphby-paragraph extraction approach might produce a both repetitive and noisy summary . 
Under this circumstance , the supervised signals will have a negative effect on model behaviors because understanding why Paragraph 5 should output an empty result without information conveying from previous texts is confused for the model . In this paper , we propose a novel extractive summarization model for long-form documents . We split the input document into multiple windows and encode them with a sliding encoder sequentially . During this process , we introduce a memory to preserve salient information learned from previous windows , which is used to complete and enrich local texts . Intuitively , our model has the following advantages : 1 ) In each window , the text encoder processes a relatively short segment , thereby yielding more accurate representations . 2 ) The local text representations can capture beyond-window contextual information via the memory module . 3 ) The previous selection results are also parameterized in the memory block , allowing the collaboration among summary sentences . To sum up , our contributions are threefold . ( 1 ) We propose a novel extractive summarization model that can summarize documents of arbitrary length without truncation loss . Also , it employs the memory mechanism to address context fragmentation . To the best of our knowledge , we are the first to propose applying memory networks into extractive text summarization task . ( 2 ) The proposed framework ( i.e. , a sliding encoder combined with dynamic memory ) provides a general solution for summarizing long documents and can be easily extended to other abstractive and extractive summarization models . ( 3 ) Our model achieves the state-of-the-art results on two widely used datasets for long document summarization . Moreover , we conduct extensive analysis to understand how our model works and where the performance gain comes from . In this study , we propose a novel extractive summarization that can summarize long-form documents without content loss . We conduct extensive experiments on two well-studied datasets that consist of scientific papers . Experimental results demonstrate that our model outperforms previous stateof-the-art models . In the future , we will extend our framework ( i.e. , a sliding encoder combined with long-range memory modeling ) to abstractive summarization models .", "challenge": "Neural-based summarization models truncate documents because of the length limitation leading to a loss of summary-relevant contents and the sliding window approach is not suitable.", "approach": "They propose to employ a sliding window to extract summaries segment by segment with a memory mechanism for preserving and updating the history information dynamically.", "outcome": "The proposed model achieves state-of-the-art on two large-scale long document summarization datasets based on scientific papers."} +{"id": "E14-1023", "document": "In this paper , we present work on extracting social networks from unstructured text . We introduce novel features derived from semantic annotations based on FrameNet . We also introduce novel semantic tree kernels that help us improve the performance of the best reported system on social event detection and classification by a statistically significant margin . We show results for combining the models for the two aforementioned subtasks into the overall task of social network extraction . We show that a combination of features from all three levels of abstractions ( lexical , syntactic and semantic ) are required to achieve the best performing system . 
Social network extraction from text has recently been gaining a considerable amount of attention ( Agarwal and Rambow , 2010 ; Elson et al . , 2010 ; Agarwal et al . , 2013a ; Agarwal et al . , 2013b ; He et al . , 2013 ) . One of the reason for this attention , we believe , is that being able to extract social networks from unstructured text may provide a powerful new tool for historians , political scientists , scholars of literature , and journalists to analyze large collections of texts around entities and their interactions . The tool would allow researchers to quickly extract networks and assess their size , nature , and cohesiveness , a task that would otherwise be impossible with corpora numbering millions of documents . It would also make it possible to make falsifiable claims about these networks , bringing the experimental method to disciplines like history , where it is still relatively rare . In our previous work ( Agarwal et al . , 2010 ) , we proposed a definition of a network based on interactions : nodes are entities and links are social events . We defined two broad types of links : one-directional links ( one person thinking about or talking about another person ) and bi-directional links ( two people having a conversation , a meeting , etc . ) . For example , in the following sentence , we would add two links to the network : a one-directional link between Toujan Faisal and the committee , triggered by the word said ( because Toujan is talking about the committee ) and a bi-directional link between the same entities triggered by the word informed ( a mutual interaction ) . ( 1 ) [ Toujan Faisal ] , 54 , said [ she ] was informed of the refusal by an [ Interior Ministry committee ] overseeing election preparations . In this paper , we extract networks using the aforementioned definition of social networks . We introduce and add tree kernel representations and features derived from frame-semantic parses to our previously proposed system . Our results show that hand-crafted frame semantic features , which are linguistically motivated , add less value to the overall performance in comparison with the frame-semantic tree kernels . We believe this is due to the fact that hand-crafted features require frame parses to be highly accurate and complete . In contrast , tree kernels are able to find and leverage less strict patterns without requiring the semantic parse to be entirely accurate or complete . Apart from introducing semantic features and tree structures , we evaluate on the task of social network extraction , which is a combination of two sub-tasks : social event detection and social event classification . In our previous work ( Agarwal and Rambow , 2010 ) , we presented results for the two sub-tasks , but no evaluation was presented for the task of social network extraction . We experiment with two different designs of combining models for the two sub-tasks : 1 ) One-versus-All and 2 ) Hierarchical . We find that the hierarchical design outperforms the more commonly used Oneversus-All by a statistically significant margin . Following are the contributions of this paper : 1 . We design and propose novel frame semantic features and tree-based representations and show that tree kernels are well suited to work with noisy semantic parses . 2 . 
We show that in order to achieve the best performing system , we need to include features and tree structures from all levels of abstractions , lexical , syntactic , and semantic , and that the convolution kernel framework is well-suited for creating such a combination . 3 . We combine the previously proposed subtasks ( social event detection and classification ) into a single task , social network extraction , and show that combining the models using a hierarchical design is significantly better than the one-versus-all design . The rest of the paper is structured as follows : In Section 2 , we give a precise definition of the task and describe the data . In Section 3 , we give a brief overview of frame semantics and motivate the need to use frame semantics for the tasks addressed in this paper . In Section 4 , we present semantic features and tree kernel representations designed for the tasks . In Section 5 , we briefly review tree kernels and support vector machines ( SVM ) . In Section 6 we present experiments and discuss the results . In Section 7 we discuss related work . We conclude and give future directions of work in Section 8 . This work has only scratched the surface of possibilities for using frame semantic features and tree structures for the task of social event extraction . We have shown that tree kernels are well suited to work with possibly inaccurate semantic parses in contrast to hand-crafted features that require the semantic parses to be completely accurate . We have also extended our previous work by designing and evaluating a full system for social network extraction . A more natural data representation for semantic parses is a graph structure . We are actively exploring the design of semantic graph structures that may be brought to bear with the use of graph kernels ( Vishwanathan et al . , 2010 ) .", "challenge": "Social network extraction is gaining attention because it would be a powerful tool to analyze large collections of texts around entities and their interactions.", "approach": "They propose to use tree kernel representations and features derived from frame-semantic parses for the task of social event extraction.", "outcome": "They show that the use of semantic tree kernels is suited to noisy parses and improves the best system for social event detection and classification."} +{"id": "D19-1240", "document": "User-generated textual data is rich in content and has been used in many user behavioral modeling tasks . However , it could also leak user private-attribute information that they may not want to disclose such as age and location . User 's privacy concerns mandate data publishers to protect privacy . One effective way is to anonymize the textual data . In this paper , we study the problem of textual data anonymization and propose a novel Reinforcement Learning-based Text Anonymizor , RLTA , which addresses the problem of private-attribute leakage while preserving the utility of textual data . Our approach first extracts a latent representation of the original text w.r.t . a given task , then leverages deep reinforcement learning to automatically learn an optimal strategy for manipulating text representations w.r.t . the received privacy and utility feedback . Experiments show the effectiveness of this approach in terms of preserving both privacy and utility . Social media users generate a tremendous amount of data such as profile information , network connections and online reviews and posts . 
Online vendors use this data to understand users preferences and further predict their future needs . However , user-generated data is rich in content and malicious attackers can infer users ' sensitive information . AOL search data leak in 2006 is an example of privacy breaches which results in users re-identification according to the published AOL search logs and queries ( Pass et al . , 2006 ) . Therefore , these privacy concerns mandate that data be anonymized before publishing . Recent research has shown that textual data alone may contain sufficient information about users ' private-attributes that they do not want to disclose such as age , gender , location , political views and sexual orienta-tion ( Mukherjee and Liu , 2010 ; Volkova et al . , 2015 ) . Little attention has been paid to protect users textual information ( Li et al . , 2018 ; Zhang et al . , 2018 ; Anandan et al . , 2012 ; Saygin et al . , 2006 ) . Anonymizing textual information comes at the cost of losing utility of data for future applications . Some existing work shows the degraded quality of textual information ( Anandan et al . , 2012 ; Zhang et al . , 2018 ; Saygin et al . , 2006 ) . Another related problem setting is when the latent representation of the user generated texts is shared for different tasks . It is very common to use recurrent neural networks to create a representation of user generated text to use for different machine learning tasks . Hitaj el al . show text representations can leak users ' private information such as location ( Hitaj et al . , 2017 ) . This work aims to anonymize users ' textual information against private-attribute inference attacks . Adversarial learning is the state-of-the-art approach for creating a privacy preserving text embedding ( Li et al . , 2018 ; Coavoux et al . , 2018 ) . In these methods , a model is trained to create a text embedding , but we can not control the privacyutility balance . Recent success of reinforcement learning ( RL ) ( Paulus et al . , 2017 ; Sun and Zhang , 2018 ) shows a feasible alternative : by leveraging reinforcement learning , we can include feedback of attackers and utility in a reward function that allows for the control of the privacy-utility balance . Furthermore , an RL agent can perturb parts of an embedded text for preserving both utility and privacy , instead of retraining an embedding as in adversarial learning . Therefore , we propose a novel Reinforcement Learning-based Text Anonymizer , namely , RLTA , composed of two main components : 1 ) an attention based task-aware text representation learner to extract latent embedding representation of the original text 's content w.r.t . a given task , and 2 ) a deep reinforcement learning based privacy and utility preserver to convert the problem of text anonymization to a one-player game in which the agent 's goal is to learn the optimal strategy for text embedding manipulation to satisfy both privacy and utility . The Deep Q-Learning algorithm is then used to train the agent capable of changing the text embedding w.r.t . the received feedback from the privacy and utility subcomponents . We investigate the following challenges : 1 ) How could we extract the textual embedding w.r.t . a given task ? 2 ) How could we perturb the extracted text embedding to ensure that user privateattribute information is obscured ? and 3 ) How could we preserve the utility of text embedding during anonymization ? 
Our main contributions are : ( 1 ) we study the problem of text anonymization by learning a reinforced task-aware text anonymizer , ( 2 ) we corporate a data-utility taskaware checker to ensure that the utility of textual embeddings is preserved w.r.t . a given task , and ( 3 ) we conduct experiments on real-world data to demonstrate the effectiveness of RLTA in an important natural language processing task . In this paper , we propose a deep reinforcement learning based text anonymization , RLTA , which creates a text embedding such that does not leak user 's private-attribute information while preserving its utility w.r.t . a given task . RLTA has two main components : ( 1 ) an attention based taskaware text representation learner , and ( 2 ) a deep RL based privacy and utility preserver . Our results illustrate the effectiveness of RLTA in preserving privacy and utility . One future direction is to generate privacy preserving text rather than embeddings . We also adopt deep Q-learning to train the agent . A future direction is to apply different RL algorithms and investigate how it impacts results . It would be also interesting to adopt RLTA for other types of data .", "challenge": "User-generated texts in social media can leak sensitive private information about users however methods for protection have not been investigated well.", "approach": "They propose a reinforcement leaning-based text analyzer which prevents private-attribute leakage while keeping the utility by extracting representations and manipulating with privacy and utility feedback.", "outcome": "The proposed method effectively preserves privacy and utility in textual embeddings in experiments with real-world data."} +{"id": "E17-1004", "document": "The freedom of the Deep Web offers a safe place where people can express themselves anonymously but they also can conduct illegal activities . In this paper , we present and make publicly available 1 a new dataset for Darknet active domains , which we call it \" Darknet Usage Text Addresses \" ( DUTA ) . We built DUTA by sampling the Tor network during two months and manually labeled each address into 26 classes . Using DUTA , we conducted a comparison between two well-known text representation techniques crossed by three different supervised classifiers to categorize the Tor hidden services . We also fixed the pipeline elements and identified the aspects that have a critical influence on the classification results . We found that the combination of TF-IDF words representation with Logistic Regression classifier achieves 96.6 % of 10 folds cross-validation accuracy and a macro F1 score of 93.7 % when classifying a subset of illegal activities from DUTA . The good performance of the classifier might support potential tools to help the authorities in the detection of these activities . If we think about the web as an ocean of data , the Surface Web is no more than the slight waves that float on the top . While in the depth , there is a lot of sunken information that is not reached by the traditional search engines . The web can be divided into Surface Web and Deep Web . The Surface Web is the portion of the web that can be crawled and 1 The dataset is available upon request to the first author ( email ) . indexed by the standard search engines , such as Google or Bing . However , despite their existence , there is still an enormous part of the web remained without indexing due to its vast size and the lack of hyperlinks , i.e. not referenced by the other web pages . 
This part , that can not be found using a search engine , is known as Deep Web ( Noor et al . , 2011 ; Boswell , 2016 ) . Additionally , the content might be locked and requires human interaction to access e.g. to solve a CAPTCHA or to enter a log-in credential to access . This type of web pages is referred to as \" database-driven \" websites . Moreover , the traditional search engines do not examine the underneath layers of the web , and consequently , do not reach the Deep Web . The Darknet , which is also known as Dark Web , is a subset of the Deep Web . It is not only not indexed and isolated , but also requires a specific software or a dedicated proxy server to access it . The Darknet works over a virtual sub-network of the World Wide Web ( WWW ) that provides an additional layer of anonymity for the network users . The most popular ones are \" The Onion Router\"2 also known as Tor network , \" Invisible Internet Project \" I2P3 , and Freenet4 . The community of Tor refers to Darknet websites as \" Hidden Services \" ( HS ) which can be accessed via a special browser called Tor Browser5 . A study by Bergman et al . ( 2001 ) has stated astonishing statistics about the Deep Web . For example , only on Deep Web there are more than 550 billion individual documents comparing to only 1 billion on Surface Web . Furthermore , in the study of Rudesill et al . ( 2015 ) they emphasized on the immensity of the Deep Web which was estimated to be 400 to 500 times wider than the Surface Web . The concepts of Darknet and Deep Net have ex-isted since the establishment of World Wide Web ( WWW ) , but what make it very popular in the recent years is when the FBI had arrested Dread Pirate Roberts , the owner of Silk Road black market , in October 2013 . The FBI has estimated the sales on Silk Road to be 1.2 Billion dollars by July 2013 . The trading network covered among 150,000 anonymous customers and approximately 4,000 vendors ( Rudesill et al . , 2015 ) . The cryptocurrency ( Nakamoto , 2008 ) is a hot topic in the field of Darknet since it anonymizes the financial transactions and hides the trading parties identities ( Ron and Shamir , 2014 ) . The Darknet is often associated with illegal activities . In a study carried out by Intelliagg group ( 2015 ) over 1 K samples of hidden services , they claimed that 68 % of Darknet contents would be illegal . Moore et at . ( 2016 ) showed , after analyzing 5 K onion domains , that the most common usages for Tor HS are criminal and illegal activities , such as drugs , weapons and all kind of pornography . It is worth to mention about dramatic increase in the proliferation of Darknet domains which doubled their size from 30 K to 60 K between August 2015 and 2016 ( Figure 1 ) . However , the publicly reachable domains are no more than 6 K to 7 K due to the ambiguity nature of the Darknet ( Ciancaglini et al . , 2016 ) . Motivated by the critical buried contents on the Darknet and its high abuse , we focused our research in designing and building a system that classifies the illegitimate practices on Darknet . In this paper , we present the first publicly available dataset called \" Darknet Usage Text Addresses \" ( DUTA ) that is extracted from the Tor HS Darknet . DUTA contains 26 categories that cover all the legal and the illegal activities monitored on Darknet during our sampling period . Our objective is to create a precise categorization of the Darknet via classifying the textual content of the HS . 
In order to achieve our target , we designed and compared different combinations of some of the most wellknown text classification techniques by identifying the key stages that have a high influence on the method performance . We set a baseline methodology by fixing the elements of text classification pipeline which allows the scientific community to compare their future research with this baseline under the defined pipeline . The fixed methodology we propose might represent a significant contribution into a tool for the authorities who monitor the Darknet abuse . The rest of the paper is organized as follows : Section 2 presents the related work . Next , Section 3 explains the proposed dataset DUTA and its characteristics . After that , Section 4 describes the set of the designed classification pipelines . Then , in Section 5 we discuss the experiments performed and the results . In Section 6 we describe the technical implementation details and how we employed the successful classifier in an application . Finally , in Section 7 we present our conclusions with a pointing to our future work . In this paper , we have categorized illegal activities of Tor HS by using two text representation methods , TF-IDF and BOW , combined with three classifiers , SVM , LR , and NB . To support the classification pipelines , we built the dataset DUTA , We found that the combination of the TF-IDF text representation with the Logistic Regression classifier can achieve 96.6 % accuracy over 10 folds of cross-validation and 93.7 % macro F1 score . We noticed that our classifier suffers from overfitting due to the difficulty of reaching more samples of onion hidden services for some classes like counterfeiting personal identification or illegal drugs . However , our results are encouraging , and yet there is still a wide margin for future improvements . We are looking forward to enlarging the dataset by digging deeper into the Darknet by adding more HS sources , even from I2P and Freenet , and exploring ports other than the HTTP port . Moreover , we plan to get the benefit of the HTML tags and the hyperlinks by weighting some tags or parsing the hyperlinks text . Also , during the manual labeling of the dataset , we realized that a wide portion of the hidden services advertise their illegal products graphically , i.e. the service owner uses the images instead of the text . Therefore , our aim is to build an image classifier to work in parallel with the text classification . The high accuracy we have obtained in this work might represent an opportunity to insert our research into a tool that supports the authorities in monitoring the Darknet .", "challenge": "Darknet which is a sub-network of the World Wide Web with an additional layer of anonymity, is often associated with illegal activities.", "approach": "They present a dataset of samples of the Tor network over two months manually labelled into 26 classes and evaluate existing text classification methods.", "outcome": "They find that the Logistic Regression classifier with TF-IDF representations achieves 96.6% accuracy however it also suffers from overtting for some classes."} +{"id": "P17-1143", "document": "Cybersecurity risks and malware threats are becoming increasingly dangerous and common . Despite the severity of the problem , there has been few NLP efforts focused on tackling cybersecurity . In this paper , we discuss the construction of a new database for annotated malware texts . 
An annotation framework is introduced based around the MAEC vocabulary for defining malware characteristics , along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences . We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts . In 2010 , the malware known as Stuxnet physically damaged centrifuges in Iranian nuclear facilities ( Langner , 2011 ) . More recently in 2016 , a botnet known as Mirai used infected Internet of Things ( IoT ) devices to conduct large-scale Distributed Denial of Service ( DDoS ) attacks and disabled Internet access for millions of users in the US West Coast ( US-CERT , 2016 ) . These are only two cases in a long list ranging from ransomeware on personal laptops ( Andronio et al . , 2015 ) to taking over control of moving cars ( Checkoway et al . , 2011 ) . Attacks such as these are likely to become increasingly frequent and dangerous as more devices and facilities become connected and digitized . Recently , cybersecurity defense has also been recognized as one of the \" problem areas likely to be important both for advancing AI and for its long-run impact on society \" ( Sutskever et al . , 2016 ) . In particular , we feel that natural language processing ( NLP ) has the potential for substantial contribution in cybersecurity and that this is a critical research area given the urgency and risks involved . There exists a large repository of malwarerelated texts online , such as detailed malware reports by various cybersecurity agencies such as Symantec ( DiMaggio , 2015 ) and Cylance ( Gross , 2016 ) and in various blog posts . Cybersecurity researchers often consume such texts in the process of data collection . However , the sheer volume and diversity of these texts make it difficult for researchers to quickly obtain useful information . A potential application of NLP can be to quickly highlight critical information from these texts , such as the specific actions taken by a certain malware . This can help researchers quickly understand the capabilities of a specific malware and search in other texts for malware with similar capabilities . An immediate problem preventing application of NLP techniques to malware texts is that such texts are mostly unannotated . This severely limits their use in supervised learning techniques . In light of that , we introduce a database of annotated malware reports for facilitating future NLP work in cybersecurity . To the best of our knowledge , this is the first database consisting of annotated malware reports . It is intended for public release , where we hope to inspire contributions from other research groups and individuals . The main contributions of this paper are : \u2022 We initiate a framework for annotating malware reports and annotate 39 Advanced Persistent Threat ( APT ) reports ( containing 6,819 sentences ) with attribute labels from the Malware Attribute Enumeration and Characterization ( MAEC ) vocabulary ( Kirillov et al . , 2010 ) . \u2022 We propose the following tasks , construct models for tackling them , and discuss the challenges : \u2022 Classify if a sentence is useful for inferring malware actions and capabilities , \u2022 Predict token , relation and attribute labels for a given malware-related text , as defined by the earlier framework , and \u2022 Predict a malware 's signatures based only on text describing the malware . 
In this paper , we presented a framework for annotating malware reports . We also introduced a database with 39 annotated APT reports and proposed several new tasks and built models for extracting information from the reports . Finally , we discuss several factors that make these tasks extremely challenging given currently available models . We hope that this paper and the accompanying database serve as a first step towards NLP being applied in cybersecurity and that other researchers will be inspired to contribute to the database and to construct their own datasets and implementations . More details about this database can be found at http://statnlp.org / research / re/.", "challenge": "Regardless of the possible dangers of malware and the existence of related corpus, there are no annotated datasets hindering the development of NLP tools.", "approach": "They present an annotation scheme for malware texts with its instancilization using an existing corpus, and evaluate existing models on the new data and tasks.", "outcome": "Benchmark experiments with newly introduced data and tasks reveal several difficulties in existing models."} +{"id": "P07-1037", "document": "Until quite recently , extending Phrase-based Statistical Machine Translation ( PBSMT ) with syntactic structure caused system performance to deteriorate . In this work we show that incorporating lexical syntactic descriptions in the form of supertags can yield significantly better PBSMT systems . We describe a novel PBSMT model that integrates supertags into the target language model and the target side of the translation model . Two kinds of supertags are employed : those from Lexicalized Tree-Adjoining Grammar and Combinatory Categorial Grammar . Despite the differences between these two approaches , the supertaggers give similar improvements . In addition to supertagging , we also explore the utility of a surface global grammaticality measure based on combinatory operators . We perform various experiments on the Arabic to English NIST 2005 test set addressing issues such as sparseness , scalability and the utility of system subcomponents . Our best result ( 0.4688 BLEU ) improves by 6.1 % relative to a state-of-theart PBSMT model , which compares very favourably with the leading systems on the NIST 2005 task . Within the field of Machine Translation , by far the most dominant paradigm is Phrase-based Statistical Machine Translation ( PBSMT ) ( Koehn et al . , 2003 ; Tillmann & Xia , 2003 ) . However , unlike in rule-and example-based MT , it has proven difficult to date to incorporate linguistic , syntactic knowledge in order to improve translation quality . Only quite recently have ( Chiang , 2005 ) and ( Marcu et al . , 2006 ) shown that incorporating some form of syntactic structure could show improvements over a baseline PBSMT system . While ( Chiang , 2005 ) avails of structure which is not linguistically motivated , ( Marcu et al . , 2006 ) employ syntactic structure to enrich the entries in the phrase table . In this paper we explore a novel approach towards extending a standard PBSMT system with syntactic descriptions : we inject lexical descriptions into both the target side of the phrase translation table and the target language model . Crucially , the kind of lexical descriptions that we employ are those that are commonly devised within lexicon-driven approaches to linguistic syntax , e.g. 
Lexicalized Tree-Adjoining Grammar ( Joshi & Schabes , 1992 ; Bangalore & Joshi , 1999 ) and Combinary Categorial Grammar ( Steedman , 2000 ) . In these linguistic approaches , it is assumed that the grammar consists of a very rich lexicon and a tiny , impoverished 1 set of combinatory operators that assemble lexical entries together into parse-trees . The lexical entries consist of syntactic constructs ( ' supertags ' ) that describe information such as the POS tag of the word , its subcategorization information and the hierarchy of phrase categories that the word projects upwards . In this work we employ the lexical entries but exchange the algebraic combinatory operators with the more robust and efficient supertagging approach : like standard taggers , supertaggers employ probabilities based on local context and can be implemented using finite state technology , e.g. Hidden Markov Models ( Bangalore & Joshi , 1999 ) . There are currently two supertagging approaches available : LTAG-based ( Bangalore & Joshi , 1999 ) and CCG-based ( Clark & Curran , 2004 ) . Both the LTAG ( Chen et al . , 2006 ) and the CCG supertag sets ( Hockenmaier , 2003 ) were acquired from the WSJ section of the Penn-II Treebank using handbuilt extraction rules . Here we test both the LTAG and CCG supertaggers . We interpolate ( log-linearly ) the supertagged components ( language model and phrase table ) with the components of a standard PBSMT system . Our experiments on the Arabic-English NIST 2005 test suite show that each of the supertagged systems significantly improves over the baseline PBSMT system . Interestingly , combining the two taggers together diminishes the benefits of supertagging seen with the individual LTAG and CCG systems . In this paper we discuss these and other empirical issues . The remainder of the paper is organised as follows : in section 2 we discuss the related work on enriching PBSMT with syntactic structure . In section 3 , we describe the baseline PBSMT system which our work extends . In section 4 , we detail our approach . Section 5 describes the experiments carried out , together with the results obtained . Section 6 concludes , and provides avenues for further work . SMT practitioners have on the whole found it difficult to integrate syntax into their systems . In this work , we have presented a novel model of PBSMT which integrates supertags into the target language model and the target side of the translation model . Using LTAG supertags gives the best improvement over a state-of-the-art PBSMT system for a smaller data set , while CCG supertags work best on a large 2 million-sentence pair training set . Adding grammaticality factors based on algebraic compositional operators gives the best result , namely 0.4688 BLEU , or a 6.1 % relative increase over the baseline . This result compares favourably with the best systems on the NIST 2005 Arabic-English task . 
We expect more work on system integration to improve results still further , and anticipate that similar increases are to be seen for other language pairs .", "challenge": "Improving phrase-based statistical machine translation by incorporating linguistic and syntactic knowledge in a linguistically motivated way remains an open challenge.", "approach": "They propose to extend phrase-based statistical machine translation models by injecting lexical knowledge via supertags such as Lexicalized Tree-Adjoining Grammar into the target language model.", "outcome": "They show that utilizing lexical knowledge via supertags for phrase-based statistical machine translation models improves over existing systems on Arabic to English translation tasks."} +{"id": "N12-1085", "document": "Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections . This is because of a lack of both ( 1 ) annotated data at the word level and ( 2 ) algorithms that can leverage syntactic information in a principled way . We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data . To address the second need , we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing . We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features . We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints . The terms \" sentiment analysis \" and \" opinion mining \" cover a wide body of research on and development of systems that can automatically infer emotional states from text ( after Pang and Lee ( 2008 ) we use the two names interchangeably ) . Sentiment analysis plays a large role in business , politics , and is itself a vibrant research area ( Bollen et al . , 2010 ) . Effective sentiment analysis for texts such as newswire depends on the ability to extract who ( source ) is saying what ( target ) . Fine-grained sentiment analysis requires identifying the sources and targets directly relevant to sentiment bearing expressions ( Ruppenhofer et al . , 2008 ) . For example , consider the following sentence from a major information technology ( IT ) business journal : Lloyd Hession , chief security officer at BT Radianz in New York , said that virtualization also opens up a slew of potential network access control issues . There are three entities in the sentence that have the capacity to express an opinion : Lloyd Hession , BT Radianz , and New York . These are potential opinion sources . There are also a number of mentioned concepts that could serve as the topic of an opinion in the sentence , or target . These include all the sources , but also \" virtualization \" , \" network access control \" , \" network \" , and so on . The challenging task is to discriminate between these mentions and choose the ones that are relevant to the user . Furthermore , such a system must also indicate the content of the opinion itself . This means that we are actually searching for all triples { source , target , opinion } in this sentence ( Kim and Hovy , 2006 ) and throughout each document in the corpus .
In this case , we want to identify that Lloyd Hession is the source of an opinion , \" slew of network issues , \" about a target , virtualization . Providing such fine-grained annotations would enrich information extraction , question answering , and corpus exploration applications by letting users see who is saying what with what opinion ( Wilson et al . , 2005 ; Stoyanov and Cardie , 2006 ) . We motivate the need for a grammatically-focused approach to fine-grained opinion mining and situate it within the context of existing work in Section 2 . We propose a supervised technique for learning opiniontarget relations from dependency graphs in a way that preserves syntactic coherence and semantic compositionality . In addition to being theoretically sound -a lacuna identified in many sentiment systems1 -such approaches improve downstream sentiment tasks ( Moilanen and Pulman , 2007 ) . There are multiple types of downstream tasks that potentially require the retrieval of { source , target , opinion } relations on a sentence-by-sentence basis . An increasingly significant application area is in the use of large corpora in social science . This area of research requires the exploration and aggregation of data about the relationships between discourses , organizations , and people . For example , the IT business press data that we use in this work belongs to a larger research program ( Tsui et al . , 2009 ; Sayeed et al . , 2010 ) of exploring industry opinion leadership . IT business press text is one type of text in which many entities and opinions can appear intermingled with one another in a small amount of text . Another application for fine-grained sentiment relation retrieval of this type is paraphrasing , where attribution of which opinion belongs to which entities may be important for producing useful and accurate output , since source and target identification errors can change the entire meaning of an output text . Unlike previous approaches that ignore syntax , we use a sentence 's syntactic structure to build a probabilistic model that encodes whether a word is opinion bearing as a latent variable . We build a data structure we call a \" syntactic relatedness trie \" ( Section 3 ) that serves as the skeleton for a graphical model over the sentiment relevance of words ( Section 4 ) . This approach allows us to learn features that predict opinion bearing constructions from grammatical structures . Because of a dearth of resources for this fine-grained task , we also develop new crowdsourcing techniques for labeling word-level , syntactically informed sen-timent ( Section 5 ) . We use inference techniques to uncover grammatical patterns that connect opinionexpressing words and target entities ( Section 6 ) performing better than using syntactically uninformed methods . In this work , we have applied machine learning to produce a robust modeling of syntactic structure for an information extraction application . A solution to the problem of modeling these structures requires the development of new techniques that model complex linguistic relationships in an application-dependent way . We have shown that we can mine these relationships without being overcome by the data-sparsity issues that typically stymie learning over complex linguistic structure . The limitations on these techniques ultimately find their root in the difficulty in modeling complex syntactic structures that simultaneously exclude irrelevant portions of the structure while maintaining connected relations . 
Our technique uses a structurelabelling scheme that enforces connectedness . Enforcing connected structure is not only necessary to produce useful results but also to improve accuracy . Further performance gains might be possible by enriching the feature set . For example , the POS tagset used by the Stanford parser contains multiple verb tags that represent different English tenses and numbers . For the purpose of sentiment relations , it is possible that the differences between verb tags are too small to matter and are causing data sparsity issues . Thus , we could additional features that \" back off \" to general verb tags .", "challenge": "Existing methods for fine-grained sentiment analysis ignore the contributions of individual words and their grammatical connections because of a lack of annotated data and algorithms.", "approach": "They present a dataset by a word-level, syntactically informed crowdsourcing technique and propose a supervised technique with a suffix-tree data structure to model syntactic relationships.", "outcome": "The proposed method can uncover grammatical patterns that connect opinion expressing words and target entities and outperforms syntactically uninformed baselines."} +{"id": "P16-1057", "document": "Many language generation tasks require the production of text conditioned on both structured and unstructured inputs . We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions . Crucially , our approach allows both the choice of conditioning context and the granularity of generation , for example characters or tokens , to be marginalised , thus permitting scalable and effective training . Using this framework , we address the problem of generating programming code from a mixed natural language and structured specification . We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone . On these , and a third preexisting corpus , we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks . The generation of both natural and formal languages often requires models conditioned on diverse predictors ( Koehn et al . , 2007 ; Wong and Mooney , 2006 ) . Most models take the restrictive approach of employing a single predictor , such as a word softmax , to predict all tokens of the output sequence . To illustrate its limitation , suppose we wish to generate the answer to the question \" Who wrote The Foundation ? \" as \" The Foundation was written by Isaac Asimov \" . The generation of the words \" Issac Asimov \" and \" The Foundation \" from a word softmax trained on annotated data is unlikely to succeed as these words are sparse . A robust model might , for example , employ one pre- dictor to copy \" The Foundation \" from the input , and a another one to find the answer \" Issac Asimov \" by searching through a database . However , training multiple predictors is in itself a challenging task , as no annotation exists regarding the predictor used to generate each output token . Furthermore , predictors generate segments of different granularity , as database queries can generate multiple tokens while a word softmax generates a single token . 
In this work we introduce Latent Predictor Networks ( LPNs ) , a novel neural architecture that fulfills these desiderata : at the core of the architecture is the exact computation of the marginal likelihood over latent predictors and generated segments allowing for scalable training . We introduce a new corpus for the automatic generation of code for cards in Trading Card Games ( TCGs ) , on which we validate our model1 . TCGs , such as Magic the Gathering ( MTG ) and Hearthstone ( HS ) , are games played between two players that build decks from an ever expanding pool of cards . Examples of such cards are shown in Figure 1 . Each card is identified by its attributes ( e.g. , name and cost ) and has an effect that is described in a text box . Digital implementations of these games implement the game logic , which includes the card effects . This is attractive from a data extraction perspective as not only are the data annotations naturally generated , but we can also view the card as a specification communicated from a designer to a software engineer . This dataset presents additional challenges to prior work in code generation ( Wong and Mooney , 2006 ; Jones et al . , 2012 ; Lei et al . , 2013 ; Artzi et al . , 2015 ; Quirk et al . , 2015 ) , including the handling of structured input-i.e . cards are composed by multiple sequences ( e.g. , name and description)-and attributes ( e.g. , attack and cost ) , and the length of the generated sequences . Thus , we propose an extension to attention-based neural models ( Bahdanau et al . , 2014 ) to attend over structured inputs . Finally , we propose a code compression method to reduce the size of the code without impacting the quality of the predictions . Experiments performed on our new datasets , and a further pre-existing one , suggest that our extensions outperform strong benchmarks . The paper is structured as follows : We first describe the data collection process ( Section 2 ) and formally define our problem and our baseline method ( Section 3 ) . Then , we propose our extensions , namely , the structured attention mechanism ( Section 4 ) and the LPN architecture ( Section 5 ) . We follow with the description of our code compression algorithm ( Section 6 ) . Our model is validated by comparing with multiple benchmarks ( Section 7 ) . Finally , we contextualize our findings with related work ( Section 8) and present the conclusions of this work ( Section 9 ) . We introduced a neural network architecture named Latent Prediction Network , which allows efficient marginalization over multiple predictors . Under this architecture , we propose a generative model for code generation that combines a character level softmax to generate language-specific tokens and multiple pointer networks to copy keywords from the input . Along with other extensions , namely structured attention and code compression , our model is applied on on both existing datasets and also on a newly created one with implementations of TCG game cards . 
Our experiments show that our model out-performs multiple benchmarks , which demonstrate the importance of combining different types of predictors .", "challenge": "While using a single predictor puts limitations on generation tasks, training multiple predictors remains challenging because of a lack of suitable annotated data.", "approach": "They propose a neural network architecture that allows incorporating an arbitrary number of input functions, enabling flexibility in both the conditioning context and the granularity of generation.", "outcome": "Evaluation on existing benchmarks and the newly created card game-based dataset shows that the proposed model with multiple predictors outperforms strong baseline models."} +{"id": "P14-1121", "document": "The main work in bilingual lexicon extraction from comparable corpora is based on the implicit hypothesis that corpora are balanced . However , the historical contextbased projection method dedicated to this task is relatively insensitive to the sizes of each part of the comparable corpus . Within this context , we have carried out a study on the influence of unbalanced specialized comparable corpora on the quality of bilingual terminology extraction through different experiments . Moreover , we have introduced a regression model that boosts the observations of word cooccurrences used in the context-based projection method . Our results show that the use of unbalanced specialized comparable corpora induces a significant gain in the quality of extracted lexicons . The bilingual lexicon extraction task from bilingual corpora was initially addressed by using parallel corpora ( i.e. a corpus that contains source texts and their translation ) . However , despite good results in the compilation of bilingual lexicons , parallel corpora are scarce resources , especially for technical domains and for language pairs not involving English . For these reasons , research in bilingual lexicon extraction has focused on another kind of bilingual corpora comprised of texts sharing common features such as domain , genre , sampling period , etc . without having a source text / target text relationship ( McEnery and Xiao , 2007 ) . These corpora , well known now as comparable corpora , have also initially been introduced as non-parallel corpora ( Fung , 1995 ; Rapp , 1995 ) , and non-aligned corpora ( Tanaka and Iwasaki , 1996 ) . According to Fung and Che-ung ( 2004 ) , who range bilingual corpora from parallel corpora to quasi-comparable corpora going through comparable corpora , there is a continuum from parallel to comparable corpora ( i.e. a kind of filiation ) . The bilingual lexicon extraction task from comparable corpora inherits this filiation . For instance , the historical context-based projection method ( Fung , 1995 ; Rapp , 1995 ) , known as the standard approach , dedicated to this task seems implicitly to lead to work with balanced comparable corpora in the same way as for parallel corpora ( i.e. each part of the corpus is composed of the same amount of data ) . In this paper we want to show that the assumption that comparable corpora should be balanced for bilingual lexicon extraction task is unfounded . Moreover , this assumption is prejudicial for specialized comparable corpora , especially when involving the English language for which many documents are available due the prevailing position of this language as a standard for international scientific publications .
Within this context , our main contribution consists in a re-reading of the standard approach putting emphasis on the unfounded assumption of the balance of the specialized comparable corpora . In specialized domains , the comparable corpora are traditionally of small size ( around 1 million words ) in comparison with comparable corpus-based general language ( up to 100 million words ) . Consequently , the observations of word co-occurrences which is the basis of the standard approach are unreliable . To make them more reliable , our second contribution is to contrast different regression models in order to boost the observations of word co-occurrences . This strategy allows to improve the quality of extracted bilingual lexicons from comparable corpora . In this paper , we have studied how an unbalanced specialized comparable corpus could influence the quality of the bilingual lexicon extraction . This aspect represents a significant interest when working with specialized comparable corpora for which the quantity of the data collected may differ depending on the languages involved , especially when involving the English language as many scientific documents are available . More precisely , our different experiments show that using an unbalanced specialized comparable corpus always improves the quality of word translations . Thus , the MAP goes up from 29.6 % ( best result on the balanced corpora ) to 42.3 % ( best result on the unbalanced corpora ) in the breast cancer domain , and from 16.5 % to 26.0 % in the diabetes domain . Additionally , these results can be improved by using a prediction model of the word co-occurrence counts . Here , the MAP goes up from 42.3 % ( best result on the unbalanced corpora ) to 46.9 % ( best result on the unbalanced corpora with prediction ) in the breast cancer domain , and from 26.0 % to 29.8 % in the diabetes domain . We hope that this study will pave the way for using specialized unbalanced comparable corpora for bilingual lexicon extraction .", "challenge": "The standard projection approach for bilingual lexicon extraction assumes that the two corpora are balanced, but no study assesses whether this assumption influences the performance.", "approach": "They analyze the relationship between corpus balance and bilingual lexicon extraction and propose a regression model to improve the performance.", "outcome": "They show that bilingual lexicon extraction is influenced by the balance of the two source corpora, and that imbalance improves the final extraction performance in specialized domains."} +{"id": "N09-1004", "document": "Word sense disambiguation is the process of determining which sense of a word is used in a given context . Due to its importance in understanding semantics of natural languages , word sense disambiguation has been extensively studied in Computational Linguistics . However , existing methods either are brittle and narrowly focus on specific topics or words , or provide only mediocre performance in real-world settings . Broad coverage and disambiguation quality are critical for a word sense disambiguation system . In this paper we present a fully unsupervised word sense disambiguation method that requires only a dictionary and unannotated text as input . Such an automatic approach overcomes the problem of brittleness suffered in many existing methods and makes broad-coverage word sense disambiguation feasible in practice .
We evaluated our approach using SemEval 2007 Task 7 ( Coarse-grained English All-words Task ) , and our system significantly outperformed the best unsupervised system participating in Se-mEval 2007 and achieved the performance approaching top-performing supervised systems . Although our method was only tested with coarse-grained sense disambiguation , it can be directly applied to fine-grained sense disambiguation . In many natural languages , a word can represent multiple meanings / senses , and such a word is called a homograph . Word sense disambiguation(WSD ) is the process of determining which sense of a homograph is used in a given context . WSD is a long-standing problem in Computational Linguistics , and has significant impact in many real-world applications including machine translation , information extraction , and information retrieval . Generally , WSD methods use the context of a word for its sense disambiguation , and the context information can come from either annotated / unannotated text or other knowledge resources , such as Word-Net ( Fellbaum , 1998 ) , SemCor ( SemCor , 2008 ) , Open Mind Word Expert ( Chklovski and Mihalcea , 2002 ) , eXtended WordNet ( Moldovan and Rus , 2001 ) , Wikipedia ( Mihalcea , 2007 ) , parallel corpora ( Ng , Wang , and Chan , 2003 ) . In ( Ide and V\u00e9ronis , 1998 ) many different WSD approaches were described . Usually , WSD techniques can be divided into four categories ( Agirre and Edmonds , 2006 ) , \u2022 Dictionary and knowledge based methods . These methods use lexical knowledge bases such as dictionaries and thesauri , and hypothesize that context knowledge can be extracted from definitions of words . For example , Lesk disambiguated two words by finding the pair of senses with the greatest word overlap in their dictionary definitions ( Lesk , 1986 ) . \u2022 Supervised methods . Supervised methods mainly adopt context to disambiguate words . A supervised method includes a training phase and a testing phase . In the training phase , a sense-annotated training corpus is required , from which syntactic and semantic features are extracted to create a classifier using machine learning techniques , such as Support Vector Machine ( Novischi et al . , 2007 ) . In the following testing phase , a word is classified into senses ( Mihalcea , 2002 ) ( Ng and Lee , 1996 ) . Currently supervised methods achieve the best disambiguation quality ( about 80 % precision and recall for coarse-grained WSD in the most recent WSD evaluation conference SemEval 2007 ( Navigli et al . , 2007 ) ) . Nevertheless , since training corpora are manually annotated and expensive , supervised methods are often brittle due to data scarcity , and it is hard to annotate and acquire sufficient contextual information for every sense of a large number of words existing in natural languages . \u2022 Semi-supervised methods . To overcome the knowledge acquisition bottleneck problem suffered by supervised methods , these methods make use of a small annotated corpus as seed data in a bootstrapping process ( Hearst , 1991 ) ( Yarowsky , 1995 ) . A word-aligned bilingual corpus can also serve as seed data ( Ng , Wang , and Chan , 2003 ) . \u2022 Unsupervised methods . These methods acquire contextual information directly from unannotated raw text , and senses can be induced from text using some similarity measure ( Lin , 1997 ) . However , automatically acquired information is often noisy or even erroneous . In the most recent SemEval 2007 ( Navigli et al . 
, 2007 ) , the best unsupervised systems only achieved about 70 % precision and 50 % recall . Disambiguation of a limited number of words is not hard , and necessary context information can be carefully collected and hand-crafted to achieve high disambiguation accuracy as shown in ( Yarowsky , 1995 ) . However , such approaches suffer a significant performance drop in practice when domain or vocabulary is not limited . Such a \" cliff-style \" performance collapse is called brittleness , which is due to insufficient knowledge and shared by many techniques in Artificial Intelligence . The main challenge of a WSD system is how to overcome the knowledge acquisition bottleneck and efficiently collect the huge amount of context knowledge . More precisely , a practical WSD need figure out how to create and maintain a comprehensive , dynamic , and up-todate context knowledge base in a highly automatic manner . The context knowledge required in WSD has the following properties : 1 . The context knowledge need cover a large number of words and their usage . Such a requirement of broad coverage is not trivial because a natural language usually contains thousands of words , and some popular words can have dozens of senses . For example , the Oxford English Dictionary has approximately 301,100 main entries ( Oxford , 2003 ) , and the average polysemy of the WordNet inventory is 6.18 ( Fellbaum , 1998 ) . Clearly acquisition of such a huge amount of knowledge can only be achieved with automatic techniques . 2 . Natural language is not a static phenomenon . New usage of existing words emerges , which creates new senses . New words are created , and some words may \" die \" over time . It is estimated that every year around 2,500 new words appear in English ( Kister , 1992 ) . Such dynamics requires a timely maintenance and updating of context knowledge base , which makes manual collection even more impractical . Taking into consideration the large amount and dynamic nature of context knowledge , we only have limited options when choosing knowledge sources for WSD . WSD is often an unconscious process to human beings . With a dictionary and sample sentences / phrases an average educated person can correctly disambiguate most polysemous words . Inspired by human WSD process , we choose an electronic dictionary and unannotated text samples of word instances as context knowledge sources for our WSD system . Both sources can be automatically accessed , provide an excellent coverage of word meanings and usage , and are actively updated to reflect the current state of languages . In this paper we present a fully unsupervised WSD system , which only requires WordNet sense inventory and unannotated text . In the rest of this paper , section 2 describes how to acquire and represent the context knowledge for WSD . We present our WSD algorithm in section 3 . Our WSD system is evaluated with SemEval-2007 Task 7 ( Coarse-grained English All-words Task ) data set , and the experiment results are discussed in section 4 . We conclude in section 5 . Broad coverage and disambiguation quality are critical for WSD techniques to be adopted in practice . This paper proposed a fully unsupervised WSD method . We have evaluated our approach with SemEval-2007 Task 7 ( Coarse-grained English Allwords Task ) data set , and we achieved F-scores approaching the top performing supervised WSD systems . 
By using widely available unannotated text and a fully unsupervised disambiguation approach , our method may provide a viable solution to the problem of WSD . The future work includes : 1 . Continue to build the knowledge base , enlarge the coverage and improve the system performance . The experiment results in Section 4.2 clearly show that more word instances can improve the disambiguation accuracy and recall scores ; 2 . WSD is often an unconscious process for human beings . It is unlikely that a reader examines all surrounding words when determining the sense of a word , which calls for a smarter and more selective matching strategy than what we have tried in Section 4.1 ; 3 . Test our WSD system on fine-grained SemEval 2007 WSD task 17 . Although we only evaluated our approach with coarse-grained senses , our method can be directly applied to finegrained WSD without any modifications .", "challenge": "Unsupervised approaches for word sense disambiguation require no expensive annotations but are often noisy or erroneous and suffer when domain or vocabulary is not limited.", "approach": "They propose an unsupervised method which requires only a dictionary and raw text as context knowledge, allowing the knowledge base to be updated automatically to reflect the current state of a language.", "outcome": "The proposed method approaches top-performing supervised systems in F-score on the SemEval-2007 Task 7 dataset."} +{"id": "2020.acl-main.733", "document": "Lexica distinguishing all morphologically related forms of each lexeme are crucial to many language technologies , yet building them is expensive . We propose Frugal Paradigm Completion , an approach that predicts all related forms in a morphological paradigm from as few manually provided forms as possible . It induces typological information during training which it uses to determine the best sources at test time . We evaluate our language-agnostic approach on 7 diverse languages . Compared to popular alternative approaches , our Frugal Paradigm Completion approach reduces manual labor by 16 - 63 % and is the most robust to typological variation . From syntactic parsing ( Seeker and Kuhn , 2013 ) to text-to-speech ( Zen et al . , 2016 ; Wan et al . , 2019 ) , many linguistic technologies rely on accurate lexica decorated with morphological information . Yet , building such lexica requires much human effort ( Buckwalter , 2002 ; Tadi\u0107 and Fulgosi , 2003 ; Forsberg et al . , 2006 ; Sagot , 2010 ; Eskander et al . , 2013 ) . We present a language-agnostic method for minimizing the manual labor required to add new paradigms to an existing lexicon . Formally , let each lexicon entry , or realization , be a triple ( P , C , f ) . P marks membership in some paradigm P of morphologically related words , C defines a cell in P as a bundle of morphosyntactic features , and f is the form realizing C in P. Hence , paradigm SING can be expressed ( in the UniMorph schema ( Kirov et al . , 2018 ) ) as a set of realizations : { ( SING , NFIN , sing ) , ( SING , 3.SG.PRES , sings ) , . . . } . For each paradigm to be added to the lexicon , e.g. , FLY , we aim to select as few sources as pos-sible to be manually realized , e.g. , { ( FLY , NFIN , fly ) , ( FLY , PST , flew ) } such that the forms realizing the remaining cells can be predicted , i.e. , flies , flying , flown . Here , sources are manually provided realizations . Targets are realizations whose forms must be predicted from sources .
Our work differs from traditional paradigm completion ( Durrett and DeNero , 2013 ) in that sources are not given blindly , but the system must strategically select which sources it wants to be given at test time . Paradigm completion from one source is typically non-deterministic due to multiple inflection classes realizing different exponents in some cells , e.g. , suffixing + ed generates the past tense for WALK , but not for SING or FLY which are members of different classes . Hence , many works discuss paradigm completion in the context of ( implicit ) inflection class disambiguation ( Ackerman et al . , 2009 ; Montermini and Bonami , 2013 ; Beniamine et al . , 2018 ) . Finkel and Stump ( 2007 ) propose three approaches to select the fewest sources required to deterministically identify class . Yet , neural sequence models can often complete paradigms accurately from less sources without fully disambiguating inflection class ( Kann and Sch\u00fctze , 2016 ; Aharoni and Goldberg , 2017 ; Wu and Cotterell , 2019 ) . See Elsner et al . ( 2019 ) for an overview of the application of neural sequence models to morphological theory . We propose Frugal Paradigm Completion ( FPC ) , inspired by work on inflection class disambiguation and neural sequence modeling . We train a source selection agent ( SSA ) to induce typological knowledge regarding the distribution of complexity in paradigms and use this to request informative source cells to be realized by an oracle . Sources are fed to a predictor to generate target forms . For each paradigm , SSA iteratively requests sources until the oracle confirms all cells have been realized correctly . We introduce a novel metric , auto-rate , to quantify the manual labour ( performed by the oracle ) needed to complete each paradigm . Using this metric , we demonstrate that FPC reduces labor by 63 % over predicting targets from lemmata , and 47 % over predicting them from the smallest set of sources that fully disambiguates inflection class . We propose a new typology for discussing the organization of complexity in paradigms which helps explain why strategies perform better or worse on certain languages while FPC , being sensitive to typological variation , performs robustly . After discussing related paradigm completion approaches in Section 2 , we describe FPC in Section 3 . Section 4 covers all data and experimental set up details . We discuss results in Section 5 and analyze FPC 's behavior in Section 6 . We presented Frugal Paradigm Completion , which reduces the manual labor required to expand a morphological lexicon by 16 - 63 % over competitive approaches across 7 languages . We demonstrated that typologically distinct morphological systems require unique treatment and benefit from our SSA , that learns its strategy from data . We found that inducing this strategy is not as challenging as previously suggested ( Finkel and Stump , 2007 ) . 
Thus , SSA might be replaced with a less costly architecture while our model might be improved by conditioning on semantics and jointly decoding from a variable number of sources .", "challenge": "Lexica distinguishing all morphologically related forms are crucial to language technologies, but building such a resource requires expensive human effort.", "approach": "They propose a language-agnostic method which predicts all related forms in a morphological paradigm from a few manual annotations by inducing typological information during training.", "outcome": "The proposed method reduced the manual labor for expanding a morphological lexicon by 16 - 63% over existing methods in 7 languages."} +{"id": "P10-1049", "document": "Several attempts have been made to learn phrase translation probabilities for phrasebased statistical machine translation that go beyond pure counting of phrases in word-aligned training data . Most approaches report problems with overfitting . We describe a novel leavingone-out approach to prevent over-fitting that allows us to train phrase models that show improved translation performance on the WMT08 Europarl German-English task . In contrast to most previous work where phrase models were trained separately from other models used in translation , we include all components such as single word lexica and reordering models in training . Using this consistent training of phrase models we are able to achieve improvements of up to 1.4 points in BLEU . As a side effect , the phrase table size is reduced by more than 80 % . A phrase-based SMT system takes a source sentence and produces a translation by segmenting the sentence into phrases and translating those phrases separately ( Koehn et al . , 2003 ) . The phrase translation table , which contains the bilingual phrase pairs and the corresponding translation probabilities , is one of the main components of an SMT system . The most common method for obtaining the phrase table is heuristic extraction from automatically word-aligned bilingual training data ( Och et al . , 1999 ) . In this method , all phrases of the sentence pair that match constraints given by the alignment are extracted . This includes overlapping phrases . At extraction time it does not matter , whether the phrases are extracted from a highly probable phrase alignment or from an unlikely one . Phrase model probabilities are typically defined as relative frequencies of phrases extracted from word-aligned parallel training data . The joint counts C ( f , \u1ebd ) of the source phrase f and the target phrase \u1ebd in the entire training data are normalized by the marginal counts of source and target phrase to obtain a conditional probability EQUATION The translation process is implemented as a weighted log-linear combination of several models h m ( e I 1 , s K 1 , f J 1 ) including the logarithm of the phrase probability in source-to-target as well as in target-to-source direction . The phrase model is combined with a language model , word lexicon models , word and phrase penalty , and many others . ( Och and Ney , 2004 ) The best translation \u00ea\u00ce 1 as defined by the models then can be written as EQUATION In this work , we propose to directly train our phrase models by applying a forced alignment procedure where we use the decoder to find a phrase alignment between source and target sentences of the training data and then updating phrase translation probabilities based on this alignment .
In contrast to heuristic extraction , the proposed method provides a way of consistently training and using phrase models in translation . We use a modified version of a phrase-based decoder to perform the forced alignment . This way we ensure that all models used in training are identical to the ones used at decoding time . An illustration of the basic idea can be seen in Figure 1 . In the literature this method by itself has been shown to be problematic because it suffers from over-fitting ( DeNero et al . , 2006 ) , ( Liang et al . , 2006 ) . Since our initial phrases are extracted from the same training data , that we want to align , very long phrases can be found for segmentation . As these long phrases tend to occur in only a few training sentences , the EM algorithm generally overestimates their probability and neglects shorter phrases , which better generalize to unseen data and thus are more useful for translation . In order to counteract these effects , our training procedure applies leaving-one-out on the sentence level . Our results show , that this leads to a better translation quality . Ideally , we would produce all possible segmentations and alignments during training . However , this has been shown to be infeasible for real-world data ( DeNero and Klein , 2008 ) . As training uses a modified version of the translation decoder , it is straightforward to apply pruning as in regular decoding . Additionally , we consider three ways of approximating the full search space : 1 . the single-best Viterbi alignment , 2 . the n-best alignments , 3 . all alignments remaining in the search space after pruning . The performance of the different approaches is measured and compared on the German-English Europarl task from the ACL 2008 Workshop on Statistical Machine Translation ( WMT08 ) . Our results show that the proposed phrase model training improves translation quality on the test set by 0.9 BLEU points over our baseline . We find that by interpolation with the heuristically extracted phrases translation performance can reach up to 1.4 BLEU improvement over the baseline on the test set . After reviewing the related work in the following section , we give a detailed description of phrasal alignment and leaving-one-out in Section 3 . Section 4 explains the estimation of phrase models . The empirical evaluation of the different approaches is done in Section 5 . We have shown that training phrase models can improve translation performance on a state-ofthe-art phrase-based translation model . This is achieved by training phrase translation probabilities in a way that they are consistent with their use in translation . A crucial aspect here is the use of leaving-one-out to avoid over-fitting . We have shown that the technique is superior to limiting phrase lengths and smoothing with lexical probabilities alone . While models trained from Viterbi alignments already lead to good results , we have demonstrated that considering the 100-best alignments allows to better model the ambiguities in phrase segmentation . The proposed techniques are shown to be superior to previous approaches that only used lexical probabilities to smooth phrase tables or imposed limits on the phrase lengths . On the WMT08 Europarl task we show improvements of 0.9 BLEU points with the trained phrase table and 1.4 BLEU points when interpolating the newly trained model with the original , heuristically extracted phrase table . In TER , improvements are 0.4 and 1.7 points . 
In addition to the improved performance , the trained models are smaller leading to faster and smaller translation systems .", "challenge": "Existing approaches that learn phrase translation probabilities for phrase-based statistical machine translation beyond pure phrase counting in word-aligned training data suffer from over-fitting.", "approach": "They propose a leave-one-out approach for training phrase models jointly with other translation components to be consistent with the training data.", "outcome": "The proposed model outperforms baseline models with smoothing or phrase length limitations on the WMT08 Europarl German-English task in both BLEU and TER, and keeps the system faster and smaller."} +{"id": "N19-1221", "document": "Abuse on the Internet represents a significant societal problem of our time . Previous research on automated abusive language detection in Twitter has shown that communitybased profiling of users is a promising technique for this task . However , existing approaches only capture shallow properties of online communities by modeling followerfollowing relationships . In contrast , working with graph convolutional networks ( GCNs ) , we present the first approach that captures not only the structure of online communities but also the linguistic behavior of the users within them . We show that such a heterogeneous graph-structured modeling of communities significantly advances the current state of the art in abusive language detection . Matthew Zook ( 2012 ) carried out an interesting study showing that the racist tweets posted in response to President Obama 's re-election were not distributed uniformly across the United States but instead formed clusters . This phenomenon is known as homophily : i.e. , people , both in real life and online , tend to cluster with those who appear similar to themselves . To model homophily , recent research in abusive language detection on Twitter ( Mishra et al . , 2018a ) incorporates embeddings for authors ( i.e. , users who have composed tweets ) that encode the structure of their surrounding communities . The embeddings ( called author profiles ) are generated by applying a node embedding framework to an undirected unlabeled community graph where nodes denote the authors and edges the follower-following relationships amongst them on Twitter . However , these profiles do not capture the linguistic behavior of the authors and their communities and do not convey whether their tweets tend to be abusive or not . In contrast , we represent the community of authors as a heterogeneous graph consisting of two types of nodes , authors and their tweets , rather than a homogeneous community graph of authors only . The primary advantage of such heterogeneous representations is that they enable us to model both community structure as well as the linguistic behavior of authors in these communities . To generate richer author profiles , we then propose a semi-supervised learning approach based on graph convolutional networks ( GCNs ) applied to the heterogeneous graph representation . To the best of our knowledge , our work is the first to use GCNs to model online communities in social media . We demonstrate that our methods provide significant improvements over existing techniques . In this paper , we built on the work of Mishra et al . ( 2018a ) that introduces community-based profiling of authors for abusive language detection .
We proposed an approach based on graph convolutional networks to show that author profiles that directly capture the linguistic behavior of authors along with the structural traits of their community significantly advance the current state of the art .", "challenge": "Existing approaches to abusive language detection in Twitter only capture shallow follower-following relationships to model homophily by a node embedding framework without considering linguistic behavior.", "approach": "They propose a graph convolutional network-based approach which captures both the community structure and linguistic behaviors by representing the community as a heterogeneous graph of authors and tweets.", "outcome": "The proposed model outperforms the current state-of-the-art techniques in abusive language detection."} +{"id": "N10-1045", "document": "This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages , and proposes a prospective research direction . We argue that cross-lingual textual entailment ( CLTE ) can be a core technology for several cross-lingual NLP applications and tasks . Through preliminary experiments , we aim at proving the feasibility of the task , and providing a reliable baseline . We also introduce new applications for CLTE that will be explored in future work . Textual Entailment ( TE ) ( Dagan and Glickman , 2004 ) has been proposed as a generic framework for modeling language variability . Given two texts T and H , the task consists in deciding if the meaning of H can be inferred from the meaning of T. So far , TE has been only applied in a monolingual setting , where both texts are assumed to be written in the same language . In this work , we propose and investigate a cross-lingual extension of TE , where we assume that T and H are written in different languages . The great potential of integrating ( monolingual ) TE recognition components into NLP architectures has been reported in several works , such as question answering ( Harabagiu and Hickl , 2006 ) , information retrieval ( Clinchant et al . , 2006 ) , information extraction ( Romano et al . , 2006 ) , and document summarization ( Lloret et al . , 2008 ) . To the best of our knowledge , mainly due to the absence of cross-lingual TE ( CLTE ) recognition components , similar improvements have not been achieved yet in any cross-lingual application . As a matter of fact , despite the great deal of attention that TE has received in recent years ( also witnessed by five editions of the Recognizing Textual Entailment Challenge1 ) , interest for cross-lingual extensions has not been in the mainstream of TE research , which until now has been mainly focused on the English language . Nevertheless , the strong interest towards crosslingual NLP applications ( both from the market and research perspectives , as demonstrated by successful evaluation campaigns such as CLEF2 ) is , to our view , a good reason to start investigating CLTE , as well . Along such direction , research can now benefit from recent advances in other fields , especially machine translation ( MT ) , and the availability of : i ) large amounts of parallel and comparable corpora in many languages , ii ) open source software to compute word-alignments from parallel corpora , and iii ) open source software to set-up strong MT baseline systems . We strongly believe that all these resources can potentially help in developing inference mechanisms on multilingual data .
Building on these considerations , this paper aims to put the basis for future research on the crosslingual Textual Entailment task , in order to allow for semantic inference across languages in different NLP tasks . Among these , as a long-term goal , we plan to adopt CLTE to support the alignment of text portions that express the same meaning in different languages . As a possible application scenario , CLTE can be used to address content merging tasks in tidy multilingual environments , such as commercial Web sites , digital libraries , or user generated content collections . Within such framework , as it will be discussed in the last section of this paper , CLTE components can be used for automatic content synchronization in a concurrent , collaborative , and multilingual editing setting , e.g. Wikipedia . This paper presented a preliminary investigation towards cross-lingual Textual Entailment , focusing on possible research directions and alternative methodologies . Baseline results have been provided to demonstrate the potentialities of a simple approach that integrates MT and monolingual TE components . Overall , our work sets a novel framework for further studies and experiments to improve crosslingual NLP tasks . In particular , CLTE can be scaled to more complex problems , such as cross-lingual content merging and synchronization .", "challenge": "Despite potential applications such as machine translation, the textual entailment problem has been investigated monolingually in English but not in cross-lingual setups.", "approach": "They propose a cross-lingual extension of textual entailment where T and H are in different languages to show possible research directions and alternative methodologies.", "outcome": "They demonstrate the feasibility of the task and provide a reliable baseline through experiments to show the potential of the task."} +{"id": "N06-1029", "document": "Recognition of tone and intonation is essential for speech recognition and language understanding . However , most approaches to this recognition task have relied upon extensive collections of manually tagged data obtained at substantial time and financial cost . In this paper , we explore two approaches to tone learning with substantially reductions in training data . We employ both unsupervised clustering and semi-supervised learning to recognize pitch accent in English and tones in Mandarin Chinese . In unsupervised Mandarin tone clustering experiments , we achieve 57 - 87 % accuracy on materials ranging from broadcast news to clean lab speech . For English pitch accent in broadcast news materials , results reach 78 % . In the semi-supervised framework , we achieve Mandarin tone recognition accuracies ranging from 70 % for broadcast news speech to 94 % for read speech , outperforming both Support Vector Machines ( SVMs ) trained on only the labeled data and the 25 % most common class assignment level . These results indicate that the intrinsic structure of tone and pitch accent acoustics can be exploited to reduce the need for costly labeled training data for tone learning and recognition . Tone and intonation play a crucial role across many languages . However , the use and structure of tone varies widely , ranging from lexical tone which determines word identity to pitch accent signalling information status . Here we consider the recognition of lexical tones in Mandarin Chinese syllables and pitch accent in English .
Although intonation is an integral part of language and is requisite for understanding , recognition of tone and pitch accent remains a challenging problem . The majority of current approaches to tone recognition in Mandarin and other East Asian tone languages integrate tone identification with the general task of speech recognition within a Hidden Markov Model framework . In some cases tone recognition is done only implicitly when a word or syllable is constrained jointly by the segmental acoustics and a higher level language model and the word identity determines tone identity . Other strategies build explicit and distinct models for the syllable final region , the vowel and optionally a final nasal , for each tone . Recent research has demonstrated the importance of contextual and coarticulatory influences on the surface realization of tones . ( Xu , 1997 ; Shen , 1990 ) The overall shape of the tone or accent can be substantially modified by the local effects of adjacent tone and intonational elements . Furthermore , broad scale phenomena such as topic and phrase structure can affect pitch height , and pitch shape may be variably affected by the presence of boundary tones . These findings have led to explicit modeling of tonal context within the HMM framework . In addition to earlier approaches that employed phrase structure ( Fujisaki , 1983 ) , several recent approaches to tone recognition in East Asian languages ( Wang and Seneff , 2000 ; Zhou et al . , 2004 ) have incorporated elements of local and broad range contextual influence on tone . Many of these techniques create explicit context-dependent models of the phone , tone , or accent for each context in which they appear , either using the tone sequence for left or right context or using a simplified high-low contrast , as is natural for integration in a Hidden Markov Model speech recognition framework . In pitch accent recognition , recent work by ( Hasegawa-Johnson et al . , 2004 ) has integrated pitch accent and boundary tone recognition with speech recognition using prosodically conditioned models within an HMM framework , improving both speech and prosodic recognition . Since these approaches are integrated with HMM speech recognition models , standard HMM training procedures which rely upon large labeled training sets are used for tone recognition as well . Other tone and pitch accent recognition approaches using other classification frameworks such as support vector machines ( Thubthong and Kijsirikul , 2001 ) and decision trees with boosting and bagging ( Sun , 2002 ) have relied upon large labeled training setsthousands of instances -for classifier learning . This labelled training data is costly to construct , both in terms of time and money , with estimates for some intonation annotation tasks reaching tens of times realtime . This annotation bottleneck as well as a theoretical interest in the learning of tone motivates the use of unsupervised or semi-supervised approaches to tone recognition whereby the reliance on this often scarce resource can be reduced . Little research has been done in the application of unsupervised and semi-supervised techniques for tone and pitch accent recognition . Some preliminary work by ( Gauthier et al . , 2005 ) employs selforganizing maps and measures of f0 velocity for tone learning . In this paper we explore the use of spectral and standard k-means clustering for unsupervised acquisition of tone , and the framework of manifold regularization for semi-supervised tone learning . 
We find that in clean read speech , unsupervised techniques can identify the underlying Mandarin tone categories with high accuracy , while even on noisier broadcast news speech , Mandarin tones can be recognized well above chance levels , with English pitch accent recognition at near the levels achieved with fully supervised Support Vector Machine ( SVM ) classifiers . Likewise in the semi-supervised framework , tone classification outperforms both most common class assignment and a comparable SVM trained on only the same small set of labeled instances , without recourse to the unlabeled instances . The remainder of paper is organized as follows . Section 2 describes the data sets on which English pitch accent and Mandarin tone learning are performed and the feature extraction process . Section 3 describes the unsupervised and semisupervised techniques employed . Sections 4 and 5 describe the experiments and results in unsupervised and semi-supervised frameworks respectively . Section 6 presents conclusions and future work . We have demonstrated the effectiveness of both unsupervised and semi-supervised techniques for recognition of Mandarin Chinese syllable tones and English pitch accents using acoustic features alone to capture pitch target height and slope . Although outperformed by fully supervised classification techniques using much larger samples of labelled training data , these unsupervised and semi-supervised techniques perform well above most common class assignment , in the best cases approaching 90 % of supervised levels , and , where comparable , well above a good discriminative classifier trained on a comparably small set of labelled data . Unsupervised techniques achieve accuracies of 87 % on the cleanest read speech , reaching 57 % on data from a standard Mandarin broadcast news corpus , and over 78 % on pitch accent classification for English broadcast news . Semi-supervised classification in the Mandarin four-class classification task reaches 94 % accuracy on read speech , 70 % on broadcast news data , improving dramatically over both the simple baseline of 25 % and a standard SVM with an RBF kernel trained only on the labeled examples . Future work will consider a broader range of tone and intonation classification , including the richer tone set of Cantonese as well as Bantu family tone languages , where annotated data truly is very rare . We also hope to integrate a richer contextual representation of tone and intonation consistent with phonetic theory within this unsupervised and semisupervised learning framework . We will further explore improvements in classification accuracy based on increases in labeled and unlabeled training examples .", "challenge": "Existing methods for tone and intonation recognition require extensive collections of costly manually tagged data, and there are few works on unsupervised and semi-supervised techniques.", "approach": "They employ unsupervised and semi-supervised learning to recognize pitch accent in English and tones in Mandarin Chinese with less training data.", "outcome": "They show that the intrinsic structure of tone and pitch accent acoustics can reduce the need for training data, approaching 90% of supervised levels."} +{"id": "P93-1015", "document": "There is a need to develop a suitable computational grammar formalism for free word order languages for two reasons : First , a suitably designed formalism is likely to be more efficient . Second , such a formalism is also likely to be linguistically more elegant and satisfying .
In this paper , we describe such a formalism , called the Paninian framework , that has been successfully applied to Indian languages . This paper shows that the Paninian framework applied to modern Indian languages gives an elegant account of the relation between surface form ( vibhakti ) and semantic ( karaka ) roles . The mapping is elegant and compact . The same basic account also explains active-passives and complex sentences . This suggests that the solution is not just ad hoc but has a deeper underlying unity . A constraint based parser is described for the framework . The constraints problem reduces to bipartite graph matching problem because of the nature of constraints . Efficient solutions are known for these problems . It is interesting to observe that such a parser ( designed for free word order languages ) compares well in asymptotic time complexity with the parser for context free grammars ( CFGs ) which are basically designed for positional languages . A majority of human languages including Indian and other languages have relatively free word order . In free word order languages , order of words contains only secondary information such as emphasis etc . Primary information relating to ' gross ' meaning ( e.g. , one that includes semantic relationships ) is contained elsewhere . Most existing computational grammars are based on context free grammars which are basically positional grammars . It is important to develop a suitable computational grammar formalism for free word order languages for two reasons : 1 . A suitably designed formalism will be more efficient because it will be able to make use of primary sources of information directly . 2 . Such a formalism is also likely to be linguistically more elegant and satisfying . Since it will be able to relate to primary sources of information , the grammar is likely to be more economical and easier to write . In this paper , we describe such a formalism , called the Paninian framework , that has been successfully applied to Indian languages . 1 It uses the notion of karaka relations between verbs and nouns in a sentence . The notion of karaka relations is central to the Paninian model . The karaka relations are syntactico-semantic ( or semantico-syntactic ) relations between the verbals and other related constituents in a sentence . They by themselves do not give the semantics . Instead they specify relations which mediate between vibhakti of nominals and verb forms on one hand and semantic relations on the other ( Kiparsky , 1982 ) ( Cardona ( 1976 ) , ( 1988 ) ) . See Fig . 1 . Two of the important karakas are karta karaka and karma karaka . Frequently , the karta karaka maps to agent theta role , and the karma to theme or goal theta role . Here we will not argue for the linguistic significance of karaka relations and differences with theta relations , as that has been done elsewhere ( Bharati et al . ( 1990 ) and ( 1992 ) ) . In summary , karta karaka is that participant in the action that is most independent . At times , it turns out to be the agent . But that need not be so . Thus , ' boy ' and ' key ' are respectively the karta karakas in the following sentences 1The Paninian framework was originally designed more than two millennia ago for writing a grammar of Sanskrit ; it has been adapted by us to deal with modern Indian languages . The boy opened the lock . The key opened the lock . 
Note that in the first sentence , the karta ( boy ) maps to agent theta role , while in the second , karta ( key ) maps to instrument theta role . As part of this framework , a mapping is specified between karaka relations and vibhakti ( which covers collectively case endings , post-positional markers , etc . ) . This mapping between karakas and vibhakti depends on the verb and its tense aspect modality ( TAM ) label . The mapping is represented by two structures : default karaka charts and karaka chart transformations . The default karaka chart for a verb or a class of verbs gives the mapping for the TAM label tA_hE called basic . It specifies the vibhakti permitted for the applicable karaka relations for a verb when the verb has the basic TAM label . This basic TAM label roughly corresponds to present indefinite tense and is purely syntactic in nature . For other TAM labels there are karaka chart transformation rules . Thus , for a given verb with some TAM label , appropriate karaka chart can be obtained using its basic karaka chart and the transformation rule depending on its TAM label . 2 In Hindi for instance , the basic TAM label is tA_hE ( which roughly stands for the present indefinite ) . The default karaka chart for three of the karakas is given in Fig . 2 . This explains the vibhaktis in sentences A.1 to A.2 . In A.1 and A.2 , ' Ram ' is karta and ' Mohan ' is karma , because of their vibhakti markers \u00a2 and ko , respectively . 3 ( Note that ' rAma ' is followed by \u00a2 or empty postposition , and ' mohana ' by ' ko ' postposition . ) A.1 rAma mohana ko pltatA hE. 2The transformation rules are a device to represent the karaka charts more compactly . However , as is obvious , they affect the karaka charts and not the parse structure . Therefore , they are different from transformational grammars . Formally , these rules can be eliminated by having separate karaka charts for each TAM label . But one would miss the linguistic generalization of relating the karaka charts based on TAM labels in a systematic manner . 3In the present examples karta and karma turn out to be agent and theme , respectively . Fig . 3 gives some transformation rules for the default mapping for Hindi . It explains the vibhakti in sentences B.1 to B.4 , where Ram is the karta but has different vibhaktis , \u00a2 , he , ko , se , respectively . In each of the sentences , if we transform the karaka chart of Fig . 2 by the transformation rules of Fig . 3 , we get the desired vibhakti for the karta Ram . In general , the transformations affect not only the vibhakti of karta but also that of other karakas . They also ' delete ' karaka roles at times , that is , the ' deleted ' karaka roles must not occur in the sentence . The Paninian framework is similar to the broad class of case based grammars . What distinguishes the Paninian framework is the use of karaka relations rather than theta roles , and the neat dependence of the karaka vibhakti mapping on TAMs and the transformation rules , in case of Indian languages . The same principle also solves the problem of karaka assignment for complex sentences ( Discussed later in Sec . 3 . ) In summary , this paper makes several contributions : \u2022 It shows that the Paninian framework applied to modern Indian languages gives an elegant account of the relation between vibhakti and karaka roles . The mapping is elegant and compact . 8The modified verb in the present sentences is the main verb . 
\u2022 The same basic account also explains active-passives and complex sentences in these languages . This suggests that the solution is not just ad hoc but has a deeper underlying unity . \u2022 It shows how a constraint based parser can be built using the framework . The constraints problem reduces to bipartite graph matching problem because of the nature of constraints . Efficient solutions are known for these problems . It is interesting to observe that such a parser ( designed for free word order languages ) compares well in asymptotic time complexity with the parser for context free grammars ( CFGs ) which are basically designed for positional languages . A parser for Indian languages based on the Paninian theory is operational as part of a machine translation system . As part of our future work , we plan to apply this framework to other free word order languages ( i.e. , other than the Indian languages ) . This theory can also be attempted on positional languages such as English . What is needed is the concept of generalized vibhakti in which position of a word gets incorporated in vibhakti . Thus , for a pure free word order language , the generalized vibhakti contains pre- or post-positional markers , whereas for a pure positional language it contains position information of a word ( group ) . Clearly , for most natural languages , generalized vibhakti would contain information pertaining to both markers and position .", "challenge": "Although most human languages are order-free, most existing parsers use positional information.", "approach": "They propose to apply the Paninian framework originally designed for Sanskrit to parse Indian languages.", "outcome": "They show that the Paninian framework can be successfully applied to Indian languages such as Hindi."} +{"id": "2022.naacl-main.192", "document": "We present a novel feature attribution method for explaining text classifiers , and analyze it in the context of hate speech detection . Although feature attribution models usually provide a single importance score for each token , we instead provide two complementary and theoretically-grounded scores -necessity and sufficiency -resulting in more informative explanations . We propose a transparent method that calculates these values by generating explicit perturbations of the input text , allowing the importance scores themselves to be explainable . We employ our method to explain the predictions of different hate speech detection models on the same set of curated examples from a test suite , and show that different values of necessity and sufficiency for identity terms correspond to different kinds of false positive errors , exposing sources of classifier bias against marginalized groups . Explainability in AI ( XAI ) is critical in reaching various objectives during a system 's development and deployment , including debugging the system , ensuring its fairness , safety and security , and understanding and appealing its decisions by end-users ( Vaughan and Wallach , 2021 ; Luo et al . , 2021 ) . A popular class of local explanation techniques is feature attribution methods , where the aim is to provide scores for each feature according to how important that feature is for the classifier decision for a given input . From an intuitive perspective , one issue with feature attribution scores is that it is not always clear how to interpret the assigned importance in operational terms . Specifically , saying that a feature is ' important ' might translate to two different predictions . 
The first interpretation is that if an important feature value is changed , then the prediction will change . The second interpretation is that , as long as the feature remains , the prediction will not change . The former interpretation corresponds to the necessity of the feature value , while the latter corresponds to its sufficiency . To further illustrate the difference between necessity and sufficiency , we take an example from hate speech detection . Consider the utterance \" I hate women \" . For a perfect model , the token ' women ' should have low sufficiency for a positive prediction , since merely the mention of this identity group should not trigger a hateful prediction . However , this token should have fairly high necessity , since a target identity is required for an abusive utterance to count as hate speech ( e.g. , \" I hate oranges \" should not be classified as hate speech ) . In this paper , we develop a method to estimate the necessity and sufficiency of each word in the input , as explanations for a binary text classifier 's decisions . Model-agnostic feature attribution methods like ours often perturb the input to be explained , obtain the predictions of the model for the perturbed instances , and aggregate the results to make conclusions about which input features are more influential on the model decision . When applying these methods to textual data , it is common to either drop the chosen tokens , or replace them with the mask token for those models that have been trained by fine-tuning a masked language model such as BERT ( Devlin et al . , 2019 ) . However , deleting tokens raises the possibility that a large portion of the perturbed examples are not fluent , and lie well outside the data manifold . Replacing some tokens with the mask token partially remedies this issue , however it raises others . Firstly , the explanation method ceases to be truly model-agnostic . Secondly , a masked sentence is in-distribution for the pre-trained model but out-of-distribution for the fine-tuned model , because the learned manifolds deviate from those formed during pre-training in the fine-tuning step . To avoid these problems we use a generative model to replace tokens with most probable n-grams . Generating perturbations in this way ensures that the perturbed instances are close to the true data manifold . It also provides an additional layer of transparency to the user , so they can decide whether to trust the explanation by checking how reasonable the perturbed examples seem . Although supervised discriminative models rely fundamentally on correlations within the dataset , different models might rely on different correlations more or less depending on model architecture and biases , training methods , and other idiosyncrasies . To capture the distinction between correlations in the data and the direct causes of the prediction , we turn to the notion of interventions from causal inference ( Pearl , 2009 ) . Previous work employing causal definitions of necessity and sufficiency for XAI have assumed tabular data with binary or numerical features . The situation in NLP is much more complex , since each feature is a word in context , and we have no concept of ' flipping ' or ' increasing ' feature values ( as in binary data and numerical data , respectively ) . 
Instead , our method generates perturbations of the input text that have high probability of being fluent while minimizing the probability that the generated text will also be a direct cause of the prediction we aim to explain . As our application domain we choose hate speech detection , a prominent NLP task with significant social outcomes ( Fortuna and Nunes , 2018 ; Kiritchenko et al . , 2021 ) . It has been shown that contemporary hate speech classifiers tend to learn spurious correlations , including those between identity terms and the positive ( hate ) class , which can result in further discrimination of already marginalized groups ( Dixon et al . , 2018 ; Park et al . , 2018 ; Garg et al . , 2019 ) . We apply our explainability metrics to test classifiers ' fairness towards identity-based groups ( e.g. , women , Muslims ) . We show how necessity and sufficiency metrics calculated for identity terms over hateful sentences can explain the classifier 's behaviour on non-hateful statements , highlighting classifiers ' tendencies to over-rely on the presence of identity terms or to ignore the characteristics of the object of abuse ( e.g. , protected identity groups vs. non-human entities ) . The contributions of this work are as follows : \u2022 We present the first methodology for calculating necessity and sufficiency metrics for text data as a feature attribution method . Arguably , this dual explainability measure is more informative and allows for deeper insights into a model 's inner workings than traditional single metrics . \u2022 We use a generative model for producing input perturbations to avoid the out-of-distribution prediction issues that emerge with token deletion and masking techniques . \u2022 To evaluate the new methodology , we apply it to the task of explaining hate speech classification , and demonstrate that it can detect and explain biases in hate speech classifiers . We make the implementation code freely available to researchers to facilitate further advancement of explainability techniques for NLP.1 This work is a step towards more informative and transparent feature attribution metrics for explaining text classifiers . We argue that standard token importance metrics can be ambiguous in terms of what ' importance ' means . Instead , we adapt the theoretically-grounded concepts of necessity and sufficiency to explain text classifiers . Besides being more informative , the process of generating these two metrics is intuitive and can be explained to lay people in terms of \" how much the perturbations in input change the output of the classifier \" . Moreover , the input perturbations can be presented to the users , leading to a transparent and understandable explainability framework . Considering the complexities of perturbing textual features , we introduced a practical implementation to compute the necessity and sufficiency of the input tokens . Taking hate speech detection as an example application , we showed that sufficiency and necessity can be used to explain the expected differences between a classifier that is intended to detect identity-based hate speech and those trained for detecting general abuse . We also leveraged these metrics to explain the observed over-sensitivity and under-sensitivity to mentions of target groups , issues that are tightly related to fairness in hate speech detection . 
While the current work focused on binary hate speech detection for English-language social media posts , in future work , we will explore the effectiveness of these metrics in generating explanations for other applications and languages . We will also explore how the new metrics can improve the debugging of the models or communicating the model 's decisionmaking process to the end-users .", "challenge": "Feature attribution methods are used for local explanation but provided scores are not always clear on how to interpret the assigned importance in operational terms.", "approach": "They propose a feature attribution method for text classifiers which computes necessity and sufficiency metrics coupled with a generative model-based explicit text perturbation method.", "outcome": "Experiments with different hate speech detection models towards identity-based groups show that the proposed method can explain the classifier's behaviour on non-hateful statements."} +{"id": "P06-1093", "document": "Call centers handle customer queries from various domains such as computer sales and support , mobile phones , car rental , etc . Each such domain generally has a domain model which is essential to handle customer complaints . These models contain common problem categories , typical customer issues and their solutions , greeting styles . Currently these models are manually created over time . Towards this , we propose an unsupervised technique to generate domain models automatically from call transcriptions . We use a state of the art Automatic Speech Recognition system to transcribe the calls between agents and customers , which still results in high word error rates ( 40 % ) and show that even from these noisy transcriptions of calls we can automatically build a domain model . The domain model is comprised of primarily a topic taxonomy where every node is characterized by topic(s ) , typical Questions-Answers ( Q&As ) , typical actions and call statistics . We show how such a domain model can be used for topic identification of unseen calls . We also propose applications for aiding agents while handling calls and for agent monitoring based on the domain model . Call center is a general term for help desks , information lines and customer service centers . Many companies today operate call centers to handle customer issues . It includes dialog-based ( both voice and online chat ) and email support a user receives from a professional agent . Call centers have become a central focus of most companies as they allow them to be in direct contact with their customers to solve product-related and servicesrelated issues and also for grievance redress . A typical call center agent handles over a hundred calls in a day . Gigabytes of data is produced every day in the form of speech audio , speech transcripts , email , etc . This data is valuable for doing analysis at many levels , e.g. , to obtain statistics about the type of problems and issues associated with different products and services . This data can also be used to evaluate agents and train them to improve their performance . Today 's call centers handle a wide variety of domains such as computer sales and support , mobile phones and apparels . To analyze the calls in any domain , analysts need to identify the key issues in the domain . Further , there may be variations within a domain , say mobile phones , based on the service providers . The analysts generate a domain model through inspection of the call records ( audio , transcripts and emails ) . 
Such a model can include a listing of the call categories , types of problems solved in each category , listing of the customer issues , typical questions-answers , appropriate call opening and closing styles , etc . In essence , these models provide a structured view of the domain . Manually building such models for various domains may become prohibitively resource intensive . Another important point to note is that these models are dynamic in nature and change over time . As a new version of a mobile phone is introduced , software is launched in a country , a sudden attack of a virus , the model may need to be refined . Hence , an automated way of creating and maintaining such a model is important . In this paper , we have tried to formalize the essential aspects of a domain model . It comprises of primarily a topic taxonomy where every node is characterized by topic(s ) , typical Questions-Answers ( Q&As ) , typical actions and call statistics . To build the model , we first automatically transcribe the calls . Current automatic speech recognition technology for telephone calls have moderate to high word error rates ( Padmanabhan et al . , 2002 ) . We applied various feature engineering techniques to combat the noise introduced by the speech recognition system and applied text clustering techniques to group topically similar calls together . Using clustering at different granularity and identifying the relationship between groups at different granularity we generate a taxonomy of call types . This taxonomy is augmented with various meta information related to each node as mentioned above . Such a model can be used for identification of topics of unseen calls . Towards this , we envision an aiding tool for agents to increase agent effectiveness and an administrative tool for agent appraisal and training . Organization of the paper : We start by describing related work in relevant areas . Section 3 talks about the call center dataset and the speech recognition system used . The following section contains the definition and describes an unsupervised mechanism for building a topical model from automatically transcribed calls . Section 5 demonstrates the usability of such a topical model and proposes possible applications . Section 6 concludes the paper . We have shown that it is possible to retrieve useful information from noisy transcriptions of call center voice conversations . We have shown that the extracted information can be put in the form of a model that succinctly captures the domain and provides a comprehensive view of it . We briefly showed through experiments that this model is an accurate description of the domain . We have also suggested useful scenarios where the model can be used to aid and improve call center performance . A call center handles several hundred-thousand calls per year in various domains . It is very difficult to monitor the performance based on manual processing of the calls . The framework presented in this paper , allows a large part of this work to be automated . A domain specific model that is automatically learnt and updated based on the voice conversations allows the call center to identify problem areas quickly and allocate resources more effectively . In future we would like to semantically cluster the topic specific information so that redundant topics are eliminated from the list . We can use Automatic Taxonomy Generation(ATG ) algorithms for document summarization ( Kummamuru et al . , 2004 ) to build topic taxonomies . 
We would also like to link our model to technical manuals , catalogs , etc . already available on the different topics in the given domain .", "challenge": "Domain models containing problem categories, typical issues, and greeting styles are manually constructed through inspection of call records to reduce the workload at call centers.", "approach": "They propose an unsupervised technique to generate domain models automatically from call transcriptions obtained from an Automatic Speech Recognition system and clustering.", "outcome": "They show that while the automatically obtained transcriptions are noisy, the proposed method can successfully build a domain model to provide a comprehensive view."} +{"id": "P14-1123", "document": "Designing measures that capture various aspects of language ability is a central task in the design of systems for automatic scoring of spontaneous speech . In this study , we address a key aspect of language proficiency assessment -syntactic complexity . We propose a novel measure of syntactic complexity for spontaneous speech that shows optimum empirical performance on real world data in multiple ways . First , it is both robust and reliable , producing automatic scores that agree well with human rating compared to the stateof-the-art . Second , the measure makes sense theoretically , both from algorithmic and native language acquisition points of view . Assessment of a speaker 's proficiency in a second language is the main task in the domain of automatic evaluation of spontaneous speech ( Zechner et al . , 2009 ) . Prior studies in language acquisition and second language research have conclusively shown that proficiency in a second language is characterized by several factors , some of which are , fluency in language production , pronunciation accuracy , choice of vocabulary , grammatical sophistication and accuracy . The design of automated scoring systems for non-native speaker speaking proficiency is guided by these studies in the choice of pertinent objective measures of these key aspects of language proficiency . The focus of this study is the design and performance analysis of a measure of the syntactic complexity of non-native English responses for use in automatic scoring systems . The state-ofthe art automated scoring system for spontaneous speech ( Zechner et al . , 2009 ; Higgins et al . , 2011 ) currently uses measures of fluency and pronunciation ( acoustic aspects ) to produce scores that are in reasonable agreement with human-rated scores of proficiency . Despite its good performance , there is a need to extend its coverage to higher order aspects of language ability . Fluency and pronunciation may , by themselves , already be good indicators of proficiency in non-native speakers , but from a construct validity perspective1 , it is necessary that an automatic assessment model measure higher-order aspects of language proficiency . Syntactic complexity is one such aspect of proficiency . By \" syntactic complexity \" , we mean a learner 's ability to use a wide range of sophisticated grammatical structures . This study is different from studies that focus on capturing grammatical errors in non-native speakers ( Foster and Skehan , 1996 ; Iwashita et al . , 2008 ) . 
Instead of focusing on grammatical errors that are found to be highly representative of language proficiency , our interest is in capturing the range of forms that surface in language production and the degree of sophistication of such forms , collectively referred to as syntactic complexity in ( Ortega , 2003 ) . The choice and design of objective measures of language proficiency is governed by two crucial constraints : 1 . Validity : a measure should show high discriminative ability between various levels of language proficiency , and the scores produced by the use of this measure should show high agreement with human-assigned scores . 2 . Robustness : a measure should be derived automatically and should be robust to errors in the measure generation process . A critical impediment to the robustness constraint in the state-of-the-art is the multi-stage automated process , where errors in the speech recognition stage ( the very first stage ) affect subsequent stages . Guided by studies in second language development , we design a measure of syntactic complexity that captures patterns indicative of proficient and non-proficient grammatical structures by a shallow-analysis of spoken language , as opposed to a deep syntactic analysis , and analyze the performance of the automatic scoring model with its inclusion . We compare and contrast the proposed measure with that found to be optimum in Yoon and Bhat ( 2012 ) . Our primary contributions in this study are : \u2022 We show that the measure of syntactic complexity derived from a shallow-analysis of spoken utterances satisfies the design constraint of high discriminative ability between proficiency levels . In addition , including our proposed measure of syntactic complexity in an automatic scoring model results in a statistically significant performance gain over the state-of-the-art . \u2022 The proposed measure , derived through a completely automated process , satisfies the robustness criterion reasonably well . \u2022 In the domain of native language acquisition , the presence or absence of a grammatical structure indicates grammatical development . We observe that the proposed approach elegantly and effectively captures this presence-based criterion of grammatical development , since the feature indicative of presence or absence of a grammatical structure is optimal from an algorithmic point of view . Seeking alternatives to measuring syntactic complexity of spoken responses via syntactic parsers , we study a shallow-analysis based approach for use in automatic scoring . Empirically , we show that the proposed measure , based on a maximum entropy classification , satisfied the constraints of the design of an objective measure to a high degree . In addition , the proposed measure was found to be relatively robust to ASR errors . The measure outperformed a related measure of syntactic complexity ( also based on shallow-analysis of spoken response ) previously found to be well-suited for automatic scoring . Including the measure of syntactic complexity in an automatic scoring model resulted in statistically significant performance gains over the state-of-the-art . 
We also make an interesting observation that the impressionistic evaluation of syntactic complexity is better approximated by the presence or absence of grammar and usage patterns ( and not by their frequency of occurrence ) , an idea supported by studies in native language acquisition .", "challenge": "Existing approaches for automatic scoring of English language ability only measure fluency and pronunciation, missing higher order aspects of language ability.", "approach": "They propose a maximum entropy classification-based measure of syntactic complexity for spontaneous speech by a shallow-analysis of spoken language with discriminative ability between proficiency levels.", "outcome": "The proposed measure combined with an automatic scoring model outperforms the existing model and is shown to be relatively robust to ASR errors."} +{"id": "P11-1119", "document": "Dialogue act classification is a central challenge for dialogue systems . Although the importance of emotion in human dialogue is widely recognized , most dialogue act classification models make limited or no use of affective channels in dialogue act classification . This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users , in particular , facial expressions related to confusion . The findings indicate that the affect-enriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue . The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification . Dialogue systems aim to engage users in rich , adaptive natural language conversation . For these systems , understanding the role of a user 's utterance in the broader context of the dialogue is a key challenge ( Sridhar , Bangalore , & Narayanan , 2009 ) . Central to this endeavor is dialogue act classification , which categorizes the intention behind the user 's move ( e.g. , asking a question , providing declarative information ) . Automatic dialogue act classification has been the focus of a large body of research , and a variety of approaches , including sequential models ( Stolcke et al . , 2000 ) , vector-based models ( Sridhar , Bangalore , & Narayanan , 2009 ) , and most recently , feature-enhanced latent semantic analysis ( Di Eugenio , Xie , & Serafin , 2010 ) , have shown promise . These models may be further improved by leveraging regularities of the dialogue from both linguistic and extra-linguistic sources . Users ' expressions of emotion are one such source . Human interaction has long been understood to include rich phenomena consisting of verbal and nonverbal cues , with facial expressions playing a vital role ( Knapp & Hall , 2006 ; McNeill , 1992 ; Mehrabian , 2007 ; Russell , Bachorowski , & Fernandez-Dols , 2003 ; Schmidt & Cohn , 2001 ) . While the importance of emotional expressions in dialogue is widely recognized , the majority of dialogue act classification projects have focused either peripherally ( or not at all ) on emotion , such as by leveraging acoustic and prosodic features of spoken utterances to aid in online dialogue act classification ( Sridhar , Bangalore , & Narayanan , 2009 ) . 
Other research on emotion in dialogue has involved detecting affect and adapting to it within a dialogue system ( Forbes-Riley , Rotaru , Litman , & Tetreault , 2009 ; L\u00f3pez-C\u00f3zar , Silovsky , & Griol , 2010 ) , but this work has not explored leveraging affect information for automatic user dialogue act classification . Outside of dialogue , sentiment analysis within discourse is an active area of research ( L\u00f3pez-C\u00f3zar et al . , 2010 ) , but it is generally lim-ited to modeling textual features and not multimodal expressions of emotion such as facial actions . Such multimodal expressions have only just begun to be explored within corpus-based dialogue research ( Calvo & D'Mello , 2010 ; Cavicchio , 2009 ) . This paper presents a novel affect-enriched dialogue act classification approach that leverages knowledge of users ' facial expressions during computer-mediated textual human-human dialogue . Intuitively , the user 's affective state is a promising source of information that may help to distinguish between particular dialogue acts ( e.g. , a confused user may be more likely to ask a question ) . We focus specifically on occurrences of students ' confusion-related facial actions during taskoriented tutorial dialogue . Confusion was selected as the focus of this work for several reasons . First , confusion is known to be prevalent within tutoring , and its implications for student learning are thought to run deep ( Graesser , Lu , Olde , Cooper-Pye , & Whitten , 2005 ) . Second , while identifying the \" ground truth \" of emotion based on any external display by a user presents challenges , prior research has demonstrated a correlation between particular facial action units and confusion during learning ( Craig , D'Mello , Witherspoon , Sullins , & Graesser , 2004 ; D'Mello , Craig , Sullins , & Graesser , 2006 ; McDaniel et al . , 2007 ) . Finally , automatic facial action recognition technologies are developing rapidly , and confusion-related facial action events are among those that can be reliably recognized automatically ( Bartlett et al . , 2006 ; Cohn , Reed , Ambadar , Xiao , & Moriyama , 2004 ; Pantic & Bartlett , 2007 ; Zeng , Pantic , Roisman , & Huang , 2009 ) . This promising development bodes well for the feasibility of automatic real-time confusion detection within dialogue systems . Emotion plays a vital role in human interactions . In particular , the role of facial expressions in humanhuman dialogue is widely recognized . Facial expressions offer a promising channel for understanding the emotions experienced by users of dialogue systems , particularly given the ubiquity of webcam technologies and the increasing number of dialogue systems that are deployed on webcamenabled devices . This paper has reported on a first step toward using knowledge of user facial expressions to improve a dialogue act classification model for tutorial dialogue , and the results demonstrate that facial expressions hold great promise for distinguishing the pedagogically relevant dialogue act REQUEST FOR FEEDBACK , and the conversational moves of GROUNDING . These early findings highlight the importance of future work in this area . Dialogue act classification models have not fully leveraged some of the techniques emerging from work on sentiment analysis . These approaches may prove particularly useful for identifying emotions in dialogue utterances . 
Another important direction for future work involves more fully exploring the ways in which affect expression differs between textual and spoken dialogue . Finally , as automatic facial tagging technologies mature , they may prove powerful enough to enable broadly deployed dialogue systems to feasibly leverage facial expression data in the near future .", "challenge": "Dialogue act classification which is a central challenge for dialogue systems currently makes limited or no use of emotional expressions while they are widely recognized.", "approach": "They propose an affect-enriched classifier for task-oriented and computer-mediated textual human-human tutorial dialogues which uses confusion-related facial expressions by students.", "outcome": "The proposed classifier can leverage affective channels and improve on distinguishing user requests for feedback and grounding dialogue acts within a textual dialogue."} +{"id": "D14-1187", "document": "When Part-of-Speech annotated data is scarce , e.g. for under-resourced languages , one can turn to cross-lingual transfer and crawled dictionaries to collect partially supervised data . We cast this problem in the framework of ambiguous learning and show how to learn an accurate history-based model . Experiments on ten languages show significant improvements over prior state of the art performance . In the past two decades , supervised Machine Learning techniques have established new performance standards for many NLP tasks . Their success however crucially depends on the availability of annotated in-domain data , a not so common situation . This means that for many application domains and/or less-resourced languages , alternative ML techniques need to be designed to accommodate unannotated or partially annotated data . Several attempts have recently been made to mitigate the lack of annotated corpora using parallel data pairing a ( source ) text in a resource-rich language with its counterpart in a less-resourced language . By transferring labels from the source to the target , it becomes possible to obtain noisy , yet useful , annotations that can be used to train a model for the target language in a weakly supervised manner . This research trend was initiated by Yarowsky et al . ( 2001 ) , who consider the transfer of POS and other syntactic information , and further developed in ( Hwa et al . , 2005 ; Ganchev et al . , 2009 ) for syntactic dependencies , in ( Pad\u00f3 and Lapata , 2009 ; Kozhevnikov and Titov , 2013 ; van der Plas et al . , 2014 ) for semantic role labeling and in ( Kim et al . , 2012 ) for named-entity recognition , to name a few . Assuming that labels can actually be projected across languages , these techniques face the issue of extending standard supervised techniques with partial and/or uncertain labels in the presence of alignment noise . In comparison to the early approach of Yarowsky et al . ( 2001 ) in which POS are directly transferred , subject to heuristic filtering rules , recent works consider the integration of softer constraints using expectation regularization techniques ( Wang and Manning , 2014 ) , the combination of alignment-based POS transfer with additional information sources such as dictionaries ( Li et al . , 2012 ; T\u00e4ckstr\u00f6m et al . , 2013 ) ( Section 2 ) , or even the simultaneous use of both techniques ( Ganchev and Das , 2013 ) . In this paper , we reproduce the weakly supervised setting of T\u00e4ckstr\u00f6m et al . ( 2013 ) . 
By recasting this setting in the framework of ambiguous learning ( Bordes et al . , 2010 ; Cour et al . , 2011 ) ( Section 3 ) , we propose an alternative learning methodology and show that it improves the state of the art performance on a large array of languages ( Section 4 ) . Our analysis of the remaining errors suggests that in cross-lingual settings , improvements of error rates can have multiple causes and should be looked at with great care ( Section 4.2 ) . All tools and resources used in this study are available at http://perso.limsi.fr/ wisniews / ambiguous . In this paper , we have presented a novel learning methodology to learn from ambiguous supervision information , and used it to train several POS taggers . Using this method , we have been able to achieve performance that surpasses the best reported results , sometimes by a wide margin . Further work will attempt to better analyse these results , which could be caused by several subtle differences between HBAL and the baseline system . Nonetheless , these experiments confirm that cross-lingual projection of annotations have the potential to help in building very efficient POS taggers with very little monolingual supervision data . Our analysis of these results also suggests that , for this task , additional gains might be more easily obtained by fixing systematic biases introduced by conflicting mappings between tags or by train / test domain mismatch than by designing more sophisticated weakly supervised learners .", "challenge": "To mitigate the data scarcity problem, for POS tagging, there are some works that project labels to other lower-resource languages but it causes alignment noises.", "approach": "They propose to treat the cross-lingual POS tagging problem in ambiguous learning and train several POS taggers.", "outcome": "Their approach outperforms current best models sometimes by a large margin even with a little monolingual supervision."} +{"id": "P98-1010", "document": "Recognizing shallow linguistic patterns , such as basic syntactic relationships between words , is a common task in applied natural language and text processing . The common practice for approaching this task is by tedious manual definition of possible pattern structures , often in the form of regular expressions or finite automata . This paper presents a novel memory-based learning method that recognizes shallow patterns in new text based on a bracketed training corpus . The training data are stored as-is , in efficient suffix-tree data structures . Generalization is performed on-line at recognition time by comparing subsequences of the new text to positive and negative evidence in the corpus . This way , no information in the training is lost , as can happen in other learning systems that construct a single generalized model at the time of training . The paper presents experimental results for recognizing noun phrase , subject-verb and verb-object patterns in English . Since the learning approach enables easy porting to new domains , we plan to apply it to syntactic patterns in other languages and to sub-language patterns for information extraction . Identifying local patterns of syntactic sequences and relationships is a fundamental task in natural language processing ( NLP ) . Such patterns may correspond to syntactic phrases , like noun phrases , or to pairs of words that participate in a syntactic relationship , like the heads of a verb-object relation . 
Such patterns have been found useful in various application areas , including information extraction , text summarization , and bilingual alignment . Syntactic patterns are useful also for many basic computational linguistic tasks , such as statistical word similarity and various disambiguation problems . One approach for detecting syntactic patterns is to obtain a full parse of a sentence and then extract the required patterns . However , obtaining a complete parse tree for a sentence is difficult in many cases , and may not be necessary at all for identifying most instances of local syntactic patterns . An alternative approach is to avoid the complexity of full parsing and instead to rely only on local information . A variety of methods have been developed within this framework , known as shallow parsing , chunking , local parsing etc . ( e.g. , ( Abney , 1991 ; Greffenstette , 1993 ) ) . These works have shown that it is possible to identify most instances of local syntactic patterns by rules that examine only the pattern itself and its nearby context . Often , the rules are applied to sentences that were tagged by part-of-speech ( POS ) and are phrased by some form of regular expressions or finite state automata . Manual writing of local syntactic rules has become a common practice for many applications . However , writing rules is often tedious and time consuming . Furthermore , extending the rules to different languages or sub-language domains can require substantial resources and expertise that are often not available . As in many areas of NLP , a learning approach is appealing . Surprisingly , though , rather little work has been devoted to learning local syntactic patterns , mostly noun phrases ( Ramshaw and Marcus , 1995 ; Vilain and Day , 1996 ) . This paper presents a novel general learning approach for recognizing local sequential patterns , that may be perceived as falling within the memory-based learning paradigm . The method utilizes a part-of-speech tagged training corpus in which all instances of the target pattern are marked ( bracketed ) . The training data are stored as-is in suffix-tree data structures , which enable linear time searching for subsequences in the corpus . The memory-based nature of the presented algorithm stems from its deduction strategy : a new instance of the target pattern is recognized by examining the raw training corpus , searching for positive and negative evidence with respect to the given test sequence . No model is created for the training corpus , and the raw examples are not converted to any other representation . Consider the following example 1 . 1We use here the POS tags : DT = determiner , ADJ = adjective , ADV = adverb , CONJ = conjunction , VB = verb , PP = preposition , NN = singular noun , and NNP = plural noun . Suppose we want to decide whether the candidate sequence DT ADJ ADJ NN NNP is a noun phrase ( NP ) by comparing it to the training corpus . A good match would be if the entire sequence appears as-is several times in the corpus . However , due to data sparseness , an exact match can not always be expected . A somewhat weaker match may be obtained if we consider sub-parts of the candidate sequence ( called tiles ) . 
For example , suppose the corpus contains noun phrase instances with the following structures : ( i ) DT ADJ ADJ NN NN ( 2 ) DT ADJ NN NNP The first structure provides positive evidence that the sequence \" DT ADJ ADJ NN \" is a possible NP prefix while the second structure provides evidence for \" ADJ NN NNP \" being an NP suffix . Together , these two training instances provide positive evidence that covers the entire candidate . Considering evidence for sub-parts of the pattern enables us to generalize over the exact structures that are present in the corpus . Similarly , we also consider the negative evidence for such sub-parts by noting where they occur in the corpus without being a corresponding part of a target instance . The proposed method , as described in detail in the next section , formalizes this type of reasoning . It searches specialized data structures for both positive and negative evidence for sub-parts of the candidate structure , and considers additional factors such as context and evidence overlap . Section 3 presents experimental results for three target syntactic patterns in English , and Section 4 describes related work . We have presented a novel general schema and a particular instantiation of it for learning sequential patterns . Applying the method to three syntactic patterns in English yielded positive results , suggesting its applicability for recognizing local linguistic patterns . In future work we plan to investigate a datadriven approach for optimal selection and weighting of statistical features of candidate scores , as well as to apply the method to syntactic patterns of Hebrew and to domain-specific patterns for information extraction .", "challenge": "Existing approaches to linguistic pattern recognition have issues which emerge from difficulties in parse tree acquisition or tedious and time consuming manual rule writing.", "approach": "They propose a memory-based learning method that recognizes patterns by matching new texts against a bracketed training corpus stored in an efficient suffix-tree data structure.", "outcome": "The proposed model achieves positive results in recognizing the English noun phase, subject-verb and verb-object patterns."} +{"id": "N13-1049", "document": "In this paper , we study the problem of automatic enrichment of a morphologically underspecified treebank for Arabic , a morphologically rich language . We show that we can map from a tagset of size six to one with 485 tags at an accuracy rate of 94%-95 % . We can also identify the unspecified lemmas in the treebank with an accuracy over 97 % . Furthermore , we demonstrate that using our automatic annotations improves the performance of a state-of-the-art Arabic morphological tagger . Our approach combines a variety of techniques from corpus-based statistical models to linguistic rules that target specific phenomena . These results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically . Collections of manually-annotated morphological and syntactic analyses of sentences , or treebanks , are an important resource for building statistical parsing models or for syntax-aware approaches to applications such as machine translation . Rich treebank annotations have also been used for a variety of natural language processing ( NLP ) applications such as tokenization , diacritization , part-of-speech ( POS ) tagging , morphological disambiguation , base phrase chunking , and semantic role labeling . 
The development of a treebank with rich annotations is demanding in time and money , especially for morphologically complex languages . Consequently , the richer the annotation , the slower the annotation process and the smaller the size of the treebank . As such , a tradeoff is usually made between the size of the treebank and the richness of its annotations . In this paper , we investigate the possibility of automatically enriching the morphologically underspecified Columbia Arabic Treebank ( CATiB ) ( Habash and Roth , 2009 ; Habash et al . , 2009 ) with the more complex POS tags and lemmas used in the Penn Arabic Treebank ( PATB ) ( Maamouri et al . , 2004 ) . We employ a variety of techniques that range from corpus-based statistical models to handwritten rules based on linguistic observations . Our best method reaches accuracy rates of 94%-95 % on full POS tag identification . We can also identify the unspecified lemmas in CATiB with an accuracy over 97 % . 37 % of our POS tag errors are due to gold tree or gold POS errors . A learning curve experiment to evaluate the dependence of our method on annotated data shows that while the quality of some components may reduce sharply with less data ( 12 % absolute reduction in accuracy when using 1 32 of the data or some 10 K annotated words ) , the overall effect is a lot smaller ( 2 % absolute drop ) . These results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically . The rest of this paper is structured as follows : Section 2 presents related work ; Section 3 details various language background facts about Arabic and its treebanking ; Section 4 explains our approach ; and Section 5 presents and discusses our results . We have demonstrated that an underspecified version of an Arabic treebank can be fully specified for Arabic 's rich morphology automatically at an accuracy rate of 94%-95 % for POS tags and 97 % for lemmas . Our approach combines a variety of techniques from corpus-based statistical models ( which require some rich annotations ) to linguistic rules that target specific phenomena . Since the underspecified treebank is much faster to manually annotate than its fully specified version , these results suggest that the cost of treebanking can be reduced by designing underspecified treebanks that can be subsequently enriched automatically . In the future , we plan to extend the automatic enrichment effort to include more complex features such as empty nodes and semantic labels . We also plan to take the insights from this effort and apply them to treebanks of other languages . 
A small portion of a treebank that is fully annotated in rich format will of course be needed before we can apply these insights to other languages .", "challenge": "Developing a treebank with rich annotations for morphologically rich languages is time-consuming and expensive leading to a trade-off between the size and the annotation richness.", "approach": "They propose to automatically enrich the underspecified Arabic Treebank with complex POS tags and lemmas by employing techniques from corpus-based statistical models to rule-based methods.", "outcome": "Their methods achieve high accuracy on full POS tag and the unspecified lemmas identification indicating the Treebank annotation cost can be reduced by automatic methods."} +{"id": "2020.aacl-main.67", "document": "This paper evaluates the utility of Rhetorical Structure Theory ( RST ) trees and relations in discourse coherence evaluation . We show that incorporating silver-standard RST features can increase accuracy when classifying coherence . We demonstrate this through our tree-recursive neural model , namely RST-Recursive , which takes advantage of the text 's RST features produced by a state of the art RST parser . We evaluate our approach on the Grammarly Corpus for Discourse Coherence ( GCDC ) and show that when ensembled with the current state of the art , we can achieve the new state of the art accuracy on this benchmark . Furthermore , when deployed alone , RST-Recursive achieves competitive accuracy while having 62 % fewer parameters . Discourse coherence has been the subject of much research in Computational Linguistics thanks to its widespread applications ( Lai and Tetreault , 2018 ) . Most current methods can be described as either stemming from explicit representations based on the Centering Theory ( Grosz et al . , 1994 ) , or deep learning approaches that learn without the use of hand-crafted linguistic features . Our work explores a third research avenue based on the Rhetorical Structure Theory ( RST ) ( Mann and Thompson , 1988 ) . We hypothesize that texts of low / high coherence tend to adhere to different discourse structures . Thus , we pose that using even silver-standard RST features should help in separating coherent texts from incoherent ones . This stems from the definition of the coherence itselfas the writer of a document needs to follow specific rules for building a clear narrative or argument structure in which the role of each constituent of the document should be appropriate with respect * to its local and global context , and even existing discourse parsers should be able to predict a plausible structure that is consistent across all coherent documents . However , if a parser has difficulty interpreting a given document , it will be more likely to produce unrealistic trees with improbable patterns of discourse relations between constituents . This idea was first explored by Feng et al . ( 2014 ) , who followed an approach similar to Barzilay and Lapata ( 2008 ) by estimating entity transition likelihoods , but instead using discourse relations ( predicted by a state of the art discourse parser ( Feng and Hirst , 2014 ) ) that entities participate in as opposed to their grammatical roles . Their method achieved significant improvements in performance even when using silver-standard discourse trees , showing potential in the use of parsed RST features for classifying textual coherence . In this paper , we explore the usefulness of silverstandard parsed RST features in neural coherence classification . 
We propose two new methods , RST-Recursive and Ensemble . The former achieves reasonably good performance , only 2 % short of state of the art , while more robust with 62 % fewer parameters . The latter demonstrates the added advantage of RST features in improving classification accuracy of the existing state of the art methods by setting new state of the art performance with a modest but promising margin . This signifies that the document 's rhetorical structure is an important aspect of its perceived clarity . Naturally , this improvement in performance is bounded by the quality of parsed RST features and could increase as better discourse parsers are developed . In the future , exploring other RST-based architectures for coherence classification , as well as better RST ensemble schemes and improving RST parsing can be avenues of potentially fruitful research . Additional research on multipronged approaches that draw from Centering Theory , RST and deep learning all together can also be of value .", "challenge": "Existing methods for discourse coherence are either based on explicit representations from the Centering Theory or deep learning approaches without hand-crafted linguistic features.", "approach": "They propose a tree-recursive neural model with silver-standard RST features based on a hypothesis that low or high coherence adheres to different discourse structures.", "outcome": "The proposed model is competitive with the state-of-the-art model while having 62% fewer parameters and further boosts are achieved when ensembled with it."} +{"id": "P08-1083", "document": "Morphological disambiguation proceeds in 2 stages : ( 1 ) an analyzer provides all possible analyses for a given token and ( 2 ) a stochastic disambiguation module picks the most likely analysis in context . When the analyzer does not recognize a given token , we hit the problem of unknowns . In large scale corpora , unknowns appear at a rate of 5 to 10 % ( depending on the genre and the maturity of the lexicon ) . We address the task of computing the distribution p(t|w ) for unknown words for full morphological disambiguation in Hebrew . We introduce a novel algorithm that is language independent : it exploits a maximum entropy letters model trained over the known words observed in the corpus and the distribution of the unknown words in known tag contexts , through iterative approximation . The algorithm achieves 30 % error reduction on disambiguation of unknown words over a competitive baseline ( to a level of 70 % accurate full disambiguation of unknown words ) . We have also verified that taking advantage of a strong language-specific model of morphological patterns provides the same level of disambiguation . The algorithm we have developed exploits distributional information latent in a wide-coverage lexicon and large quantities of unlabeled data . The term unknowns denotes tokens in a text that can not be resolved in a given lexicon . For the task of full morphological analysis , the lexicon must provide all possible morphological analyses for any given token . In this case , unknown tokens can be categorized into two classes of missing information : unknown tokens are not recognized at all by the lexicon , and unknown analyses , where the set of analyses for a lexeme does not contain the correct analysis for a given token . Despite efforts on improving the underlying lexicon , unknowns typically represent 5 % to 10 % of the number of tokens in large-scale corpora . 
The alternative to continuously investing manual effort in improving the lexicon is to design methods to learn possible analyses for unknowns from observable features : their letter structure and their context . In this paper , we investigate the characteristics of Hebrew unknowns for full morphological analysis , and propose a new method for handling such unavoidable lack of information . Our method generates a distribution of possible analyses for unknowns . In our evaluation , these learned distributions include the correct analysis for unknown words in 85 % of the cases , contributing an error reduction of over 30 % over a competitive baseline for the overall task of full morphological analysis in Hebrew . The task of a morphological analyzer is to produce all possible analyses for a given token . In Hebrew , the analysis for each token is of the form lexeme-and-features : lemma , affixes , lexical category ( POS ) , and a set of inflection properties ( according to the POS ) -gender , number , person , status and tense . In this work , we refer to the morphological analyzer of MILA -the Knowledge Center for Processing Hebrew ( hereafter KC analyzer ) . It is a synthetic analyzer , composed of two data resources -a lexicon of about 2,400 lexemes , and a set of generation rules ( see ( Adler , 2007 , Section 4.2 ) ) . In addition , we use an unlabeled text corpus , composed of stories taken from three Hebrew daily newspapers ( Aruts 7 , Haaretz , The Marker ) , of 42 M tokens . We observed 3,561 different composite tags ( e.g. , noun-sing-fem-prepPrefix : be ) over this corpus . These 3,561 tags form the large tagset over which we train our learner . On the one hand , this tagset is much larger than the largest tagset used in English ( from 17 tags in most unsupervised POS tagging experiments , to the 46 tags of the WSJ corpus and the about 150 tags of the LOB corpus ) . On the other hand , our tagset is intrinsically factored as a set of dependent sub-features , which we explicitly represent . The task we address in this paper is morphological disambiguation : given a sentence , obtain the list of all possible analyses for each word from the analyzer , and disambiguate each word in context . On average , each token in the 42 M corpus is given 2.7 possible analyses by the analyzer ( much higher than the average 1.41 POS tag ambiguity reported in English ( Dermatas and Kokkinakis , 1995 ) ) . In previous work , we report disambiguation rates of 89 % for full morphological disambiguation ( using an unsupervised EM-HMM model ) and 92.5 % for part of speech and segmentation ( without assigning all the inflectional features of the words ) . In order to estimate the importance of unknowns in Hebrew , we analyze tokens in several aspects : ( 1 ) the number of unknown tokens , as observed on the corpus of 42 M tokens ; ( 2 ) a manual classification of a sample of 10 K unknown token types out of the 200 K unknown types identified in the corpus ; ( 3 ) the number of unknown analyses , based on an annotated corpus of 200 K tokens , and their classification . About 4.5 % of the 42 M token instances in the training corpus were unknown tokens ( 45 % of the 450 K token types ) .
For less edited text , such as random text sampled from the Web , the percentage is much higher -about 7.5 % . In order to classify these unknown tokens , we sampled 10 K unknown token types and examined them manually . The classification of these tokens with their distribution is shown in Table 1 3 . As can be seen , there are two main classes of unknown token types : Neologisms ( 32 % ) and Proper nouns ( 48 % ) , which cover about 80 % of the unknown token instances . The POS distribution of the unknown tokens of our annotated corpus is shown in Table 2 . As expected , most unknowns are open class words : proper names , nouns or adjectives . Regarding unknown analyses , in our annotated corpus , we found 3 % of the 100 K token instances were missing the correct analysis in the lexicon ( 3.65 % of the token types ) . The POS distribution of the unknown analyses is listed in Table 2 . The high rate of unknown analyses for prepositions at about 3 % is a specific phenomenon in Hebrew , where prepositions are often prefixes agglutinated to the first word of the noun phrase they head . We observe the very low rate of unknown verbs ( 2 % ) -which are well marked morphologically in Hebrew , and where the rate of neologism introduction seems quite low . This evidence illustrates the need for resolution of unknowns : The naive policy of selecting ' proper name ' for all unknowns will cover only half of the errors caused by unknown tokens , i.e. , 30 % of the whole unknown tokens and analyses . The other 70 % of the unknowns ( 5.3 % of the words in the text in our experiments ) will be assigned a wrong tag . As a result of this observation , our strategy is to focus on full morphological analysis for unknown tokens and apply a proper name classifier for unknown analyses and unknown tokens . In this paper , we investigate various methods for achieving full morphological analysis distribution for unknown tokens . The methods are not based on an annotated corpus , nor on hand-crafted rules , but instead exploit the distribution of words in an available lexicon and the letter similarity of the unknown words with known words . We have addressed the task of computing the distribution p(t|w ) for unknown words for full morphological disambiguation in Hebrew . The algorithm we have proposed is language independent : it exploits a maximum entropy letters model trained over the known words observed in the corpus and the distribution of the unknown words in known tag contexts , through iterative approximation . The algorithm achieves 30 % error reduction on disambiguation of unknown words over a competitive baseline ( to a level of 70 % accurate full disambiguation of unknown words ) . We have also verified that taking advantage of a strong language-specific model of morphological patterns provides the same level of disambiguation . The algorithm we have developed exploits distributional information latent in a wide-coverage lexicon and large quantities of unlabeled data . We observe that the task of analyzing unknown tokens for POS in Hebrew remains challenging when compared with English ( 78 % vs. 85 % ) . 
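As a rough illustration of the letters-model component described for this task (a maximum entropy model over the characters of known words, used to assign a tag distribution p(t|w) to unknown words), the sketch below trains a logistic regression classifier on prefix/suffix features of a toy lexicon. The tagset, data, and feature set are invented, and the paper's iterative, context-based refinement of these distributions is omitted.

```python
# Sketch of a "letters model" for p(t|w): a maximum-entropy (logistic regression)
# classifier over prefix/suffix character features of known words, applied to
# unknown words. The toy lexicon and tags are invented; the paper additionally
# refines these distributions with tag-context statistics, which is omitted here.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def char_features(word, n=3):
    feats = {"len>5": len(word) > 5}
    for k in range(1, n + 1):
        feats[f"pre{k}={word[:k]}"] = True
        feats[f"suf{k}={word[-k:]}"] = True
    return feats

# Toy "known" lexicon: (surface form, tag) pairs standing in for corpus observations.
known = [("walked", "VERB"), ("talked", "VERB"), ("jumped", "VERB"),
         ("walking", "VERB"), ("houses", "NOUN"), ("tables", "NOUN"),
         ("papers", "NOUN"), ("quickly", "ADV"), ("slowly", "ADV")]

vec = DictVectorizer()
X = vec.fit_transform([char_features(w) for w, _ in known])
y = [t for _, t in known]

maxent = LogisticRegression(max_iter=1000).fit(X, y)

# p(t | w) for an unknown word, estimated from its letters alone.
unknown = "plorked"
probs = maxent.predict_proba(vec.transform([char_features(unknown)]))[0]
for tag, p in sorted(zip(maxent.classes_, probs), key=lambda x: -x[1]):
    print(f"p({tag} | {unknown}) = {p:.2f}")
```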
We hypothesize this is due to the highly ambiguous pattern of prefixation that occurs widely in Hebrew and are currently investigating syntagmatic models that exploit the specific nature of agglutinated prefixes in Hebrew .", "challenge": "In a large corpus, 5 to 10% of tokens are unknown to analyzers used for morphological analysis and cannot be resolved in a given lexicon.", "approach": "They propose a language independen algorithm which exploits a maximum entropy letters model trained over known words and gives distributions of unknowns by iterative approximation.", "outcome": "By evaluations in Hebrew, the proposed model achieves a 30% error reduction on unknown tokens over a baseline and is comparable to a language-specific model."} +{"id": "P19-1184", "document": "This paper introduces the PhotoBook dataset , a large-scale collection of visually-grounded , task-oriented dialogues in English designed to investigate shared dialogue history accumulating during conversation . Taking inspiration from seminal work on dialogue analysis , we propose a data-collection task formulated as a collaborative game prompting two online participants to refer to images utilising both their visual context as well as previously established referring expressions . We provide a detailed description of the task setup and a thorough analysis of the 2,500 dialogues collected . To further illustrate the novel features of the dataset , we propose a baseline model for reference resolution which uses a simple method to take into account shared information accumulated in a reference chain . Our results show that this information is particularly important to resolve later descriptions and underline the need to develop more sophisticated models of common ground in dialogue interaction . 1 The past few years have seen an increasing interest in developing computational agents for visually grounded dialogue , the task of using natural language to communicate about visual content in a multi-agent setup . The models developed for this task often focus on specific aspects such as image labelling ( Mao et al . , 2016 ; Vedantam et al . , 2017 ) , object reference ( Kazemzadeh et al . , 2014 ; De Vries et al . , 2017a ) , visual question answering ( Antol et al . , 2015 ) , and first attempts of visual dialogue proper ( Das et al . , 2017 ) , but fail to produce consistent outputs over a conversation . We hypothesise that one of the main reasons for this shortcoming is the models ' inability to effectively utilise dialogue history . Human interlocutors are known to collaboratively establish a shared repository of mutual information during a conversation ( Clark and Wilkes-Gibbs , 1986 ; Clark , 1996 ; Brennan and Clark , 1996 ) . This common ground ( Stalnaker , 1978 ) then is used to optimise understanding and communication efficiency . Equipping artificial dialogue agents with a similar representation of dialogue context thus is a pivotal next step in improving the quality of their dialogue output . To facilitate progress towards more consistent and effective conversation models , we introduce the PhotoBook dataset : a large collection of 2,500 human-human goal-oriented English conversations between two participants , who are asked to identify shared images in their respective photo books by exchanging messages via written chat . 
This setup takes inspiration from experimental paradigms extensively used within the psycholinguistics literature to investigate partnerspecific common ground ( for an overview , see Brown-Schmidt et al . , 2015 ) , adapting them to the requirements imposed by online crowdsourcing methods . The task is formulated as a game consisting of five rounds . Figure 1 shows an example of a participant 's display . Over the five rounds of a game , a selection of previously displayed images will be visible again , prompting participants to re-refer to images utilising both their visual context as well as previously established referring expressions . The resulting dialogue data therefore allows for tracking the common ground developing between dialogue participants . We describe in detail the PhotoBook task and the data collection , and present a thorough analysis of the dialogues in the dataset . In addition , to showcase how the new dataset may be exploited for computational modelling , we propose a reference resolution baseline model trained to identify target images being discussed in a given dialogue segment . The model uses a simple method to take into account information accumulated in a reference chain . Our results show that this information is particularly important to resolve later descriptions and highlight the importance of developing more sophisticated models of common ground in dialogue interaction . The PhotoBook dataset , together with the data collection protocol , the automatically extracted reference chains , and the code used for our analyses and models are available at the following site : https://dmg-photobook.github.io . We have presented the first large-scale dataset of goal-oriented , visually grounded dialogues for investigating shared linguistic history . Through the data collection 's task setup , participants repeatedly refer to a controlled set of target images , which allows them to improve task efficiency if they utilise their developing common ground and establish conceptual pacts ( Brennan and Clark , 1996 ) on referring expressions . The collected dialogues exhibit a significant shortening of utterances throughout a game , with final referring expressions starkly differing from both standard image captions and initial descriptions . To illustrate the potential of the dataset , we trained a baseline reference resolution model and showed that information accumulated over a reference chain helps to resolve later descriptions . Our results suggest that more sophisticated models are needed to fully exploit shared linguistic history . The current paper showcases only some of the aspects of the PhotoBook dataset , which we hereby release to the public ( https:// dmg-photobook.github.io ) . 
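A minimal, hedged sketch of the chain-aware reference-resolution idea described for the PhotoBook baseline above: candidate images are scored against a query that mixes the current dialogue segment with the mean of earlier mentions in the reference chain. The vectors and the mixing weight `alpha` are placeholders, not the paper's learned encoders or visual features.

```python
# Minimal sketch of chain-aware reference resolution: candidate images are scored
# against a query built from the current dialogue segment plus the accumulated
# reference chain. Vectors are random placeholders; in the paper the segment and
# chain would be encoded by a learned model and images by visual features.
import numpy as np

rng = np.random.default_rng(1)
DIM = 16

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def resolve(segment_vec, chain_vecs, candidate_images, alpha=0.5):
    """Mix the current segment with the mean of earlier mentions (the chain),
    then pick the candidate image with the highest cosine similarity."""
    if chain_vecs:
        chain_mean = np.mean(chain_vecs, axis=0)
        query = alpha * segment_vec + (1 - alpha) * chain_mean
    else:
        query = segment_vec
    scores = {img_id: cosine(query, feat) for img_id, feat in candidate_images.items()}
    return max(scores, key=scores.get), scores

candidates = {f"img_{i}": rng.normal(size=DIM) for i in range(6)}
segment = rng.normal(size=DIM)
chain = [candidates["img_2"] + 0.1 * rng.normal(size=DIM)]  # earlier mentions of img_2

best, scores = resolve(segment, chain, candidates)
print("predicted target:", best)
```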
In future work , the data can be used to further investigate common ground and conceptual pacts ; be extended through manual annotations for a more thorough linguistic analysis of co-reference chains ; exploit the combination of vision and language to develop computational models for referring expression generation ; or use the PhotoBook task in the ParlAI framework for Turing-Test-like evaluation of dialogue agents .", "challenge": "Existing computational agents for visually grounded dialogue fail to produce consistent outputs because of models' inability to effectively utilise dialogue history.", "approach": "They present a collection of visually-grounded, task-oriented dialogue in English from a collaborative game with two online participants and a baseline model for reference resolution.", "outcome": "Their experiments on the proposed dataset and model show that shared information accumulated in a reference chain is important to resolve later descriptions."} +{"id": "N10-1002", "document": "In this paper , we present an innovative chart mining technique for improving parse coverage based on partial parse outputs from precision grammars . The general approach of mining features from partial analyses is applicable to a range of lexical acquisition tasks , and is particularly suited to domain-specific lexical tuning and lexical acquisition using lowcoverage grammars . As an illustration of the functionality of our proposed technique , we develop a lexical acquisition model for English verb particle constructions which operates over unlexicalised features mined from a partial parsing chart . The proposed technique is shown to outperform a state-of-the-art parser over the target task , despite being based on relatively simplistic features . Parsing with precision grammars is increasingly achieving broad coverage over open-domain texts for a range of constraint-based frameworks ( e.g. , TAG , LFG , HPSG and CCG ) , and is being used in real-world applications including information extraction , question answering , grammar checking and machine translation ( Uszkoreit , 2002 ; Oepen et al . , 2004 ; Frank et al . , 2006 ; Zhang and Kordoni , 2008 ; MacKinlay et al . , 2009 ) . In this context , a \" precision grammar \" is a grammar which has been engineered to model grammaticality , and contrasts with a treebank-induced grammar , for example . Inevitably , however , such applications demand complete parsing outputs , based on the assumption that the text under investigation will be completely analysable by the grammar . As precision grammars generally make strong assumptions about complete lexical coverage and grammaticality of the input , their utility is limited over noisy or domain-specific data . This lack of complete coverage can make parsing with precision grammars less attractive than parsing with shallower methods . One technique that has been successfully applied to improve parser and grammar coverage over a given corpus is error mining ( van Noord , 2004 ; de Kok et al . , 2009 ) , whereby n-grams with low \" parsability \" are gathered from the large-scale output of a parser as an indication of parser or ( precision ) grammar errors . However , error mining is very much oriented towards grammar engineering : its results are a mixture of different ( mistreated ) linguistic phenomena together with engineering errors for the grammar engineer to work through and act upon . 
Additionally , it generally does not provide any insight into the cause of the parser failure , and it is difficult to identify specific language phenomena from the output . In this paper , we instead propose a chart mining technique that works on intermediate parsing results from a parsing chart . In essence , the method analyses the validity of different analyses for words or constructions based on the \" lifetime \" and probability of each within the chart , combining the constraints of the grammar with probabilities to evaluate the plausibility of each . For purposes of exemplification of the proposed technique , we apply chart mining to a deep lexical acquisition ( DLA ) task , using a maximum entropybased prediction model trained over a seed lexicon and treebank . The experimental set up is the following : given a set of sentences containing putative instances of English verb particle constructions , extract a list of non-compositional VPCs optionally with valence information . For comparison , we parse the same sentence set using a state-of-the-art statistical parser , and extract the VPCs from the parser output . Our results show that our chart mining method produces a model which is superior to the treebank parser . To our knowledge , the only other work that has looked at partial parsing results of precision grammars as a means of linguistic error analysis is that of Kiefer et al . ( 1999 ) and Zhang et al . ( 2007a ) , where partial parsing models were proposed to select a set of passive edges that together cover the input sequence . Compared to these approaches , our proposed chart mining technique is more general and can be adapted to specific tasks and domains . While we experiment exclusively with an HPSG grammar in this paper , it is important to note that the proposed method can be applied to any grammar formalism which is compatible with chart parsing , and where it is possible to describe an unlexicalised lexical entry for the different categories of lexical item that are to be extracted ( see Section 3.2 for details ) . The remainder of the paper is organised as follows . Section 2 defines the task of VPC extraction . Section 3 presents the chart mining technique and the feature extraction process for the VPC extraction task . Section 4 evaluates the model performance with comparison to two competitor models over several different measures . Section 5 further discusses the general applicability of chart mining . Finally , Section 6 concludes the paper . We have proposed a chart mining technique for lexical acquisition based on partial parsing with precision grammars . We applied the proposed method to the task of extracting English verb particle constructions from a prescribed set of corpus instances . 
Our results showed that simple unlexicalised features mined from the chart can be used to effectively extract VPCs , and that the model outperforms a probabilistic baseline and the Charniak parser at VPC extraction .", "challenge": "Error mining aids the limitations of precision parsers on noisy or domain-specific data but it does not provide the cause of the parse failure.", "approach": "They propose a chart mining technique to improve parse coverage based on partial parse outputs which works on outputs from precision parsers.", "outcome": "The proposed method outperforms the state-of-the-art statistical treebank parser on a deep lexical acquisition task despite being based on simple features."} +{"id": "N10-1034", "document": "A variety of query systems have been developed for interrogating parsed corpora , or treebanks . With the arrival of efficient , widecoverage parsers , it is feasible to create very large databases of trees . However , existing approaches that use in-memory search , or relational or XML database technologies , do not scale up . We describe a method for storage , indexing , and query of treebanks that uses an information retrieval engine . Several experiments with a large treebank demonstrate excellent scaling characteristics for a wide range of query types . This work facilitates the curation of much larger treebanks , and enables them to be used effectively in a variety of scientific and engineering tasks . The problem of representing and querying linguistic annotations has been an active area of research for several years . Much of the work has grown from efforts to curate large databases of annotated text such as treebanks , for use in developing and testing language technologies ( Marcus et al . , 1993 ; Abeill\u00e9 , 2003 ; Hockenmaier and Steedman , 2007 ) . At least a dozen linguistic tree query languages have been developed for interrogating treebanks ( see \u00a7 2 ) . While high quality syntactic parsers are able to efficiently annotate large quantities of English text ( Clark and Curran , 2007 ) , existing approaches to query do not work on the same scale . Many existing systems load the entire corpus into memory and check a user-supplied query against every tree . Others avoid the memory limitation , and use relational or XML database systems . Although these have built-in support for indexes , they do not scale up either ( Ghodke and Bird , 2008 ; Zhang et al . , 2001 ) ) . The ability to interrogate large collections of parsed text has important practical applications . First , it opens the way to a new kind of information retrieval ( IR ) that is sensitive to syntactic information , permitting users to do more focussed search . At the simplest level , an ambiguous query term like wind or park could be disambiguated with the help of a POS tag ( e.g. wind / N , park / V ) . ( Existing IR engines already support query with part-of-speech tags ( Chowdhury and McCabe , 1998 ) ) . More complex queries could stipulate the syntactic category of apple is in subject position . A second benefit of large scale tree query is for natural language processing . For example , we might compute the likelihood that a given noun appears as the agent or patient of a verb , as a measure of animacy . We can use features derived from syntactic trees in order to support semantic role labeling , language modeling , and information extraction ( Chen and Rambow , 2003 ; Collins et al . , 2005 ; Hakenberg et al . , 2009 ) . 
A further benefit for natural language processing , though not yet realized , is for a treebank and query engine to provide the underlying storage and retrieval for a variety of linguistic applications . Just as a relational database is present in most business applications , providing reliable and efficient access to relational data , such a system would act as a repository of annotated texts , and expose an expressive API to client applications . A third benefit of large scale tree query is to support syntactic investigations , e.g. for develop-ing syntactic theories or preparing materials for language learners . Published treebanks will usually not attest particular words in the context of some infrequent construction , to the detriment of syntactic studies that make predictions about such combinations , and language learners wanting to see instances of some construction involving words from some specialized topic . A much larger treebank alleviates these problems . To improve recall performance , multiple parses for a given sentence could be stored ( possibly derived from different parsers ) . A fourth benefit for large scale tree query is to support the curation of treebanks , a major enterprise in its own right ( Abeill\u00e9 , 2003 ) . Manual selection and correction of automatically generated parse trees is a substantial part of the task of preparing a treebank . At the point of making such decisions , it is often helpful for an annotator to view existing annotations of a given construction which have already been manually validated ( Hiroshi et al . , 2005 ) . Occasionally , an earlier annotation decision may need to be reconsidered in the light of new examples , leading to further queries and to corrections that are spread across the whole corpus ( Wallis , 2003 ; Xue et al . , 2005 ) . This paper explores a new methods for scaling up tree query using an IR engine . In \u00a7 2 we describe existing tree query systems , elaborating on the design decisions , and on key aspects of their implementation and performance . In \u00a7 3 we describe a method for indexing trees using an IR engine , and discuss the details of our open source implementation . In \u00a7 4 we report results from a variety of experiments involving two data collections . The first collection contains of 5.5 million parsed trees , two orders of magnitude larger than those used by existing tree query systems , while the second collection contains 26.5 million trees . We have shown how an IR engine can be used to build a high performance tree query system . It outperforms existing approaches using indexless inmemory search , or custom indexes , or relational database systems , or XML database systems . We reported the results of a variety of experiments to demonstrate the efficiency of query for a variety of query types on two treebanks consisting of around 5 and 26 million sentences , more than two orders of magnitude larger than what existing systems support . The approach is quite general , and not limited to particular treebank formats or query languages . This work suggests that web-scale tree query may soon be feasible . 
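To illustrate the indexing idea behind the IR-based tree query discussion above, here is a small pure-Python stand-in for an inverted index over tree-derived terms. The term schema (node labels and parent>child pairs) and the verification-free query are simplifications I have assumed for the sketch; the actual system relies on a full IR engine and a richer index rather than this toy structure.

```python
# Minimal sketch of IR-style tree indexing: each tree contributes "terms" such as
# node labels and parent>child label pairs to an inverted index, and a structural
# query (e.g. "an NP that dominates a PP") becomes a posting-list intersection.
from collections import defaultdict

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def terms(tree):
    """Yield index terms: node labels and parent>child label pairs."""
    stack = [tree]
    while stack:
        n = stack.pop()
        yield f"label:{n.label}"
        for c in n.children:
            yield f"edge:{n.label}>{c.label}"
            stack.append(c)

index = defaultdict(set)   # term -> set of tree ids (the "postings")

def add_tree(tree_id, tree):
    for t in set(terms(tree)):
        index[t].add(tree_id)

def query(*query_terms):
    """Trees whose postings contain every query term (candidate matches;
    a real system would re-verify the structure on the retrieved trees)."""
    postings = [index[t] for t in query_terms]
    return set.intersection(*postings) if postings else set()

# Two toy parse trees.
t0 = Node("S", [Node("NP", [Node("DT"), Node("NN")]), Node("VP", [Node("VBD")])])
t1 = Node("S", [Node("NP", [Node("NP"), Node("PP", [Node("IN"), Node("NP")])]),
                Node("VP", [Node("VBZ")])])
add_tree(0, t0)
add_tree(1, t1)

print(query("edge:NP>PP"))            # -> {1}
print(query("label:VP", "label:NP"))  # -> {0, 1}
```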
This opens the door to some interesting possibilities : augmenting web search with syntactic constraints , the ability discover rare examples of particular syntactic constructions , and as a technique for garnering better statistics and more sensitive features for the purpose of constructing language models .", "challenge": "Developments of treebanks achieve creating large databases of trees however existing search or database technologies for them do not scale in size.", "approach": "They describe a method for storage, indexing, and query of treebank based on an information retrieve engine to scale up tree query.", "outcome": "The proposed method outperforms existing systems on a wide range of query types and can hold two large treebanks that existing systems cannot support."} +{"id": "P18-1034", "document": "We present a generative model to map natural language questions into SQL queries . Existing neural network based approaches typically generate a SQL query wordby-word , however , a large portion of the generated results is incorrect or not executable due to the mismatch between question words and table contents . Our approach addresses this problem by considering the structure of table and the syntax of SQL language . The quality of the generated SQL query is significantly improved through ( 1 ) learning to replicate content from column names , cells or SQL keywords ; and ( 2 ) improving the generation of WHERE clause by leveraging the column-cell relation . Experiments are conducted on WikiSQL , a recently released dataset with the largest question-SQL pairs . Our approach significantly improves the state-of-the-art execution accuracy from 69.0 % to 74.4 % . We focus on semantic parsing that maps natural language utterances to executable programs ( Zelle and Mooney , 1996 ; Wong and Mooney , 2007 ; Zettlemoyer and Collins , 2007 ; Kwiatkowski et al . , 2011 ; Pasupat and Liang , 2015 ; Iyer et al . , 2017 ; Iyyer et al . , 2017 ) . In this work , we regard SQL as the programming language , which could be executed on a table or a relational database to obtain an outcome . Datasets are the main driver of progress for statistical approaches in semantic parsing ( Liang , 2016 ) . Recently , Zhong et al . ( 2017 ) release WikiSQL , the largest handannotated semantic parsing dataset which is an order of magnitude larger than other datasets in terms of both the number of logical forms and the number of tables . Pointer network ( Vinyals et al . , 2015 ) based approach is developed , which generates a SQL query word-by-word through replicating from a word sequence consisting of question words , column names and SQL keywords . However , a large portion of generated results is incorrect or not executable due to the mismatch between question words and column names ( or cells ) . This also reflects the real scenario where users do not always use exactly the same column name or cell content to express the question . To address the aforementioned problem , we present a generative semantic parser that considers the structure of table and the syntax of SQL language . The approach is partly inspired by the success of structure / grammar driven neural network approaches in semantic parsing ( Xiao et al . , 2016 ; Krishnamurthy et al . , 2017 ) . Our approach is based on pointer networks , which encodes the question into continuous vectors , and synthesizes the SQL query with three channels . The model learns when to generate a column name , a cell or a SQL keyword . 
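The following sketch illustrates the three-channel switching idea described for the SQL generation model above: at each decoding step a gate distributes probability over the column, cell, and keyword channels, and the channel-specific distributions are mixed accordingly. All weights here are random placeholders; the real model conditions the gate and the per-channel distributions on the encoded question, the table, and the decoder state.

```python
# Sketch of a three-channel decoding step: a gate chooses among generating a
# column name, a table cell, or a SQL keyword, and the per-channel distributions
# are mixed by the gate probabilities. Parameters are random placeholders.
import numpy as np

rng = np.random.default_rng(2)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

columns = ["name", "nation", "points"]
cells = ["brazil", "italy", "42"]
keywords = ["SELECT", "WHERE", "=", "COUNT"]

def decode_step(state):
    """One decoding step: mix channel-specific distributions by a channel gate."""
    gate = softmax(rng.normal(size=3) + state @ rng.normal(size=(state.size, 3)))
    dists = [softmax(rng.normal(size=len(v))) for v in (columns, cells, keywords)]
    mixed = {}
    for g, dist, vocab in zip(gate, dists, (columns, cells, keywords)):
        for token, p in zip(vocab, dist):
            mixed[token] = mixed.get(token, 0.0) + g * p
    return gate, mixed

state = rng.normal(size=8)          # stand-in for the decoder hidden state
gate, mixed = decode_step(state)
print("channel gate (column, cell, keyword):", np.round(gate, 2))
print("next token:", max(mixed, key=mixed.get))
```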
We further incorporate columncell relation to mitigate the ill-formed outcomes . We conduct experiments on WikiSQL . Results show that our approach outperforms existing systems , improving state-of-the-art execution accuracy to 74.4 % and logical form accuracy to 60.7 % . An extensive analysis reveals the advantages and limitations of our approach . In this work , we develop STAMP , a Syntax-and Table-Aware seMantic Parser that automatically maps natural language questions to SQL queries , which could be executed on web table or relational dataset to get the answer . STAMP has three channels , and it learns to switch to which channel at each time step . STAMP considers cell information and the relation between cell and column name in the generation process . Experiments are conducted on the WikiSQL dataset . Results show that STAMP achieves the new state-of-the-art performance on WikiSQL . We conduct extensive experiment analysis to show advantages and limitations of our approach , and where is the room for others to make further improvements . SQL language has more complicated queries than the cases included in the WikiSQL dataset , including ( 1 ) querying over multiple relational databases , ( 2 ) nested SQL query as condition value , ( 3 ) more operations such as \" group by \" and \" order by \" , etc . In this work , the STAMP model is not designed for the first and second cases , but it could be easily adapted to the third case through incorporating additional SQL keywords and of course the learning of which requires dataset of the same type . In the future , we plan to improve the accuracy of the column prediction component . We also plan to build a large-scale dataset that considers more sophisticated SQL queries . We also plan to extend the approach to low-resource scenarios ( Feng et al . , 2018 ) .", "challenge": "Because existing text-to-SQL models generate queries word-by-word, they are often not executable due to the mismatch between question words and column names.", "approach": "They propose pointer network-based methods which can take the structure of the table and SQL syntax into account.", "outcome": "The proposed model significantly outperforms existing models in execution accuracy and logical form accuracy on the WikiSQL dataset."} +{"id": "D07-1016", "document": "Inclusions from other languages can be a significant source of errors for monolingual parsers . We show this for English inclusions , which are sufficiently frequent to present a problem when parsing German . We describe an annotation-free approach for accurately detecting such inclusions , and develop two methods for interfacing this approach with a state-of-the-art parser for German . An evaluation on the TIGER corpus shows that our inclusion entity model achieves a performance gain of 4.3 points in F-score over a baseline of no inclusion detection , and even outperforms a parser with access to gold standard part-of-speech tags . The status of English as a global language means that English words and phrases are frequently borrowed by other languages , especially in domains such as science and technology , commerce , advertising , and current affairs . This is an instance of language mixing , whereby inclusions from other languages appear in an otherwise monolingual text . 
While the processing of foreign inclusions has received some attention in the text-to-speech ( TTS ) literature ( see Section 2 ) , the natural language processing ( NLP ) community has paid little attention both to the problem of inclusion detection , and to potential applications thereof . Also the extent to which inclusions pose a problem to existing NLP methods has not been investigated . In this paper , we address this challenge . We focus on English inclusions in German text . Anglicisms and other borrowings from English form by far the most frequent foreign inclusions in German . In specific domains , up to 6.4 % of the tokens of a German text can be English inclusions . Even in regular newspaper text as used for many NLP applications , English inclusions can be found in up to 7.4 % of all sentences ( see Section 3 for both figures ) . Virtually all existing NLP algorithms assume that the input is monolingual , and does not contain foreign inclusions . It is possible that this is a safe assumption , and inclusions can be dealt with accurately by existing methods , without resorting to specialized mechanisms . The alternative hypothesis , however , seems more plausible : foreign inclusions pose a problem for existing approaches , and sentences containing them are processed less accurately . A parser , for example , is likely to have problems with inclusions -most of the time , they are unknown words , and as they originate from another language , standard methods for unknown words guessing ( suffix stripping , etc . ) are unlikely to be successful . Furthermore , the fact that inclusions are often multiword expressions ( e.g. , named entities ) means that simply part-of-speech ( POS ) tagging them accurately is not sufficient : if the parser posits a phrase boundary within an inclusion this is likely to severely decrease parsing accuracy . In this paper , we focus on the impact of English inclusions on the parsing of German text . We describe an annotation-free method that accurately recognizes English inclusions , and demonstrate that inclusion detection improves the performance of a state-of-the-art parser for German . We show that the way of interfacing the inclusion detection and the parser is crucial , and propose a method for modifying the underlying probabilistic grammar in order to enable the parser to process inclusions accurately . This paper is organized as follows . We review related work in Section 2 , and present the English inclusion classifier in Section 3 . Section 4 describes our results on interfacing inclusion detection with parsing , and Section 5 presents an error analysis . Discussion and conclusion follow in Section 6 . This paper has argued that English inclusions in German text is an increasingly pervasive instance of language mixing . Starting with the hypothesis that such inclusions can be a significant source of errors for monolingual parsers , we found evidence that an unmodified state-of-the-art parser for Ger- man performs substantially worse on a set of sentences with English inclusions compared to a set of length-matched sentences randomly sampled from the same corpus . The lower performance on the inclusion set persisted even when the parser when given gold standard POS tags in the input . To overcome the poor accuracy of parsing inclusions , we developed two methods for interfacing the parser with an existing annotation-free inclusion detection system . 
The first method restricts the POS tags for inclusions that the parser can assign to those found in the data . The second method applies tree transformations to ensure that inclusions are treated as phrases . An evaluation on the TIGER corpus shows that the second method yields a performance gain of 4.3 in F-score over a baseline of no inclusion detection , and even outperforms a model involving perfect POS tagging of inclusions . To summarize , we have shown that foreign inclusions present a problem for a monolingual parser . We also demonstrated that it is insufficient to know where inclusions are or even what their parts of speech are . Parsing performance only improves if the parser also has knowledge about the structure of the inclusions . It is particularly important to know when adjacent foreign words are likely to be part of the same phrase . As our error analysis showed , this prevents cascading errors further up in the parse tree . Finally , our results indicate that future work could improve parsing performance for inclusions further : we found that parsing the inclusion set is still harder than parsing a randomly sampled test set , even for our best-performing model . This provides an upper bound on the performance we can expect from a parser that uses inclusion detection . Future work will also involve determining the English inclusion classifier 's merit when applied to rule-based parsing .", "challenge": "While expressions in English as the international language appear in many other languages, how it impacts the monolingual parsing accuracy has not been investigated.", "approach": "They describe an annotation-free method that recognizes English in German texts coupled with two methods for interfacing it with parsers.", "outcome": "They show that English is frequently present in German texts to prevent accurate parsing even with gold part-of-speech tags but the proposed methods can improve."} +{"id": "D16-1213", "document": "The brain is the locus of our language ability , and so brain images can be used to ground linguistic theories . Here we introduce Brain-Bench , a lightweight system for testing distributional models of word semantics . We compare the performance of several models , and show that the performance on brain-image tasks differs from the performance on behavioral tasks . We release our benchmark test as part of a web service . There is active debate over how we should test semantic models . In fact , in 2016 there was an entire workshop dedicated to the testing of semantic representations ( RepEval , 2016 ) . Several before us have argued for the usage of brain data to test semantic models ( Anderson et al . , 2013 ; Murphy et al . , 2012 ; Anderson et al . , 2015 ) , as a brain image represents a snapshot of one person 's own semantic representation . Still , testing semantic models against brain imaging data is rarely done by those not intimately involved in psycholinguistics or neurolinguistics . This may be due to a lack of familiarity with neuroimaging methods and publicly available datasets . We present the first iteration of BrainBench , a new system that makes it easy to test semantic models using brain imaging data ( Available at http://www.langlearnlab.cs.uvic . ca / brainbench/ ) . Our system has methodology that is similar to popular tests based on behavioral * Corresponding Author data ( see Section 2.2 ) , and has the additional benefit of being fast enough to offer as a web service . 
We have presented our new system , BrainBench , which is a fast and lightweight alternative to previous methods for comparing DS models to brain images . Our proposed methodology is more similar to well-known behavioral tasks , as BrainBench also uses the similarity of words as a proxy for meaning . We hope that this contribution will bring brain imaging tests \" to the masses \" and encourage discussion around the testing of DS models against brain imaging data .", "challenge": "While brain data can be used for evaluating semantic models, it is limited in accessibility because of familiarity and available datasets.", "approach": "They introduce a benchmark system hosted on the web to facilitate the evaluation of semantic models with brain data images.", "outcome": "Benchmark experiments on the proposed evaluation framework show the difference performances of different models."} +{"id": "E06-1015", "document": "In recent years tree kernels have been proposed for the automatic learning of natural language applications . Unfortunately , they show ( a ) an inherent super linear complexity and ( b ) a lower accuracy than traditional attribute / value methods . In this paper , we show that tree kernels are very helpful in the processing of natural language as ( a ) we provide a simple algorithm to compute tree kernels in linear average running time and ( b ) our study on the classification properties of diverse tree kernels show that kernel combinations always improve the traditional methods . Experiments with Support Vector Machines on the predicate argument classification task provide empirical support to our thesis . In recent years tree kernels have been shown to be interesting approaches for the modeling of syntactic information in natural language tasks , e.g. syntactic parsing ( Collins and Duffy , 2002 ) , relation extraction ( Zelenko et al . , 2003 ) , Named Entity recognition ( Cumby and Roth , 2003 ; Culotta and Sorensen , 2004 ) and Semantic Parsing ( Moschitti , 2004 ) . The main tree kernel advantage is the possibility to generate a high number of syntactic features and let the learning algorithm to select those most relevant for a specific application . In contrast , their major drawback are ( a ) the computational time complexity which is superlinear in the number of tree nodes and ( b ) the accuracy that they produce is often lower than the one provided by linear models on manually designed features . To solve problem ( a ) , a linear complexity algorithm for the subtree ( ST ) kernel computation , was designed in ( Vishwanathan and Smola , 2002 ) . Unfortunately , the ST set is rather poorer than the one generated by the subset tree ( SST ) kernel designed in ( Collins and Duffy , 2002 ) . Intuitively , an ST rooted in a node n of the target tree always contains all n 's descendants until the leaves . This does not hold for the SSTs whose leaves can be internal nodes . To solve the problem ( b ) , a study on different tree substructure spaces should be carried out to derive the tree kernel that provide the highest accuracy . On the one hand , SSTs provide learning algorithms with richer information which may be critical to capture syntactic properties of parse trees as shown , for example , in ( Zelenko et al . , 2003 ; Moschitti , 2004 ) . On the other hand , if the SST space contains too many irrelevant features , overfitting may occur and decrease the classification accuracy ( Cumby and Roth , 2003 ) . 
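As a hedged illustration of similarity-based evaluation against brain data in the spirit of the BrainBench description above, the sketch below correlates a model's pairwise word similarities with similarities computed from (here random) brain-image vectors for the same words. BrainBench's actual stimuli, similarity measure, and scoring protocol may differ from this generic comparison.

```python
# Illustrative similarity-based comparison of a distributional model against
# brain-derived word representations: correlate the model's pairwise word
# similarities with the pairwise similarities of brain images for the same words.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
words = ["dog", "cat", "car", "house", "apple", "river"]

model_vecs = rng.normal(size=(len(words), 50))    # stand-in for DS model vectors
brain_vecs = rng.normal(size=(len(words), 200))   # stand-in for voxel patterns

model_sims = 1 - pdist(model_vecs, metric="cosine")
brain_sims = 1 - pdist(brain_vecs, metric="cosine")

rho, p = spearmanr(model_sims, brain_sims)
print(f"Spearman correlation between model and brain similarities: {rho:.2f} (p={p:.2f})")
```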
As a consequence , the fewer features of the ST approach may be more appropriate . In this paper , we aim to solve the above problems . We present ( a ) an algorithm for the evaluation of the ST and SST kernels which runs in linear average time and ( b ) a study of the impact of diverse tree kernels on the accuracy of Support Vector Machines ( SVMs ) . Our fast algorithm computes the kernels between two syntactic parse trees in O(m + n ) average time , where m and n are the number of nodes in the two trees . This low complexity allows SVMs to carry out experiments on hundreds of thousands of training instances since it is not higher than the complexity of the polynomial ker-nel , widely used on large experimentation e.g. ( Pradhan et al . , 2004 ) . To confirm such hypothesis , we measured the impact of the algorithm on the time required by SVMs for the learning of about 122,774 predicate argument examples annotated in PropBank ( Kingsbury and Palmer , 2002 ) and 37,948 instances annotated in FrameNet ( Fillmore , 1982 ) . Regarding the classification properties , we studied the argument labeling accuracy of ST and SST kernels and their combinations with the standard features ( Gildea and Jurafsky , 2002 ) . The results show that , on both PropBank and FrameNet datasets , the SST-based kernel , i.e. the richest in terms of substructures , produces the highest SVM accuracy . When SSTs are combined with the manual designed features , we always obtain the best figure classifier . This suggests that the many fragments included in the SST space are relevant and , since their manual design may be problematic ( requiring a higher programming effort and deeper knowledge of the linguistic phenomenon ) , tree kernels provide a remarkable help in feature engineering . In the remainder of this paper , Section 2 describes the parse tree kernels and our fast algorithm . Section 3 introduces the predicate argument classification problem and its solution . Section 4 shows the comparative performance in term of the execution time and accuracy . Finally , Section 5 discusses the related work whereas Section 6 summarizes the conclusions . In this paper , we have shown that tree kernels can effectively be adopted in practical natural language applications . The main arguments against their use are their efficiency and accuracy lower than traditional feature based approaches . We have shown that a fast algorithm ( FTK ) can evaluate tree kernels in a linear average running time and also that the overall converging time required by SVMs is compatible with very large data sets . Regarding the accuracy , the experiments with Support Vector Machines on the PropBank and FrameNet predicate argument structures show that : ( a ) the richer the kernel is in term of substructures ( e.g. 
SST ) , the higher the accuracy is , ( b ) tree kernels are effective also in case of automatic parse trees and ( c ) as kernel combinations always improve traditional feature models , the best approach is to combine scalar-based and structured based kernels .", "challenge": "While tree kernels enjoy generating a high number of syntactic features and use them flexibly, they suffer from super linear complexity and lower accuracies.", "approach": "They propose an algorithm which runs in linear average time to evaluate tree kernels used in support vector machines.", "outcome": "Their analysis with the proposed evaluation method shows that a fast algorithm can run in linear time and richer kernels achieve higher accuracy."} +{"id": "P19-1269", "document": "Accurate , automatic evaluation of machine translation is critical for system tuning , and evaluating progress in the field . We proposed a simple unsupervised metric , and additional supervised metrics which rely on contextual word embeddings to encode the translation and reference sentences . We find that these models rival or surpass all existing metrics in the WMT 2017 sentence-level and systemlevel tracks , and our trained model has a substantially higher correlation with human judgements than all existing metrics on the WMT 2017 to-English sentence level dataset . Evaluation metrics are a fundamental component of machine translation ( MT ) and other language generation tasks . The problem of assessing whether a translation is both adequate and coherent is a challenging text analysis problem , which is still unsolved , despite many years of effort by the research community . Shallow surfacelevel metrics , such as BLEU and TER ( Papineni et al . , 2002 ; Snover et al . , 2006 ) , still predominate in practice , due in part to their reasonable correlation to human judgements , and their being parameter free , making them easily portable to new languages . In contrast , trained metrics ( Song and Cohn , 2011 ; Stanojevic and Sima'an , 2014 ; Ma et al . , 2017 ; Shimanaka et al . , 2018 ) , which are learned to match human evaluation data , have been shown to result in a large boost in performance . This paper aims to improve over existing MT evaluation methods , through developing a series of new metrics based on contextual word embeddings ( Peters et al . , 2018 ; Devlin et al . , 2019 ) , a technique which captures rich and portable representations of words in context , which have been shown to provide important signal to many other NLP tasks ( Rajpurkar et al . , 2018 ) . We propose a simple untrained model that uses off-the-shelf contextual embeddings to compute approximate recall , when comparing a reference to an automatic translation , as well as trained models , including : a recurrent model over reference and translation sequences , incorporating attention ; and the adaptation of an NLI method ( Chen et al . , 2017 ) to MT evaluation . These approaches , though simple in formulation , are highly effective , and rival or surpass the best approaches from WMT 2017 . Moreover , we show further improvements in performance when our trained models are learned using noisy crowd-sourced data , i.e. , having single annotations for more instances is better than collecting and aggregating multiple annotations for single instances . 
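To make the subset-tree (SST) feature space from the tree-kernel discussion above concrete, here is a naive implementation of the standard Collins and Duffy recursion. Note that the paper's contribution is a fast solver with linear average running time; this plain O(|N1|x|N2|) version is only meant to show what the kernel counts, not to reproduce that algorithm.

```python
# Naive subset-tree (SST) kernel in the Collins & Duffy (2002) formulation.
class Tree:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)
    def production(self):
        return (self.label, tuple(c.label for c in self.children))
    def nodes(self):
        yield self
        for c in self.children:
            yield from c.nodes()

def delta(n1, n2, lam=1.0):
    """Number of common subset-trees rooted at n1 and n2 (with decay lam)."""
    if not n1.children or not n2.children:      # leaves contribute nothing
        return 0.0
    if n1.production() != n2.production():
        return 0.0
    prod = lam
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def sst_kernel(t1, t2, lam=1.0):
    return sum(delta(a, b, lam) for a in t1.nodes() for b in t2.nodes())

# Toy parse fragments: "the dog" vs "the cat".
np1 = Tree("NP", [Tree("DT", [Tree("the")]), Tree("NN", [Tree("dog")])])
np2 = Tree("NP", [Tree("DT", [Tree("the")]), Tree("NN", [Tree("cat")])])
print(sst_kernel(np1, np2))  # -> 3.0 (DT->the, NP->DT NN, NP->[DT the] NN)
```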
The net result is an approach that is more data efficient than existing methods , while producing substantially better human correlations.1 We show that contextual embeddings are very useful for evaluation , even in simple untrained models , as well as in deeper attention based methods . When trained on a larger , much noisier range of instances , we demonstrate a substantial improvement over the state of the art . In future work , we plan to extend these models by using cross-lingual embeddings , and combine information from translation-source interactions as well as translation-reference interactions . There are also direct applications to Quality Estimation , by using the source instead of the reference . B System-level results for WMT 17 news and WMT 2016 IT domain", "challenge": "While evaluation metrics are fundamental for machine translation systems, the problem of assessing whether a translation is both adequate and coherent remains unsolved.", "approach": "They propose untrained contextual embeddings and also trained models such as a recurrent model to compute approximate recall between a reference and an automatic translation.", "outcome": "The proposed models rival or surpass existing metrics on WMT 2017 sentence and system-level tracks while having higher correlations with human judgements."} +{"id": "D17-1135", "document": "Existing approaches for Chinese zero pronoun resolution typically utilize only syntactical and lexical features while ignoring semantic information . The fundamental reason is that zero pronouns have no descriptive information , which brings difficulty in explicitly capturing their semantic similarities with antecedents . Meanwhile , representing zero pronouns is challenging since they are merely gaps that convey no actual content . In this paper , we address this issue by building a deep memory network that is capable of encoding zero pronouns into vector representations with information obtained from their contexts and potential antecedents . Consequently , our resolver takes advantage of semantic information by using these continuous distributed representations . Experiments on the OntoNotes 5.0 dataset show that the proposed memory network could substantially outperform the state-of-the-art systems in various experimental settings . A zero pronoun ( ZP ) is a gap in a sentence , which refers to an entity that supplies the necessary information for interpreting the gap ( Zhao and Ng , 2007 ) . A ZP can be either anaphoric if it corefers to one or more preceding noun phrases ( antecedents ) in the associated text , or non-anaphoric if there are no such noun phrases . Below is an example of ZPs and their antecedents , where \" \u03c6 \" denotes the ZP . [ \u8b66\u65b9 ] \u8868\u793a \u4ed6\u4eec \u81ea\u6740 \u7684 \u53ef\u80fd\u6027 \u5f88\u9ad8 \uff0c \u4e0d \u8fc7 \u03c6 1 \u4e5f \u4e0d \u6392\u9664 \u03c6 2 \u6709 \u4ed6\u6740 \u7684 \u53ef\u80fd \u3002 * Email corresponding . ( [ The police ] said that they are more likely to commit suicide , but \u03c6 1 could not rule out \u03c6 2 the possibility of homicide . ) In this example , the ZP \" \u03c6 1 \" is an anaphoric ZP that refers to the antecedent \" \u8b66\u65b9 / The police \" while the ZP \" \u03c6 2 \" is non-anaphoric . Unlike overt pronouns , ZPs lack grammatical attributes such as gender and number that have been proven to be essential in pronoun resolution ( Chen and Ng , 2014a ) , which makes ZP resolution a more challenging task than overt pronoun resolution . 
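A minimal sketch of the untrained, recall-style metric idea from the MT-evaluation description above: each reference token is aligned to its most similar translation token in embedding space and the maxima are averaged. The random embedding table below stands in for contextual embeddings, and the paper's exact formulation and preprocessing may differ.

```python
# Sketch of an untrained, recall-oriented metric: each reference token is aligned
# to its most similar translation token in embedding space, and the maxima are
# averaged. The random lookup table stands in for contextual embeddings.
import numpy as np

rng = np.random.default_rng(4)
DIM = 32
_emb = {}

def embed(tokens):
    """Toy embedding lookup (random but consistent per type); a real system would
    run a contextual encoder over the whole sentence instead."""
    return np.stack([_emb.setdefault(t, rng.normal(size=DIM)) for t in tokens])

def approx_recall(reference, hypothesis):
    R, H = embed(reference), embed(hypothesis)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    sims = R @ H.T                         # cosine similarity matrix
    return float(sims.max(axis=1).mean())  # best match per reference token

ref = "the cat sat on the mat".split()
hyp = "the cat is on the mat".split()
print(f"approximate recall: {approx_recall(ref, hyp):.3f}")
```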
Automatic Chinese ZP resolution is typically composed of two steps , i.e. , anaphoric zero pronoun ( AZP ) identification that identifies whether a ZP is anaphoric ; and AZP resolution , which determines antecedents for AZPs . For AZP identification , state-of-the-art resolvers use machine learning algorithms to build AZP classifiers in a supervised manner ( Chen and Ng , 2013 , 2016 ) . For AZP resolution , literature approaches include unsupervised methods ( Chen and Ng , 2014b , 2015 ) , feature-based supervised models ( Zhao and Ng , 2007 ; Kong and Zhou , 2010 ) , and neural network models ( Chen and Ng , 2016 ) . Neural network models for AZP resolution are of growing interest for their capacity to learn task-specific representations without extensive feature engineering and to effectively exploit lexical information for ZPs and their candidate antecedents in a more scalable manner than feature-based models . Despite these advantages , existing supervised approaches ( Zhao and Ng , 2007 ; Chen and Ng , 2013 , 2016 ) for AZP resolution typically utilize only syntactical and lexical information through features . They overlook semantic information that is regarded as an important factor in the resolution of common noun phrases ( Ng , 2007 ) . The fundamental reason is that ZPs have no descriptive information , which results in difficulty in calculating semantic similarities and relatedness scores between the ZPs and their antecedents . Therefore , the proper representations of ZPs are required so as to take advantage of semantic information when resolving ZPs . However , representing ZPs is challenging because they are merely gaps that convey no actual content . One straightforward method to address this issue is to represent ZPs with supplemental information provided by some available components , such as contexts and candidate antecedents . Motivated by Chen and Ng ( 2016 ) who encode a ZP 's lexical contexts by utilizing its preceding word and governing verb , we notice that a ZP 's context can help to describe the ZP itself . As an example of its usefulness , given the sentence \" \u03c6 taste spicy \" , people may resolve the ZP \" \u03c6 \" to the candidate antecedent \" red peppers \" , but can hardly regard \" my shoes \" as its antecedent , because they naturally look at the ZP 's context \" taste spicy \" to resolve it ( \" my shoes \" can not \" taste spicy \" ) . Meanwhile , considering that the antecedents of a ZP provide the necessary information for interpreting the gap ( ZP ) , it is a natural way to express a ZP by its potential antecedents . However , only some subsets of candidate antecedents are needed to represent a ZP1 . To achieve this goal , a desirable solution should be capable of explicitly capturing the importance of each candidate antecedent and using them to build up the representation for the ZP . In this paper , inspired by the recent success of computational models with attention mechanism and explicit memory ( Sukhbaatar et al . , 2015 ; Tang et al . , 2016 ; Kumar et al . , 2015 ) , we focus on AZP resolution , proposing the zero pronounspecific memory network ( ZPMN ) that is competent for representing a ZP with information obtained from its contexts and candidate antecedents . These representations provide our system with an ability to take advantage of semantic information when resolving ZPs . Our ZPMN consists of multiple computational layers with shared parameters . 
With the underlying intuition that not all candidate antecedents are equally relevant for representing the ZP , we develop each computational layer as an attention-based model , which first learns the importance of each candidate antecedent and then utilizes this information to calculate the continu-ous distributed representation of the ZP . The attention weights over candidate antecedents with respect to the ZP 's representation obtained by the last layer are regarded as the ZP coreference classification result . Given that every component is differentiable , the entire model could be efficiently trained end-to-end with gradient descent . We evaluate our method on the Chinese portions of the OntoNotes 5.0 corpus by comparing with the baseline systems in different experimental settings . Results show that our approach significantly outperforms the baseline algorithms and achieves state-of-the-art performance . In this study , we propose a novel zero pronounspecific memory network that is capable of encoding zero pronouns into the vector representations with supplemental information obtained from their contexts and candidate antecedents . Consequently , these continuous distributed vectors provide our model with an ability to take advantage of the semantic information when resolving zero pronouns . We evaluate our method on the Chinese portion of OntoNotes 5.0 dataset and report substantial improvements over the state-ofthe-art systems in various experimental settings .", "challenge": "Existing Chinese zero pronoun resolution methods ignore semantic information because zero pronouns have no descriptive information to explicitly capture their semantic similarities with antecedents.", "approach": "They propose an attention-based deep memory network which encodes zero pronouns into vector representations of information obtained from their contexts and potential antecedents.", "outcome": "The proposed model outperforms the state-of-the-art and baseline models in various experimental settings on the Chinese portion of the OntoNotes 5.0 dataset."} +{"id": "W99-0615", "document": "We present a technique which complements Hidden Markov Models by incorporating some lexicalized states representing syntactically uncommon words . ' Our approach examines the distribution of transitions , selects the uncommon words , and makes lexicalized states for the words . We perfor'med a part-of-speech tagging experiment on the Brown corpus to evaluate the resultant language model and discovered that this technique improved the tagging accuracy by 0.21 % at the 95 % level of confidence . Hidden Markov ' Models are widely used for statistical language modelling in various fields , e.g. , part-of-speech tagging or speech recognition ( Rabiner and Juang , 1986 ) . The models are based on Markov assumptions , which make it possible to view the language prediction as a Markov process . ' In general , we make the firstorder Markov ass'umptions that the current tag is only dependant on the previous tag and that the current word is only dependant on the current tag . These are very ' strong ' assumptions , so that the first-order Hidden Markov Models have the advantage of drastically reducing the number of its parameters . On the other hand , the assumptions restrict the model from utilizing enough constraints provided by the local context and the resultant model consults only a single category ' as the contex . 
A lot of effort has been devoted in the past to make up for the insufficient contextual information of the first-order probabilistic model . The second order Hidden Markov Models with \" The research underlying this paper was supported t ) 3 \" research grants fl'om Korea Science and Engineering Foundation . appropriate smoothing techniques show better performance than the first order models and is considered a state-of-the-art technique ( Merialdo , 1994 ; Brants , 1996 ) . The complexity of the model is however relatively very high considering the small improvement of the performance . Garside describes IDIOMTAG ( Garside et al . , 1987 ) which is a component of a part-ofspeech tagging system named CLAWS . ID-IOMTAG serves as a front-end to the tagger and modifies some initially assigned tags in order to reduce the amount of ambiguity to be dealt with by the tagger . IDIOMTAG can look at any combination of words and tags , with or without intervening words . By using the IDIOMTAG , CLAWS system improved tagging accuracy from 94 % to 96 - 97 % . However , the manual-intensive process of producing idiom tags is very expensive although IDIOMTAG proved fruitful . Kupiec ( Kupiec , 1992 ) describes a technique of augmenting the Hidden Markov Models for part-of-speech tagging by the use of networks . Besides the original states representing each part-of-speech , the network contains additional states to reduce the noun / adjective confusion , and to extend the context for predicting past participles from preceding auxiliary verbs when they are separated by adverbs . By using these additional states , the tagging system improved the accuracy from 95.7 % to 96.0 % . However , the additional context is chosen by analyzing the tagging errors manually . An automatic refining technique for Hidden Markov Models has been proposed by Brants ( Brants , 1996 ) . It starts with some initial first order Markov Model . Some states of the model are selected to be split or merged to take into account their predecessors . As a result , each of new states represents a extended context . With this technique , Brants reported a performance cquivalent to the second order Hidden Markov Models . In this paper , we present an automatic refining technique for statistical language models . First , we examine the distribution of transitions of lexicalized categories . Next , we break out the uncommon ones from their categories and make new states for them . All processes are automated and the user has only to determine the extent of the breaking-out . In this paper , we present a method for complementing the Hidden Markov Models . With this method , we lexicalize the Hidden Markov Model seletively and automatically by examining the transition distribution of each state relating to certain words . Experimental results showed that the selective lexicalization improved the tagging accurary from about 95.79 % to about 96.00 % . Using normal tests for statistical significance we found that the improvement is significant at the 95 % level of confidence . Tile cost for this imt~rovenmnt is minimal . The resulting network contains 210 additional lexicalized states which are found automatically . Moreover , the lexicalization will not decrease the tagging speed 2 , because the lexicalized states and their corresponding original states are exclusive in our lexicalized network , and thus the rate of ambiguity is not increased even if the lexicalized states are included . Our approach leaves much room for improvement . 
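The selective lexicalization in the W99-0615 entry rests on comparing a word's outgoing tag-transition distribution with that of its part-of-speech category and breaking out the words that diverge. A rough sketch of that selection step, using KL divergence as an assumed measure of "uncommonness" and made-up counts rather than Brown-corpus statistics:

from collections import Counter
import math

def kl(p, q, eps=1e-9):
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps)) for k in keys)

def normalise(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Next-tag counts observed after the whole category and after single words of it.
tag_next = normalise(Counter({"NOUN": 50, "VERB": 30, "ADP": 20}))
word_next = {"that": normalise(Counter({"NOUN": 5, "VERB": 40, "ADP": 5})),
             "which": normalise(Counter({"NOUN": 45, "VERB": 25, "ADP": 30}))}

THRESHOLD = 0.3   # extent of the breaking-out, chosen by the user
lexicalized = [w for w, dist in word_next.items() if kl(dist, tag_next) > THRESHOLD]
print(lexicalized)  # words whose transitions diverge enough to get their own state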
We have so far considered only the outgoing transitions from the target states . As a result , we have discriminated only the words with right-associativity . We could also discriminate the words with left-associativity by examining the incoming transitions to the state . Furthermore , we could extend the context by using the second-order context as represented in Figure l(c ) . We believe that the same technique presented in this paper could be applied to the proposed extensions .", "challenge": "Hidden Markov Models used for language modeling are limited by the Markov assumptions and existing workarounds either make models complex or require expensive manual efforts.", "approach": "They propose to complement Hidden Markov Models by first examining the transition distribution and breaking out new lexicalized states for uncommon words.", "outcome": "The proposed model improves on a part-of-speech tagging task from Brown corpus by finding 210 lexicalized states automatically while keeping the additional cost minimal."} +{"id": "P12-1106", "document": "An ideal summarization system should produce summaries that have high content coverage and linguistic quality . Many state-ofthe-art summarization systems focus on content coverage by extracting content-dense sentences from source articles . A current research focus is to process these sentences so that they read fluently as a whole . The current AE-SOP task encourages research on evaluating summaries on content , readability , and overall responsiveness . In this work , we adapt a machine translation metric to measure content coverage , apply an enhanced discourse coherence model to evaluate summary readability , and combine both in a trained regression model to evaluate overall responsiveness . The results show significantly improved performance over AESOP 2011 submitted metrics . Research and development on automatic and manual evaluation of summarization systems have been mainly focused on content coverage ( Lin and Hovy , 2003 ; Nenkova and Passonneau , 2004 ; Hovy et al . , 2006 ; Zhou et al . , 2006 ) . However , users may still find it difficult to read such high-content coverage summaries as they lack fluency . To promote research on automatic evaluation of summary readability , the Text Analysis Conference ( TAC ) ( Owczarzak and Dang , 2011 ) introduced a new subtask on readability to its Automatically Evaluating Summaries of Peers ( AESOP ) task . Most of the state-of-the-art summarization systems ( Ng et al . , 2011 ; Zhang et al . , 2011 ; Conroy et al . , 2011 ) are extraction-based . They extract the most content-dense sentences from source articles . If no post-processing is performed to the generated summaries , the presentation of the extracted sentences may confuse readers . Knott ( 1996 ) argued that when the sentences of a text are randomly ordered , the text becomes difficult to understand , as its discourse structure is disturbed . Lin et al . ( 2011 ) validated this argument by using a trained model to differentiate an original text from a randomlyordered permutation of its sentences by looking at their discourse structures . This prior work leads us to believe that we can apply such discourse models to evaluate the readability of extract-based summaries . We will discuss the application of Lin et al . 's discourse coherence model to evaluate readability of machine generated summaries . 
We also introduce two new feature sources to enhance the model with hierarchical and Explicit / Non-Explicit information , and demonstrate that they improve the original model . There are parallels between evaluations of machine translation ( MT ) and summarization with respect to textual content . For instance , the widely used ROUGE ( Lin and Hovy , 2003 ) metrics are influenced by BLEU ( Papineni et al . , 2002 ) : both look at surface n-gram overlap for content coverage . Motivated by this , we will adapt a state-of-theart , linear programming-based MT evaluation metric , TESLA ( Liu et al . , 2010 ) , to evaluate the content coverage of summaries . TAC 's overall responsiveness metric evaluates the quality of a summary with regard to both its content and readability . Given this , we combine our two component coherence and content models into an SVM-trained regression model as our surrogate to overall responsiveness . Our experiments show that the coherence model significantly outperforms all AESOP 2011 submissions on both initial and update tasks , while the adapted MT evaluation metric and the combined model significantly outperform all submissions on the initial task . To the best of our knowledge , this is the first work that applies a discourse coherence model to measure the readability of summaries in the AESOP task . 2 Related Work Nenkova and Passonneau ( 2004 ) proposed a manual evaluation method that was based on the idea that there is no single best model summary for a collection of documents . Human annotators construct a pyramid to capture important Summarization Content Units ( SCUs ) and their weights , which is used to evaluate machine generated summaries . Lin and Hovy ( 2003 ) introduced an automatic summarization evaluation metric , called ROUGE , which was motivated by the MT evaluation metric , BLEU ( Papineni et al . , 2002 ) . It automatically determines the content quality of a summary by comparing it to the model summaries and counting the overlapping n-gram units . Two configurations -ROUGE-2 , which counts bigram overlaps , and ROUGE-SU4 , which counts unigram and bigram overlaps in a word window of four -have been found to correlate well with human evaluations . Hovy et al . ( 2006 ) pointed out that automated methods such as ROUGE , which match fixed length n-grams , face two problems of tuning the appropriate fragment lengths and matching them properly . They introduced an evaluation method that makes use of small units of content , called Basic Elements ( BEs ) . Their method automatically segments a text into BEs , matches similar BEs , and finally scores them . Both ROUGE and BE have been implemented and included in the ROUGE / BE evaluation toolkit 1 , which has been used as the default evaluation tool in the summarization track in the Document Un-1 http://berouge.com / default.aspx derstanding Conference ( DUC ) and Text Analysis Conference ( TAC ) . DUC and TAC also manually evaluated machine generated summaries by adopting the Pyramid method . Besides evaluating with ROUGE / BE and Pyramid , DUC and TAC also asked human judges to score every candidate summary with regard to its content , readability , and overall responsiveness . DUC and TAC defined linguistic quality to cover several aspects : grammaticality , non-redundancy , referential clarity , focus , and structure / coherence . Recently , Pitler et al . ( 2010 ) conducted experiments on various metrics designed to capture these aspects . 
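The ROUGE-2 configuration mentioned in the P12-1106 entry counts overlapping bigrams between a candidate summary and a model summary. A bare-bones sketch of that recall computation; tokenisation and the clipping details of the full ROUGE toolkit are simplified here.

from collections import Counter

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def rouge2_recall(candidate, reference):
    cand, ref = bigrams(candidate.split()), bigrams(reference.split())
    overlap = sum(min(c, ref[b]) for b, c in cand.items() if b in ref)
    return overlap / max(sum(ref.values()), 1)

print(rouge2_recall("the cat sat on the mat", "the cat lay on the mat"))  # 0.6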
Their experimental results on DUC 2006 and 2007 show that grammaticality can be measured by a set of syntactic features , while the last three aspects are best evaluated by local coherence . Conroy and Dang ( 2008 ) combined two manual linguistic scores -grammaticality and focus -with various ROUGE / BE metrics , and showed this helps better predict the responsiveness of the summarizers . Since 2009 , TAC introduced the task of Automatically Evaluating Summaries of Peers ( AESOP ) . AESOP 2009 and 2010 focused on two summary qualities : content and overall responsiveness . Summary content is measured by comparing the output of an automatic metric with the manual Pyramid score . Overall responsiveness measures a combination of content and linguistic quality . In AESOP 2011 ( Owczarzak and Dang , 2011 ) , automatic metrics are also evaluated for their ability to assess summary readability , i.e. , to measure how linguistically readable a machine generated summary is . Submitted metrics that perform consistently well on the three aspects include Giannakopoulos and Karkaletsis ( 2011 ) , Conroy et al . ( 2011 ) , and de Oliveira ( 2011 ) . Giannakopoulos and Karkaletsis ( 2011 ) created two character-based n-gram graph representations for both the model and candidate summaries , and applied graph matching algorithm to assess their similarity . Conroy et al . ( 2011 ) extended the model in ( Conroy and Dang , 2008 ) to include shallow linguistic features such as term overlap , redundancy , and term and sentence entropy . de Oliveira ( 2011 ) modeled the similarity between the model and candidate summaries as a maximum bipartite matching problem , where the two summaries are represented as two sets of nodes and precision and recall are cal- culated from the matched edges . However , none of the AESOP metrics currently apply deep linguistic analysis , which includes discourse analysis . Motivated by the parallels between summarization and MT evaluation , we will adapt a state-ofthe-art MT evaluation metric to measure summary content quality . To apply deep linguistic analysis , we also enhance an existing discourse coherence model to evaluate summary readability . We focus on metrics that measure the average quality of machine summarizers , i.e. , metrics that can rank a set of machine summarizers correctly ( human summarizers are not included in the list ) . We proposed TESLA-S by adapting an MT evaluation metric to measure summary content coverage , and introduced DICOMER by applying a dis- course coherence model with newly introduced features to evaluate summary readability . We combined these two metrics in the CREMER metric -an SVM-trained regression model -for automatic summarization overall responsiveness evaluation . 
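Combining a content score and a coherence score into an overall-responsiveness predictor, as the CREMER metric described in this entry does, amounts to fitting a regression model over the two component scores. A toy sketch with scikit-learn's SVR and fabricated numbers; the real metric is trained on AESOP human judgments, not on this made-up data.

import numpy as np
from sklearn.svm import SVR

# Each row: [content_score, coherence_score]; target: human responsiveness score.
X = np.array([[0.42, 0.31], [0.55, 0.40], [0.20, 0.15], [0.61, 0.58], [0.35, 0.50]])
y = np.array([2.8, 3.4, 1.9, 4.1, 3.0])

model = SVR(kernel="rbf", C=1.0).fit(X, y)
print(model.predict([[0.50, 0.45]]))  # responsiveness estimate for a new summary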
Experimental results on AESOP 2011 show that DICOMER significantly outperforms all submitted metrics on both initial and update tasks with large gaps , while TESLA-S and CREMER significantly outperform all metrics on the initial task.3", "challenge": "Existing evaluation metrics for summarization models focus on content coverage while fluency also impacts readability.", "approach": "They propose to use a machine translation metric for content coverage, a discourse coherence model for readability, and combine them for overall responsiveness.", "outcome": "The proposed metric significantly improves over all the metrics submitted to AESOP 2011."} +{"id": "2022.naacl-main.46", "document": "In this paper , we define the task of gender rewriting in contexts involving two users ( I and/or You ) -first and second grammatical persons with independent grammatical gender preferences . We focus on Arabic , a gendermarking morphologically rich language . We develop a multi-step system that combines the positive aspects of both rule-based and neural rewriting models . Our results successfully demonstrate the viability of this approach on a recently created corpus for Arabic gender rewriting , achieving 88.42 M 2 F 0.5 on a blind test set . Our proposed system improves over previous work on the first-person-only version of this task , by 3.05 absolute increase in M 2 F 0.5 . We demonstrate a use case of our gender rewriting system by using it to post-edit the output of a commercial MT system to provide personalized outputs based on the users ' grammatical gender preferences . We make our code , data , and pretrained models publicly available . 1 Gender bias is a fundamental problem in natural language processing ( NLP ) and it has been receiving an increasing attention across a variety of core tasks such as machine translation ( MT ) , co-reference resolution , and dialogue systems . Research has shown that NLP systems have the ability to embed and amplify gender bias ( Sun et al . , 2019 ) , which not only degrades users ' experiences but also creates representational harm ( Blodgett et al . , 2020 ) . The embedded bias within NLP systems is usually attributed to training models on biased data that reflects the social inequalities of the world we live in . However , even the most balanced of models can still exhibit and amplify bias if they are designed to produce a single text output without taking their users ' gender preferences into consideration . Therefore , to provide the correct user-aware output , NLP systems should be designed to produce outputs that are as gender-specific as the users information they have access to . Users information could be either embedded as part of the input or provided externally by the users themselves . In cases where this information is unavailable to the system , generating all gender-specific forms or a gender-neutral form is more appropriate . Producing user-aware outputs becomes more challenging for systems targeting multi-user contexts ( first and second persons , with independent grammatical gender preferences ) , particularly when dealing with gender-marking morphologically rich languages . In this paper , we define the task of gender rewriting in contexts involving two users ( I and/or You ) -first and second grammatical persons with independent grammatical gender preferences and we focus on Arabic , a gender-marking morphologically rich language . The main contributions of our work are as follows : 1 . 
We introduce a multi-step gender rewriting system that combines the positive aspects of rule-based and neural models . 2 . We demonstrate our approach 's effectiveness by establishing a strong benchmark on a publicly available multi-user Arabic gender rewriting corpus . 3 . We show that our best system yields state-ofthe-art results on the first-person-only version of this task , beating previous work . 4 . We demonstrate a use case of our system by post-editing the output of an MT system to match users ' grammatical gender preferences . This paper is organized as follows . We first discuss related work ( \u00a7 2 ) as well as relevant Arabic linguistic facts ( \u00a7 3 ) . We then define the gender rewriting task in \u00a7 4 and describe the data we use and the gender rewriting model we build in \u00a7 5 and \u00a7 6 . Lastly , we present our experimental setup ( \u00a7 7 ) and results ( \u00a7 8) and conclude in \u00a7 9 . We defined the task of gender rewriting in contexts involving two users ( I and/or You ) , and developed a multi-step system that combines the positive aspects of both rule-based and neural rewriting models . Our best models establish the benchmark for this newly defined task and the SOTA for a previously defined first person version of it . We further demonstrated a use case of our gender rewriting system by post-editing the output of a commercial MT system to provide personalized outputs based on the users ' grammatical gender preferences . In future work , we plan to explore the use of other pretrained models , and to work on the problem of gender rewriting in other languages and dialectal varieties .", "challenge": "Systems which can produce user-aware outputs to avoid embedding and amplifying gender bias remain a challenge, particularly with gender-marking morphologically rich languages.", "approach": "They define the task of gender rewriting in contexts involving two users and propose a multi-step system that combines the rule-based and neural models.", "outcome": "The proposed approach is shown to be a strong baseline on the Arabic gender rewriting corpus and outperforms the state-of-the-art of the first-person-only version."} +{"id": "2021.naacl-main.29", "document": "The de-facto standard decoding method for semantic parsing in recent years has been to autoregressively decode the abstract syntax tree of the target program using a top-down depthfirst traversal . In this work , we propose an alternative approach : a Semi-autoregressive Bottom-up Parser ( SMBOP ) that constructs at decoding step t the top-K sub-trees of height \u2264 t. Our parser enjoys several benefits compared to top-down autoregressive parsing . From an efficiency perspective , bottom-up parsing allows to decode all sub-trees of a certain height in parallel , leading to logarithmic runtime complexity rather than linear . From a modeling perspective , a bottom-up parser learns representations for meaningful semantic sub-programs at each step , rather than for semantically-vacuous partial trees . We apply SMBOP on SPIDER , a challenging zero-shot semantic parsing benchmark , and show that SMBOP leads to a 2.2x speed-up in decoding time and a \u223c5x speed-up in training time , compared to a semantic parser that uses autoregressive decoding . SMBOP obtains 71.1 denotation accuracy on SPIDER , establishing a new state-of-the-art , and 69.5 exact match , comparable to the 69.6 exact match of the autoregressive RAT-SQL+GRAPPA . 
Semantic parsing , the task of mapping natural language utterances into programs ( Zelle and Mooney , 1996 ; Zettlemoyer and Collins , 2005 ; Clarke et al . ; Liang et al . , 2011 ) , has converged in recent years on a standard encoder-decoder architecture . Recently , meaningful advances emerged on the encoder side , including developments in Transformer-based architectures ( Wang et al . , 2020a ) and new pretraining techniques ( Yin et al . , 2020 ; Herzig et al . , 2020 ; Yu et al . , 2020 ; Deng et al . , 2020 ; Shi et al . , 2021 ) . Conversely , the decoder has remained roughly constant for years , where the abstract syntax tree of the target program is autoregressively decoded in a top-down manner ( Yin and Neubig , 2017 ; Krishnamurthy et al . , 2017 ; Rabinovich et al . , 2017 ) . Bottom-up decoding in semantic parsing has received little attention ( Cheng et al . , 2019 ; Odena et al . , 2020 ) . In this work , we propose a bottom-up semantic parser , and demonstrate that equipped with recent developments in Transformer-based ( Vaswani et al . , 2017 ) architectures , it offers several advantages . From an efficiency perspective , bottom-up parsing can naturally be done semiautoregressively : at each decoding step t , the parser generates in parallel the top-K program sub-trees of depth \u2264 t ( akin to beam search ) . This leads to runtime complexity that is logarithmic in the tree size , rather than linear , contributing to the rocketing interest in efficient and greener artificial intelligence technologies ( Schwartz et al . , 2020 ) . From a modeling perspective , neural bottom-up parsing provides learned representations for meaningful ( and executable ) sub-programs , which are sub-trees computed during the search procedure , in contrast to top-down parsing , where hidden states represent partial trees without clear semantics . Figure 1 illustrates a single decoding step of our parser . Given a beam Z t with K = 4 trees of height t ( blue vectors ) , we use cross-attention to contextualize the trees with information from the input question ( orange ) . Then , we score the frontier , that is , the set of all trees of height t + 1 that can be constructed using a grammar from the current beam , and the top-K trees are kept ( purple ) . Last , a representation for each of the new K trees is generated and placed in the new beam Z t+1 . After T decoding steps , the parser returns the highest-scoring tree in Z T that corresponds to a full program . Because we have gold trees at training time , the entire model is trained jointly using maximum likelihood . We evaluate our model , SMBOP1 ( SeMiautoregressive Bottom-up semantic Parser ) , on SPI- What are the names of actors over 60 ? Prune frontier In this work we present the first semiautoregressive bottom-up semantic parser that enjoys logarithmic theoretical runtime , and show that it leads to a 2.2x speed-up in decoding and \u223c5x faster taining , while maintaining state-of-the-art performance . Our work shows that bottom-up parsing , where the model learns representations for semantically meaningful sub-trees is a promising research direction , that can contribute in the future to setups such as contextual semantic parsing , where sub-trees often repeat , and can enjoy the benefits of execution at training time . 
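A single decoding step of the bottom-up parser in the 2021.naacl-main.29 entry can be mimicked with a toy grammar: every sub-tree on the beam is combined with every applicable operation, the resulting frontier is scored, and only the top-K new sub-trees survive. The grammar operations and the scorer below are placeholders, not the SQL grammar or the learned cross-attention scorer used in the paper.

import heapq
import random

UNARY = ["keep", "distinct"]          # toy unary grammar operations
BINARY = ["and", "join"]              # toy binary grammar operations

def score(tree):                      # stand-in for the learned scorer
    random.seed(str(tree))
    return random.random()

def decode_step(beam, k=4):
    """beam: list of sub-trees (nested tuples). Returns top-k trees one level higher."""
    frontier = [(op, t) for op in UNARY for t in beam]
    frontier += [(op, a, b) for op in BINARY for a in beam for b in beam]
    return heapq.nlargest(k, frontier, key=score)

beam = ["actors.name", "actors.age", "60"]   # leaves: schema items / values
for _ in range(2):                           # two semi-autoregressive steps
    beam = decode_step(beam)
print(beam[0])                               # highest-scoring sub-tree so far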
Future work can also leverage work on learning tree representations ( Shiv and Quirk , 2019 ) to further improve parser performance .", "challenge": "Although bottom-up approaches for semantic parsing can theoretically achieve efficiency and produce meaningful sub-trees, they have not been studied well.", "approach": "They propose a transformer-based semi-autoregressive bottom-up parser for semantic parsing and couple it with a state-of-the-art encoder.", "outcome": "The proposed bottom-up parser can substantially improve over existing models in performance and also in decoding speed."} +{"id": "2021.eacl-main.196", "document": "Recent advances in language and vision push forward the research of captioning a single image to describing visual differences between image pairs . Suppose there are two images , I 1 and I 2 , and the task is to generate a description W 1,2 comparing them , existing methods directly model \u27e8I 1 , I 2 \u27e9 \u2192 W 1,2 mapping without the semantic understanding of individuals . In this paper , we introduce a Learningto-Compare ( L2C ) model , which learns to understand the semantic structures of these two images and compare them while learning to describe each one . We demonstrate that L2C benefits from a comparison between explicit semantic representations and singleimage captions , and generalizes better on the new testing image pairs . It outperforms the baseline on both automatic evaluation and human evaluation for the Birds-to-Words dataset . The task of generating textual descriptions of images tests a machine 's ability to understand visual data and interpret it in natural language . It is a fundamental research problem lying at the intersection of natural language processing , computer vision , and cognitive science . For example , single-image captioning ( Farhadi et al . , 2010 ; Kulkarni et al . , 2013 ; Vinyals et al . , 2015 ; Xu et al . , 2015 ) has been extensively studied . Recently , a new intriguing task , visual comparison , along with several benchmarks ( Jhamtani and Berg-Kirkpatrick , 2018 ; Tan et al . , 2019 ; Park et al . , 2019 ; Forbes et al . , 2019 ) has drawn increasing attention in the community . To complete the task and generate comparative descriptions , a machine should understand the visual differences between a pair of images ( see Figure 1 ) . Previous methods ( Jhamtani and Berg-Kirkpatrick , 2018 ) as the ResNet features ( He et al . , 2016 ) as a whole , and build end-to-end neural networks to predict the description of visual comparison directly . In contrast , humans can easily reason about the visual components of a single image and describe the visual differences between two images based on their semantic understanding of each one . Humans do not need to look at thousands of image pairs to describe the difference of new image pairs , as they can leverage their understanding of single images for visual comparison . Therefore , we believe that visual differences should be learned by understanding and comparing every single image 's semantic representation . A most recent work ( Zhang et al . , 2020 ) conceptually supports this argument , where they show that low-level ResNet visual features lead to poor generalization in vision-and-language navigation , and high-level semantic segmentation helps the agent Image I 1 Image I 2 V 1 V 2 Relation- enhanced features V g 1 V g 2 LSTM h t In this paper , we present a learning-to-compare framework for generating visual comparisons . 
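The graph reasoning that the L2C model in the 2021.eacl-main.196 entry applies over an image's semantic segments can be illustrated with a single graph-convolution-style propagation step. The adjacency matrix, feature sizes, and weights below are invented for the sketch and are not the model's actual parameters.

import numpy as np

def graph_reason(X, A, W):
    """X: (n, d) segment features; A: (n, n) adjacency; W: (d, d) weights.
    One propagation step: add self-loops, row-normalise, mix neighbours, project."""
    A_hat = A + np.eye(len(A))                     # self-connections
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum((A_hat / deg) @ X @ W, 0.0)  # normalised propagation + ReLU

n, d = 4, 8                                        # 4 segments, 8-dim features
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
print(graph_reason(X, A, rng.normal(size=(d, d))).shape)   # still (4, 8)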
Our segmentation encoder with semantic pooling and graph reasoning could construct structured image representations . We also show that learning to describe visual differences benefits from understanding the semantics of each image .", "challenge": "To generate a different description between two images, in contrast to how humans reason, existing models do not understand the semantics of each image.", "approach": "They propose a Learning-to-Compare model that can consider the semantic structures of each image using graph convolutional networks before comparing them and generating a description.", "outcome": "The proposed graph-based learning-to-compare model outperforms the baseline models using automatic and human evaluations and also generalizes better."} +{"id": "P16-1218", "document": "In this paper , we propose a neural network model for graph-based dependency parsing which utilizes Bidirectional LSTM ( BLSTM ) to capture richer contextual information instead of using high-order factorization , and enable our model to use much fewer features than previous work . In addition , we propose an effective way to learn sentence segment embedding on sentence-level based on an extra forward LSTM network . Although our model uses only first-order factorization , experiments on English Peen Treebank and Chinese Penn Treebank show that our model could be competitive with previous higher-order graph-based dependency parsing models and state-of-the-art models . Dependency parsing is a fundamental task for language processing which has been investigated for decades . It has been applied in a wide range of applications such as information extraction and machine translation . Among a variety of dependency parsing models , graph-based models are attractive for their ability of scoring the parsing decisions on a whole-tree basis . Typical graph-based models factor the dependency tree into subgraphs , including single arcs ( McDonald et al . , 2005 ) , sibling or grandparent arcs ( McDonald and Pereira , 2006 ; Carreras , 2007 ) or higher-order substructures ( Koo and Collins , 2010 ; Ma and Zhao , 2012 ) and then score the whole tree by summing scores of the subgraphs . In these models , subgraphs are usually represented as high-dimensional feature vectors which are then fed into a linear model to learn the feature weights . However , conventional graph-based models heavily rely on feature engineering and their performance is restricted by the design of features . In addition , standard decoding algorithm ( Eisner , 2000 ) only works for the first-order model which limits the scope of feature selection . To incorporate high-order features , Eisner algorithm must be somehow extended or modified , which is usually done at high cost in terms of efficiency . The fourth-order graph-based model ( Ma and Zhao , 2012 ) , which seems the highest-order model so far to our knowledge , requires O(n 5 ) time and O(n 4 ) space . Due to the high computational cost , highorder models are normally restricted to producing only unlabeled parses to avoid extra cost introduced by inclusion of arc-labels into the parse trees . To alleviate the burden of feature engineering , Pei et al . ( 2015 ) presented an effective neural network model for graph-based dependency parsing . They only use atomic features such as word unigrams and POS tag unigrams and leave the model to automatically learn the feature combinations . 
However , their model requires many atomic features and still relies on high-order factorization strategy to further improve the accuracy . Different from previous work , we propose an LSTM-based dependency parsing model in this paper and aim to use LSTM network to capture richer contextual information to support parsing decisions , instead of adopting a high-order factorization . The main advantages of our model are as follows : \u2022 By introducing Bidirectional LSTM , our model shows strong ability to capture potential long range contextual information and exhibits improved accuracy in recovering long distance dependencies . It is different to previous work in which a similar effect is usually achieved by high-order factorization . More-over , our model also eliminates the need for setting feature selection windows and reduces the number of features to a minimum level . \u2022 We propose an LSTM-based sentence segment embedding method named LSTM-Minus , in which distributed representation of sentence segment is learned by using subtraction between LSTM hidden vectors . Experiment shows this further enhances our model 's ability to access to sentence-level information . \u2022 Last but important , our model is a first-order model using standard Eisner algorithm for decoding , the computational cost remains at the lowest level among graph-based models . Our model does not trade-off efficiency for accuracy . We evaluate our model on the English Penn Treebank and Chinese Penn Treebank , experiments show that our model achieves competitive parsing accuracy compared with conventional high-order models , however , with a much lower computational cost . In this paper , we propose an LSTM-based neural network model for graph-based dependency parsing . Utilizing Bidirectional LSTM and segment embeddings learned by LSTM-Minus allows our model access to sentence-level information , making our model more accurate in recovering longdistance dependencies with only first-order factorization . Experiments on PTB and CTB show that our model could be competitive with conventional high-order models with a faster speed .", "challenge": "Neural graph-based dependency parsers can alleviate feature engineering but require atomic features and use a high-order factorization strategy which is incompatible with standard decoding algorithms.", "approach": "They propose to use Bidirectional LSTM to capture contextual information instead of high-order factorization and learn sentence segment embeddings on the sentence-level by another LSTM.", "outcome": "While the proposed model uses first-order factorization, it performs competitively with state-of-the-art models on English and Chinese Penn Treebank by capturing long range contextual information."} +{"id": "P11-1162", "document": "Text mining and data harvesting algorithms have become popular in the computational linguistics community . They employ patterns that specify the kind of information to be harvested , and usually bootstrap either the pattern learning or the term harvesting process ( or both ) in a recursive cycle , using data learned in one step to generate more seeds for the next . They therefore treat the source text corpus as a network , in which words are the nodes and relations linking them are the edges . The results of computational network analysis , especially from the world wide web , are thus applicable . Surprisingly , these results have not yet been broadly introduced into the computational linguistics community . 
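The LSTM-Minus idea from the P16-1218 entry above represents a sentence segment as the difference between the LSTM hidden vectors at the segment's boundaries. A minimal PyTorch sketch; the single forward LSTM, the dimensions, and the toy input are assumptions made for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)
emb_dim, hid_dim, sent_len = 8, 16, 6
lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

tokens = torch.randn(1, sent_len, emb_dim)      # embedded sentence (batch of 1)
hidden, _ = lstm(tokens)                        # (1, sent_len, hid_dim)

def segment_embedding(hidden, i, j):
    """Embedding of the segment spanning positions i..j (inclusive, 0-based):
    subtract the hidden state just before the segment from the one at its end."""
    h = hidden[0]
    start = h[i - 1] if i > 0 else torch.zeros_like(h[0])
    return h[j] - start

print(segment_embedding(hidden, 2, 4).shape)    # torch.Size([16])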
In this paper we show how various results apply to text mining , how they explain some previously observed phenomena , and how they can be helpful for computational linguistics applications . Text mining / harvesting algorithms have been applied in recent years for various uses , including learning of semantic constraints for verb participants ( Lin and Pantel , 2002 ) related pairs in various relations , such as part-whole ( Girju et al . , 2003 ) , cause ( Pantel and Pennacchiotti , 2006 ) , and other typical information extraction relations , large collections of entities ( Soderland et al . , 1999 ; Etzioni et al . , 2005 ) , features of objects ( Pasca , 2004 ) and ontologies ( Carlson et al . , 2010 ) . They generally start with one or more seed terms and employ patterns that specify the desired information as it relates to the seed(s ) . Several approaches have been developed specifically for learning patterns , including guided pattern collection with manual filtering ( Riloff and Shepherd , 1997 ) automated surface-level pattern induction ( Agichtein and Gravano , 2000 ; Ravichandran and Hovy , 2002 ) probabilistic methods for taxonomy relation learning ( Snow et al . , 2005 ) and kernel methods for relation learning ( Zelenko et al . , 2003 ) . Generally , the harvesting procedure is recursive , in which data ( terms or patterns ) gathered in one step of a cycle are used as seeds in the following step , to gather more terms or patterns . This method treats the source text as a graph or network , consisting of terms ( words ) as nodes and inter-term relations as edges . Each relation type induces a different network1 . Text mining is a process of network traversal , and faces the standard problems of handling cycles , ranking search alternatives , estimating yield maxima , etc . The computational properties of large networks and large network traversal have been studied intensively ( Sabidussi , 1966 ; Freeman , 1979 ; Watts and Strogatz , 1998 ) and especially , over the past years , in the context of the world wide web ( Page et al . , 1999 ; Broder et al . , 2000 ; Kleinberg and Lawrence , 2001 ; Li et al . , 2005 ; Clauset et al . , 2009 ) . Surprisingly , except in ( Talukdar and Pereira , 2010 ) , this work has not yet been related to text mining research in the computational linguistics community . The work is , however , relevant in at least two ways . It sometimes explains why text mining algo-rithms have the limitations and thresholds that are empirically found ( or suspected ) , and it may suggest ways to improve text mining algorithms for some applications . In Section 2 , we review some related work . In Section 3 we describe the general harvesting procedure , and follow with an examination of the various statistical properties of implicit semantic networks in Section 4 , using our implemented harvester to provide illustrative statistics . In Section 5 we discuss implications for computational linguistics research . In this paper we describe the implicit ' hidden ' semantic network graph structure induced over the text of the web and other sources by the semantic relations people use in sentences . We describe how term harvesting patterns whose seed terms are harvested and then applied recursively can be used to discover these semantic term networks . 
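Under the network view in the P11-1162 entry, harvested terms are nodes and harvested relations are edges, so standard web-graph statistics (degree distribution, clustering, transitivity) can be read off directly. A small sketch with networkx over a made-up set of harvested pairs:

import networkx as nx

# (term, related term) pairs as a harvesting pattern might return them
pairs = [("engine", "car"), ("wheel", "car"), ("car", "vehicle"),
         ("truck", "vehicle"), ("engine", "truck"), ("wheel", "truck")]

G = nx.Graph()
G.add_edges_from(pairs)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("degree sequence:", degrees)              # a heavy tail hints at a power law
print("transitivity:", nx.transitivity(G))      # global clustering coefficient
print("avg clustering:", nx.average_clustering(G))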
Although these networks differ considerably from the web in relation density , type , and network size , we show , somewhat surprisingly , that the same power-law , smallworld effect , transitivity , and most other characteristics that apply to the web 's hyperlinked network structure hold also for the implicit semantic term graphs-certainly for the semantic relations and languages we have studied , and most probably for almost all semantic relations and human languages . This rather interesting observation leads us to surmise that the hyperlinks people create in the web are of essentially the same type as the semantic relations people use in normal sentences , and that they form an extension of normal language that was not needed before because people did not have the ability within the span of a single sentence to ' embed ' structures larger than a clause-certainly not a whole other page 's worth of information . The principal exception is the academic citation reference ( lexicalized as \" see \" ) , which is not used in modern webpages . Rather , the ' lexicalization ' now used is a formatting convention : the hyperlink is colored and often underlined , facilities offered by computer screens but not available to speech or easy in traditional typesetting .", "challenge": "Corpora can be seen as a network in text mining and data harvesting however how recent network analysis methods would perform is unknown.", "approach": "They study if recent network analysis tools can provide insights for computational linguistics by connecting two hidden semantic network graph structures in texts.", "outcome": "They show that there are similarities between networks on the Web and ones induced from texts such as in power-law, small-world effect and transitivity."} +{"id": "2021.acl-long.301", "document": "Question answering ( QA ) systems for large document collections typically use pipelines that ( i ) retrieve possibly relevant documents , ( ii ) re-rank them , ( iii ) rank paragraphs or other snippets of the top-ranked documents , and ( iv ) select spans of the top-ranked snippets as exact answers . Pipelines are conceptually simple , but errors propagate from one component to the next , without later components being able to revise earlier decisions . We present an architecture for joint document and snippet ranking , the two middle stages , which leverages the intuition that relevant documents have good snippets and good snippets come from relevant documents . The architecture is general and can be used with any neural text relevance ranker . We experiment with two main instantiations of the architecture , based on POSIT-DRMM ( PDRMM ) and a BERT-based ranker . Experiments on biomedical data from BIOASQ show that our joint models vastly outperform the pipelines in snippet retrieval , the main goal for QA , with fewer trainable parameters , also remaining competitive in document retrieval . Furthermore , our joint PDRMM-based model is competitive with BERT-based models , despite using orders of magnitude fewer parameters . These claims are also supported by human evaluation on two test batches of BIOASQ . To test our key findings on another dataset , we modified the Natural Questions dataset so that it can also be used for document and snippet retrieval . Our joint PDRMM-based model again outperforms the corresponding pipeline in snippet retrieval on the modified Natural Questions dataset , even though it performs worse than the pipeline in document retrieval . 
We make our code and the modified Natural Questions dataset publicly available . Question answering ( QA ) systems that search large document collections ( Voorhees , 2001 ; Tsatsaro-nis et al . , 2015 ; Chen et al . , 2017 ) typically use pipelines operating at gradually finer text granularities . A fully-fledged pipeline includes components that ( i ) retrieve possibly relevant documents typically using conventional information retrieval ( IR ) ; ( ii ) re-rank the retrieved documents employing a computationally more expensive document ranker ; ( iii ) rank the passages , sentences , or other ' snippets ' of the top-ranked documents ; and ( iv ) select spans of the top-ranked snippets as ' exact ' answers . Recently , stages ( ii)-(iv ) are often pipelined neural models , trained individually ( Hui et al . , 2017 ; Pang et al . , 2017 ; Lee et al . , 2018 ; McDonald et al . , 2018 ; Pandey et al . , 2019 ; Mackenzie et al . , 2020 ; Sekuli\u0107 et al . , 2020 ) . Although pipelines are conceptually simple , errors propagate from one component to the next ( Hosein et al . , 2019 ) , without later components being able to revise earlier decisions . For example , once a document has been assigned a low relevance score , finding a particularly relevant snippet can not change the document 's score . We propose an architecture for joint document and snippet ranking , i.e. , stages ( ii ) and ( iii ) , which leverages the intuition that relevant documents have good snippets and good snippets come from relevant documents . We note that modern web search engines display the most relevant snippets of the top-ranked documents to help users quickly identify truly relevant documents and answers ( Sultan et al . , 2016 ; Xu et al . , 2019 ; Yang et al . , 2019a ) . The top-ranked snippets can also be used as a starting point for multi-document query-focused summarization , as in the BIOASQ challenge ( Tsatsaronis et al . , 2015 ) . Hence , methods that identify good snippets are useful in several other applications , apart from QA . We also note that many neural models for stage ( iv ) have been proposed , often called QA or Machine Reading Comprehension ( MRC ) models ( Kadlec et al . , 2016 ; Cui et al . , 2017 ; Zhang et al . , 2020 ) , but they typically search for answers only in a particular , usually paragraph-sized snippet , which is given per question . For QA systems that search large document collections , stages ( ii ) and ( iii ) are also important , if not more important , but have been studied much less in recent years , and not in a single joint neural model . The proposed joint architecture is general and can be used in conjunction with any neural text relevance ranker ( Mitra and Craswell , 2018 ) . Given a query and N possibly relevant documents from stage ( i ) , the neural text relevance ranker scores all the snippets of the N documents . Additional neural layers re-compute the score ( ranking ) of each document from the scores of its snippets . Other layers then revise the scores of the snippets taking into account the new scores of the documents . The entire model is trained to jointly predict document and snippet relevance scores . We experiment with two main instantiations of the proposed architecture , using POSIT-DRMM ( McDonald et al . , 2018 ) , hereafter called PDRMM , as the neural text ranker , or a BERT-based ranker ( Devlin et al . , 2019 ) . 
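The joint architecture sketched just above, snippet scores from any base ranker, a document score derived from its own snippets, and snippet scores revised with the new document scores, reduces to a few lines once the base snippet scores are given. The max-pooling aggregation and the mixing weight below are assumptions for illustration, not the paper's exact layers.

import torch

def joint_rank(snippet_scores, doc_ids, alpha=0.7):
    """snippet_scores: (S,) base relevance of each snippet; doc_ids: (S,) index of the
    document each snippet belongs to. Returns document scores and revised snippet scores."""
    n_docs = int(doc_ids.max()) + 1
    doc_scores = torch.full((n_docs,), -1e9)
    for d in range(n_docs):                                # doc score = its best snippet
        doc_scores[d] = snippet_scores[doc_ids == d].max()
    revised = alpha * snippet_scores + (1 - alpha) * doc_scores[doc_ids]
    return doc_scores, revised                             # good snippets lift docs, and vice versa

scores = torch.tensor([0.2, 0.9, 0.4, 0.1, 0.6])
docs = torch.tensor([0, 0, 1, 1, 2])
print(joint_rank(scores, docs))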
We show how both PDRMM and BERT can be used to score documents and snippets in pipelines , then how our architecture can turn them into models that jointly score documents and snippets . Experimental results on biomedical data from BIOASQ ( Tsatsaronis et al . , 2015 ) show the joint models vastly outperform the corresponding pipelines in snippet extraction , with fewer trainable parameters . Although our joint architecture is engineered to favor retrieving good snippets ( as a near-final stage of QA ) , results show that the joint models are also competitive in document retrieval . We also show that our joint version of PDRMM , which has the fewest parameters of all models and does not use BERT , is competitive to BERT-based models , while also outperforming the best system of BIOASQ 6 ( Brokos et al . , 2018 ) in both document and snippet retrieval . These claims are also supported by human evaluation on two test batches of BIOASQ 7 ( 2019 ) . To test our key findings on another dataset , we modified Natural Questions ( Kwiatkowski et al . , 2019 ) , which only includes questions and answer spans from a single document , so that it can be used for document and snippet retrieval . Again , our joint PDRMMbased model largely outperforms the corresponding pipeline in snippet retrieval on the modified Natural Questions , though it does not perform better than the pipeline in document retrieval , since the joint model is geared towards snippet retrieval , i.e. , even though it is forced to extract snippets from fewer relevant documents . Finally , we show that all the neural pipelines and joint models we considered improve the BM25 ranking of traditional IR on both datasets . We make our code and the modified Natural Questions publicly available.1 2 Methods Our contributions can be summarized as follows : ( 1 ) We proposed an architecture to jointly rank documents and snippets with respect to a question , two particularly important stages in QA for large document collections ; our architecture can be used with any neural text relevance model . ( 2 ) We instantiated the proposed architecture using a recent neural relevance model ( PDRMM ) and a BERTbased ranker . ( 3 ) Using biomedical data ( from BIOASQ ) , we showed that the two resulting joint models ( PDRMM-based and BERT-based ) vastly outperform the corresponding pipelines in snippet re-trieval , the main goal in QA for document collections , using fewer parameters , and also remaining competitive in document retrieval . ( 4 ) We showed that the joint model ( PDRMM-based ) that does not use BERT is competitive with BERT-based models , outperforming the best BIOASQ 6 system ; our joint models ( PDRMM-and BERT-based ) also outperformed all BIOASQ 7 competitors . ( 5 ) We provide a modified version of the Natural Questions dataset , suitable for document and snippet retrieval . ( 6 ) We showed that our joint PDRMM-based model also largely outperforms the corresponding pipeline on open-domain data ( Natural Questions ) in snippet retrieval , even though it performs worse than the pipeline in document retrieval . ( 7 ) We showed that all the neural pipelines and joint models we considered improve the traditional BM25 ranking on both datasets . ( 8) We make our code publicly available . We hope to extend our models and datasets for stage ( iv ) , i.e. , to also identify exact answer spans within snippets ( paragraphs ) , similar to the answer spans of SQUAD ( Rajpurkar et al . , 2016 ( Rajpurkar et al . , , 2018 ) ) . 
This would lead to a multi-granular retrieval task , where systems would have to retrieve relevant documents , relevant snippets , and exact answer spans from the relevant snippets . BIOASQ already includes this multi-granular task , but exact answers are provided only for factoid questions and they are freely written by humans , as in MS-MARCO , with similar limitations . Hence , appropriately modified versions of the BIOASQ datasets are needed . Table 4 shows that further performance gains ( 6.80 to 7.85 document MAP , 15.42 to 17.34 snippet MAP ) are possible by tuning the weights of the two losses . The best scores are obtained when using both the extra sentence and document features . However , the model performs reasonably well even when one of the two types of extra features is removed , with the exception of \u03bb snip = 10 . The standard deviations of the MAP scores over the folds of the cross-validation indicate that the performance of the model is reasonably stable .", "challenge": "Pipeline systems for question answering systems for large document collections suffer from error propagation from one stage to the following.", "approach": "They propose to jointly model document and snippet rankings in a generic way to make work with any neural text relevance rankers.", "outcome": "The proposed systems outperform existing BERT-based models by using much fewer parameters on biomedical data and also on the Natural Questions dataset."} +{"id": "H05-1121", "document": "Query expansion techniques generally select new query terms from a set of top ranked documents . Although a user 's manual judgment of those documents would much help to select good expansion terms , it is difficult to get enough feedback from users in practical situations . In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document . In order to tackle this specific condition , we introduce two refinements to a well-known query expansion technique . One is application of a transductive learning technique in order to increase relevant documents . The other is a modified parameter estimation method which laps the predictions by multiple learning trials and try to differentiate the importance of candidate terms for expansion in relevant documents . Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures . Query expansion is a simple but very useful technique to improve search performance by adding some terms to an initial query . While many query expansion techniques have been proposed so far , a standard method of performing is to use relevance information from a user ( Ruthven , 2003 ) . If we can use more relevant documents in query expansion , the likelihood of selecting query terms achieving high search improvement increases . However it is impractical to expect enough relevance information . Some researchers said that a user usually notifies few relevance feedback or nothing ( Dumais and et al . , 2003 ) . In this paper we investigate the potential performance of query expansion under the condition that we can utilize little relevance information , especially we only know a relevant document and a nonrelevant document . To overcome the lack of relevance information , we tentatively increase the number of relevant documents by a machine learning technique called Transductive Learning . 
Compared with ordinal inductive learning approach , this learning technique works even if there is few training examples . In our case , we can use many documents in a hit-list , however we know the relevancy of few documents . When applying query expansion , we use those increased documents as if they were true relevant ones . When applying the learning , there occurs some difficult problems of parameter settings . We also try to provide a reasonable resolution for the problems and show the effectiveness of our proposed method in experiments . The point of our query expansion method is that we focus on the availability of relevance information in practical situations . There are several researches which deal with this problem . Pseudo relevance feedback which assumes top n documents as relevant ones is one example . This method is simple and relatively effective if a search engine returns a hit-list which contains a certain number of relative documents in the upper part . However , unless this assumption holds , it usually gives a worse ranking than the initial search . Thus several researchers propose some specific procedure to make pseudo feedback be effective ( Yu and et al , 2003 ; Lam-Adesina and Jones , 2001 ) . In another way , Onoda ( Onoda et al . , 2004 ) tried to apply one-class SVM ( Support Vector Machine ) to relevance feedback . Their purpose is to improve search performance by using only nonrelevant documents . Though their motivation is similar to ours in terms of applying a machine learning method to complement the lack of relevance information , the assumption is somewhat different . Our assumption is to utilizes manual but the minimum relevance judgment . Transductive leaning has already been applied in the field of image retrieval ( He and et al . , 2004 ) . In this research , they proposed a transductive method called the manifold-ranking algorithm and showed its effectiveness by comparing with active learning based Support Vector Machine . However , their setting of relevance judgment is not different from many other traditional researches . They fix the total number of images that are marked by a user to 20 . As we have already claimed , this setting is not practical because most users feel that 20 is too much for judgment . We think none of research has not yet answered the question . For relevance judgment , most of the researches have adopted either of the following settings . One is the setting of \" Enough relevant documents are available \" , and the other is \" No relevant document is available \" . In contrast to them , we adopt the setting of \" Only one relevant document is available \" . Our aim is to achieve performance improvement with the minimum effort of judging relevancy of documents . The reminder of this paper is structured as follows . Section 2 describes two fundamental techniques for our query expansion method . Section 3 explains a technique to complement the smallness of manual relevance judgment . Section 4 introduces a whole procedure of our query expansion method step by step . Section 5 shows empirical evidence of the effectiveness of our method compared with two traditional query expansion methods . Section 6 investigates the experimental results more in detail . Finally , Section 7 summarizes our findings . In this paper we proposed a novel query expansion method which only use the minimum manual judgment . 
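The core move in the H05-1121 entry, take one judged relevant and one judged non-relevant document as labels, let a transductive learner predict the rest of the hit-list, then draw expansion terms from the predicted-relevant documents, can be imitated with scikit-learn. LabelSpreading is used here purely as a stand-in for the SGT solver the paper actually relies on, and the hit-list is invented.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

hit_list = ["query expansion improves retrieval",
            "transductive learning with few labels",
            "cooking pasta at home",
            "relevance feedback for retrieval systems",
            "gardening tips for spring"]
labels = np.array([1, -1, 0, -1, -1])       # 1 = relevant, 0 = non-relevant, -1 = unjudged

X = TfidfVectorizer().fit_transform(hit_list).toarray()
pred = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, labels).transduction_

relevant_docs = [doc for doc, p in zip(hit_list, pred) if p == 1]
terms = {w for doc in relevant_docs for w in doc.split()}
print(sorted(terms))                        # candidate expansion terms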
To complement the lack of relevant documents , this method utilizes the SGT transductive learning algorithm to predict the relevancy of unjudged documents . Since the performance of SGT much depends on an estimation of the fraction of relevant documents , we propose a method to sample some good fraction values . We also propose a method to laps the predictions of multiple SGT trials with above sampled fraction values and try to differentiate the importance of candidate terms for expansion in relevant documents . The experimental results showed our method outperforms other query expansion methods in the evaluations of several criteria .", "challenge": "Although existing works on query expansion assume some relevant documents at their disposal, it is impractical because users provide few or no feedback.", "approach": "They consider an experimental setup where there is only one relevant document provided by a user and extend a transductive learning technique to compensate.", "outcome": "The proposed technique outperforms traditional query expansion methods in several evaluation metrics."} +{"id": "D19-1367", "document": "In this paper , we study differentiable neural architecture search ( NAS ) methods for natural language processing . In particular , we improve differentiable architecture search by removing the softmax-local constraint . Also , we apply differentiable NAS to named entity recognition ( NER ) . It is the first time that differentiable NAS methods are adopted in NLP tasks other than language modeling . On both the PTB language modeling and CoNLL-2003 English NER data , our method outperforms strong baselines . It achieves a new state-ofthe-art on the NER task . Neural architecture search ( NAS ) has become popular recently in machine learning for their ability to find new models and to free researchers from the hard work of designing network architectures . The earliest of these approaches use reinforcement learning ( RL ) to learn promising architectures in a discrete space ( Zoph and Le , 2016 ) , whereas others have successfully modeled the problem in a continuous manner ( Liu et al . , 2019 ; Xie et al . , 2019b ; Huang and Xiang , 2019 ) . As an instance of the latter , differentiable architecture search ( DARTS ) employs continuous relaxation to architecture representation and makes gradient descent straightforwardly applicable to search . This leads to an efficient search process that is orders of magnitude faster than the RL-based counterparts . Like recent methods in NAS ( Xie and Yuille , 2017 ; Zoph and Le , 2016 ; Baker et al . , 2016 ) , DARTS represents networks as a directed acyclic graph for a given computation cell ( see Figure 1(a ) ) . An edge between nodes performs a predefined operation to transform the input ( i.e. , tail ) Figure 1 : An overview of DARTS cell and our cell to the output ( i.e. , head ) . For a continuous network space , DARTS uses the softmax trick to relax the categorical choice of edges to soft decisions . Then , one can optimize over the graph using standard gradient descent methods . The optimized network is inferred by choosing the edges with maximum weights in softmax . However , DARTS is a \" local \" model because the softmax-based relaxation is imposed on each bundle of edges between two nodes . This leads to a biased model in that edges coming from different nodes are not comparable . Such a constraint limits the inference space to sub-graphs with one edge between each pair of nodes . 
Also , the learned network might be redundant because every node has to receive edges from all predecessors no matter they are necessary or not . This problem is similar to the bias problem in other graph-based models where local decisions make the model nonoptimal ( Lafferty et al . , 2001 ; Daphne Koller and Nir Friedman , 2009 ) . Here we present an improvement of DARTS , called I-DARTS , that further relaxes the softmaxlocal constraint . The idea is simple -we consider all incoming edges to a given node in a single softmax . This offers a broader choice of edges and enlarges the space we infer the network from . For example , one can simultaneously select multiple important edges between two nodes and leave some node pairs unlinked ( see Figure 1(b ) ) . I-DARTS outperforms strong baselines on the PTB language modeling and CoNLL named entity recognition ( NER ) tasks . This gives a new stateof-the-art on the NER dataset . To our knowledge , it is the first time to apply differentiable architecture search methods to NLP tasks other than language modeling . More interestingly , we observe that our method is 1.4X faster than DARTS for convergence of architecture search . Also , we provide the architectures learned by I-DARTS , which can be referred for related tasks . We improved the DARTS to address the bias problem by removing the softmax-local constraint . Our method is search efficient and discovers several better architectures for PTB language modeling and CoNLL named entity recognition ( NER ) tasks . We plan to consider the network density problem in search and apply I-DARTS to more tasks in our future study .", "challenge": "Softmax-based relaxation used for differentiable architecture search makes edges coming from different nodes incomparable limiting the inference space to sub-graphs.", "approach": "They propose removing the softmax constraint from the existing differentiable architecture search method to enable a broader choice of edges and enlarge the search space.", "outcome": "The proposed method outperforms an existing method on the CoNLL NER dataset and also achieves 1.4 times faster convergence time."} +{"id": "2021.acl-long.339", "document": "Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style . Meanwhile , using monolingual stylistic data to increase style intensity often leads to the expense of decreasing content relevance . In this paper , we propose to disentangle the content and style in latent space by diluting sentence-level information in style representations . Combining the desired style representation and a response content representation will then obtain a stylistic response . Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores , compared with baselines . Human evaluation results show that our approach significantly improves style intensity and maintains content relevance . Linguistic style is an essential aspect of natural language interaction and provides particular ways of using language to engage with the audiences ( Kabbara and Cheung , 2016 ) . In human-bot conversations , it is crucial to generate stylistic responses for increasing user engagement to conversational systems ( Gan et al . , 2017 ) . Currently , most of the existing parallel datasets are not stylistically consistent . Samples in these datasets are usually contributed by a variety of users , resulting in an averaging effect across style characteristics ( Zhang et al . 
, 2018a ) . Meanwhile , constructing a parallel stylistic dataset for training the open-domain conversational agents is both labor-intensive and time-consuming . Recent studies show the effect of stylizing responses using a monolingual dataset in the desired style and a conventional conversational dataset ( Niu and Bansal , 2018 ; Gao et al . , 2019b ) . However , increasing style intensity often leads to ( Niu and Bansal , 2018 ) , Style Fusion ( Gao et al . , 2019b ) , and our approach , targeting the Holmes style , which is quite formal and polite . the expense of decreasing content relevance between dialogue history and response . As an example in Figure 1 shows , Niu and Bansal ( 2018 ) independently train a response generation model and a stylistic language model and subsequently interpolates them in the inference phase . Lacking the interaction between the stylistic language model and response generation encoder , it usually yields a trade-off between style intensity and content relevance . Gao et al . ( 2019a , b ) fuse a structured latent space where the direction denotes the diversity , and the distance denotes style intensity and content relevance . The main issue is that style intensity and content relevance are contradictory in measurement but are coupling to the same \" distance \" metric of the latent space . To sum up , the key issue of the above studies is the improper entanglement of style and content . To address the issue , we propose to disentangle the style and content of a response . The disentanglement is conducted on the structured latent space , where each sentence ( dialogue history , response , and stylistic sentence ) is projected into a vector representation . We further split the representation into two components : style and content representations . The former is a corpus-level feature since sentences within a dataset have the same style . In contrast , the content representation is a sentence-level feature decided by a sentence itself . We thus disentangle the content and style by diluting sentence-level information in the style representation . This encourages the encoding of content information into the content representation . Otherwise , the content information will be corrupted in the style representation , making it hard to reconstruct the original content in the subsequent decoding process . We conduct experiments on DailyDialogue conversational dataset ( Li et al . , 2017 ) and Holmes monolingual stylistic dataset ( Gao et al . , 2019b ) . Experimental results show that our proposed approach improves style intensity and maintains content relevance . Our contributions are listed below : \u2022 We propose a unified framework to simultaneously improve style intensity and maintain content relevance for neural stylistic response generation . \u2022 We introduce a scheme of learning latent variables by a diluting strategy to disentangle the style and content . \u2022 Experimental results show that our approach achieves higher performance in style intensity without decreasing content relevance , compared with previous approaches . We propose a uniform framework to simultaneously improve the style intensity and maintain the content relevance for neural stylistic response generation . In contrast to existing approaches , our approach disentangles the style and the content in the latent space by a diluting strategy . 
Experiments show that our approach improves the style intensity of generated responses and maintains the content relevance at the same time , which demonstrates the effectiveness of this approach .", "challenge": "Because existing studies for open-domain conversational response generation have improper entanglement of style and content, they fail to achieve the desired styles for content relevance.", "approach": "They propose to disentangle the content and style in latent space by diluting sentence-level information, then combine style and content representations to produce stylistic responses.", "outcome": "The proposed method achieves a higher BERT-based style intensity score and comparable BLEU scores, and human evaluation shows improvements in style intensity and content relevance."} +{"id": "2021.naacl-main.73", "document": "Large Transformers pretrained over clinical notes from Electronic Health Records ( EHR ) have afforded substantial gains in performance on predictive clinical tasks . The cost of training such models ( and the necessity of data access to do so ) coupled with their utility motivates parameter sharing , i.e. , the release of pretrained models such as ClinicalBERT ( Alsentzer et al . , 2019 ) . While most efforts have used deidentified EHR , many researchers have access to large sets of sensitive , nondeidentified EHR with which they might train a BERT model ( or similar ) . Would it be safe to release the weights of such a model if they did ? In this work , we design a battery of approaches intended to recover Personal Health Information ( PHI ) from a trained BERT . Specifically , we attempt to recover patient names and conditions with which they are associated . We find that simple probing methods are not able to meaningfully extract sensitive information from BERT trained over the MIMIC-III corpus of EHR . However , more sophisticated \" attacks \" may succeed in doing so : To facilitate such research , we make our experimental setup and baseline probing models available . 1 Pretraining large ( masked ) language models such as BERT ( Devlin et al . , 2019 ) over domain specific corpora has yielded consistent performance gains across a broad range of tasks . In biomedical NLP , this has often meant pretraining models over collections of Electronic Health Records ( EHRs ) ( Alsentzer et al . , 2019 ) . For example , Huang et al . ( 2019 ) showed that pretraining models over EHR data improves performance on clinical predictive tasks . Given their empirical utility , and the fact that pretraining large networks requires a nontrivial amount of compute , there is a natural desire to equal contribution . 1 https://github.com / elehman16/ exposing_patient_data_release . share the model parameters for use by other researchers in the community . However , in the context of pretraining models over patient EHR , this poses unique potential privacy concerns : Might the parameters of trained models leak sensitive patient information ? In the United States , the Health Insurance Portability and Accountability Act ( HIPAA ) prohibits the sharing of such text if it contains any reference to Protected Health Information ( PHI ) . If one removes all reference to PHI , the data is considered \" deidentified \" , and is therefore legal to share . While researchers may not directly share nondeidentified text,2 it is unclear to what extent models pretrained on non-deidentified data pose privacy risks . 
Further , recent work has shown that general purpose large language models are prone to memorizing sensitive information which can subsequently be extracted ( Carlini et al . , 2020 ) . In the context of biomedical NLP , such concerns have been cited as reasons for withholding direct publication of trained model weights ( McKinney et al . , 2020 ) . These uncertainties will continue to hamper dissemination of trained models among the broader biomedical NLP research community , motivating a need to investigate the susceptibility of such models to adversarial attacks . This work is a first step towards exploring the potential privacy implications of sharing model weights induced over non-deidentified EHR text . We propose and run a battery of experiments intended to evaluate the degree to which Transformers ( here , BERT ) pretrained via standard masked language modeling objectives over notes in EHR might reveal sensitive information ( Figure 1 ) . We find that simple methods are able to recover associations between patients and conditions at rates better than chance , but not with performance beyond that achievable using baseline condition frequencies . This holds even when we enrich clinical notes by explicitly inserting patient names into every sentence . Our results using a recently proposed , more sophisticated attack based on generating text ( Carlini et al . , 2020 ) are mixed , and constitute a promising direction for future work . We have performed an initial investigation into the degree to which large Transformers pretrained over EHR data might reveal sensitive personal health information ( PHI ) . We ran a battery of experiments in which we attempted to recover such information from BERT model weights estimated over the MIMIC-III dataset ( into which we artificially reintroduced patient names , as MIMIC is deidentified ) . Across these experiments , we found that we were mostly unable to meaningfully expose PHI using simple methods . Moreover , even when we constructed a variant of data in which we prepended patient names to every sentence prior to pretraining BERT , we were still unable to recover sensitive information reliably . Our initial results using more advanced techniques based on generation ( Carlini et al . 2020 ; Table 9 ) are intriguing but inconclusive at present . Our results certainly do not rule out the possibility that more advanced methods might reveal PHI . But , these findings do at least suggest that doing so is not trivial . To facilitate further research , we make our experimental setup and baseline probing models available : https://github.com/ elehman16 / exposing_patient_data_release .", "challenge": "Whether distributing a language model such as BERT pre-trained on non-deidentified data like Electronic Health Records can leak sensitive patient information remains unknown.", "approach": "They design a set of approaches aiming to recover Personal Health Information such as patient names and associated conditions from a trained BERT.", "outcome": "They find that simple probing methods are unable to extract sensitive information better than change even by using patient names explicitly."} +{"id": "P09-1092", "document": "We investigate the influence of information status ( IS ) on constituent order in German , and integrate our findings into a loglinear surface realisation ranking model . We show that the distribution of pairs of IS categories is strongly asymmetric . 
Moreover , each category is correlated with morphosyntactic features , which can be automatically detected . We build a loglinear model that incorporates these asymmetries for ranking German string realisations from input LFG F-structures . We show that it achieves a statistically significantly higher BLEU score than the baseline system without these features . There are many factors that influence word order , e.g. humanness , definiteness , linear order of grammatical functions , givenness , focus , constituent weight . In some cases , it can be relatively straightforward to automatically detect these features ( i.e. in the case of definiteness , this is a syntactic property ) . The more complex the feature , the more difficult it is to automatically detect . It is common knowledge that information status1 ( henceforth , IS ) has a strong influence on syntax and word order ; for instance , in inversions , where the subject follows some preposed element , Birner ( 1994 ) reports that the preposed element must not be newer in the discourse than the subject . We would like to be able to use information related to IS in the automatic generation of German text . Ideally , we would automatically annotate text with IS labels and learn from this data . Unfortunately , however , to date , there has been little success in automatically annotating text with IS . We believe , however , that despite this shortcoming , we can still take advantage of some of the insights gained from looking at the influence of IS on word order . Specifically , we look at the problem from a more general perspective by computing an asymmetry ratio for each pair of IS categories . Results show that there are a large number of pairs exhibiting clear ordering preferences when co-occurring in the same clause . The question then becomes , without being able to automatically detect these IS category pairs , can we , nevertheless , take advantage of these strong asymmetric patterns in generation . We investigate the ( automatically detectable ) morphosyntactic characteristics of each asymmetric IS pair and integrate these syntactic asymmetric properties into the generation process . The paper is structured as follows : Section 2 outlines the underlying realisation ranking system for our experiments . Section 3 introduces information status and Section 4 describes how we extract and measure asymmetries in information status . In Section 5 , we examine the syntactic characteristics of the IS asymmetries . Section 6 outlines realisation ranking experiments to test the integration of IS into the system . We discuss our findings in Section 7 and finally we conclude in Section 8 . In this paper we presented a novel method of including IS into the task of generation ranking . Since automatic annotation of IS labels themselves is not currently possible , we approximate the IS categories by their syntactic characteristics . By calculating strong asymmetries between pairs of IS labels , and establishing the most frequent syntactic characteristics of these asymmetries , we designed a new set of features for a log-linear ranking model . In comparison to a baseline model , we achieve statistically significant improvement in BLEU score . 
We showed that these improvements were not only due to the effect of purely syntactic asymmetries , but that the IS asymmetries were what drove the improved model .", "challenge": "While it is known that information status has a strong influence on syntax and word order, its automatic annotation has not been successful.", "approach": "They propose to integrate syntactic asymmetric properties of information status into a log-linear model to generate German by approximating their category pairs by syntactic characteristics.", "outcome": "The proposed approach achieves a higher BLEU score than the baseline system, not only by the effect of purely syntactic asymmetries but of information status."} +{"id": "E99-1005", "document": "This paper explores the determinants of adjective-noun plausibility by using correlation analysis to compare judgements elicited from human subjects with five corpus-based variables : co-occurrence frequency of the adjective-noun pair , noun frequency , conditional probability of the noun given the adjective , the log-likelihood ratio , and Resnik 's ( 1993 ) selectional association measure . The highest correlation is obtained with the co-occurrence frequency , which points to the strongly lexicalist and collocational nature of adjective-noun combinations . Research on linguistic plausibility has focused mainly on the effects of argument plausibility during the processing of locally ambiguous sentences . Psycholinguists have investigated whether the plausibility of the direct object affects reading times for sentences like ( 1 ) . Here , argument plausibility refers to \" pragmatic plausibility \" or \" local semantic fit \" ( Holmes et al . , 1989 ) , and judgements of plausibility are typically obtained by asking subjects to rate sentence fragments containing verb-argument combinations ( as an example consider the bracketed parts of the sentences in ( 1 ) ) . Such experiments typically use an ordinal scale for plausibility ( e.g. , from 1 to 7 ) . ( 1 ) a. [ The senior senator regretted the decision ] had ever been made public . b. [ The senior senator regretted the reporter ] had ever seen the report . The majority of research has focussed on investigating the effect of rated plausibility for verb-object combinations in human sentence processing ( Garnsey et al . , 1997 ; Pickering and Traxler , 1998 ) . However , plausibility effects have also been observed for adjectivenoun combinations in a head-modifier relationship . Murphy ( 1990 ) has shown that typical adjectivenoun phrases ( e.g. , salty olives ) are easier to interpret in comparison to atypical ones ( e.g. , sweet olives ) . Murphy provides a schema-based explanation for this finding by postulating that in typical adjective-noun phrases , the adjective modifies part of the noun 's schema and consequently it is understood more quickly , whereas in atypical combinations , the adjective modifies non-schematic aspects of the noun , which leads to interpretation difficulties . Smadja ( 1991 ) argues that the reason people prefer strong tea to powerful tea and powerful car to strong car is neither purely syntactic nor purely semantic , but rather lexical . A similar argument is put forward by Cruse ( 1986 ) , who observes that the adjective spotless collocates well with the noun kitchen , relatively worse with the noun complexion and not all with the noun taste . 
According to Cruse , words like spotless have idiosyncratic collocational restrictions : differences in the degree of acceptability of the adjective and its collocates do not seem to depend on the meaning of the individual words . This paper explored the determinants of linguistic plausibility , a concept that is potentially relevant for lexical choice in natural language generation systems . Adjective-noun plausibility served as a test bed for a number of corpus-based models of linguistic plausibility . Plausibility judgements were obtained from human subjects for 90 randomly selected adjective-noun pairs . The ratings revealed a clear effect of familiarity of the adjective-noun pair ( operationalised by corpus co-occurrence frequency ) . In a correlation analysis we compared judged plausibility with the predictions of five corpus-based variables . The highest correlation was obtained with the co-occurrence frequency of the adjective-noun pair . Conditional probability , the log-likelihood ratio , and Resnik 's ( 1993 ) selectional association measure were also significantly correlated with plausibility ratings . The correlation with Resnik 's measure was negative , contrary to the predictions of his model . This points to a problem with his technique for estimating word class frequencies , which is aggravated by the collocational nature of noun-adjective combinations . Overall , the results confirm the strongly lexicalist and collocational nature of adjective-noun combinations . This fact could be exploited in a generation system by taking into account corpus co-occurrence counts for adjective-noun pairs ( which can be obtained straightforwardly ) during lexical choice . Future research has to identify how this approach can be generalised to unseen data .", "challenge": "Existing studies on the plausibility of texts focus on verb-object combinations while there could be some impacts from adjective-noun pairs as well.", "approach": "They perform correlational analysis to explore the determinants of adjective-noun plausibility against human judgements.", "outcome": "They show that co-occurrence frequency correlates play the biggest role as the determinans of adjective-noun plausibility."} +{"id": "2020.acl-main.669", "document": "Unsupervised relation extraction ( URE ) extracts relations between named entities from raw text without manually-labelled data and existing knowledge bases ( KBs ) . URE methods can be categorised into generative and discriminative approaches , which rely either on hand-crafted features or surface form . However , we demonstrate that by using only named entities to induce relation types , we can outperform existing methods on two popular datasets . We conduct a comparison and evaluation of our findings with other URE techniques , to ascertain the important features in URE . We conclude that entity types provide a strong inductive bias for URE . 1 Relation extraction ( RE ) extracts semantic relations between entities from plain text . For instance , \" Jon Robin Baitz head , born in Los Angeles tail ... \" expresses the relation /people / person / place of birth between the two head-tail entities . Extracted relations are then used for several downstream tasks such as information retrieval ( Corcoglioniti et al . , 2016 ) and knowledge base construction ( Al-Zaidy and Giles , 2018 ) . RE has been widely studied using fully supervised learning ( Nguyen and Grishman , 2015 ; Miwa and Bansal , 2016 ; Zhang et al . , 2017 Zhang et al . 
, , 2018 ) ) and distantly supervised approaches ( Mintz et al . , 2009 ; Riedel et al . , 2010 ; Lin et al . , 2016 ) . Unsupervised relation extraction ( URE ) methods have not been explored as much as fully or distantly supervised learning techniques . URE is promising , since it does not require manually annotated data nor human curated knowledge bases ( KBs ) , which are expensive to produce . Therefore , it can be applied to domains and languages where annotated data and KBs are not available . Moreover , URE can discover new relation types , since it is not restricted to specific relation types in the same way as fully and distantly supervised methods . One might argue that Open Information Extraction ( OpenIE ) can also discover new relations . However , OpenIE identifies relations based on textual surface information . Thus , similar relations with different textual forms may not be recognised . Unlike OpenIE , URE groups similar relations into clusters . Despite these advantages , there are only a few attempts tackling URE using machine learning ( ML ) ( Hasegawa et al . , 2004 ; Banko et al . , 2007 ; Yao et al . , 2011 ; Marcheggiani and Titov , 2016 ; Simon et al . , 2019 ) . Similarly to other unsupervised learning tasks , a challenge in URE is how to evaluate results . Recent approaches ( Yao et al . , 2011 ; Marcheggiani and Titov , 2016 ; Simon et al . , 2019 ) employ a widely used data generation setting in distantly supervised RE , i.e. , aligning a large amount of raw text against triplets in a curated KB . A standard metric score is computed by comparing the output relation clusters against the automatically annotated relations . In particular , the NYT-FB dataset ( Marcheggiani and Titov , 2016 ) which is used for evaluation , has been created by mapping relation triplets in Freebase ( Bollacker et al . , 2008 ) against plain text articles in the New York Times ( NYT ) corpus ( Sandhaus , 2008 ) . Standard clustering evaluation metrics for URE include B 3 ( Bagga and Baldwin , 1998 ) , V-measure ( Rosenberg and Hirschberg , 2007 ) , and ARI ( Hubert and Arabie , 1985 ) . Although the above mentioned experimental setting can be created automatically , there are three challenges to overcome . Firstly , the development and test sets are silver , i.e. , they include noisy labelled instances , since they are not human-curated . Secondly , the development and test sentences are part of the training set , i.e. , a transductive setting . It is thus unclear how well the existing models perform on unseen sentences . Finally , NYT-FB can be considered highly imbalanced , since only 2.1 % of the training sentences can be aligned with Freebase 's triplets . Due to the noisy nature of silver data ( NYT-FB ) , evaluation on silver data will not accurately reflect the system performance . We also need unseen data during testing to examine the system generalisation . To overcome these challenges , we will employ the test set of TACRED ( Zhang et al . , 2017 ) , a widely used manually annotated corpus . Regarding the imbalanced data , we will demonstrate that in fact around 60 % ( instead of 2.1 % ) of instances in the training set express relation types defined in Freebase . In this work , we present a simple URE approach relying only on entity types that can obtain improved performance compared to current methods . Specifically , given a sentence consisting of two entities and their corresponding entity types , e.g. 
, PERSON and LOCATION , we induce relations as the combination of entity types , e.g. , PERSON-LOCATION . It should be noted that we employ only entity types because their combinations form reasonably coarse relation types ( e.g. , PERSON-LOCATION covers /people / person / place of birth defined in Freebase ) . We further discuss our improved performance in \u00a7 3 . Our contributions are as follows : ( i ) We perform experiments on both automatically / manuallylabelled datasets , namely NYT-FB and TACRED , respectively . We show that two methods using only entity types can outperform the state-of-theart models including both feature-engineering and deep learning approaches . The surprising results raise questions about the current state of unsupervised relation extraction . ( ii ) For model design , we show that link predictor provides a good signal to train a URE model ( Fig 1 ) . We also illustrate that entity types are a strong inductive bias for URE ( Table 1 ) . We have shown the importance of entity types in URE . Our methods use only entity types , yet they yield higher performance than previous work on both NYT-FB and TACRED . We have investigated the current experimental setting , concluding that a strong inductive bias is required to train a relation extraction model without labelled data . URE remains challenging , which requires improved methods to deal with silver data . We also plan to use different types of labelled data , e.g. , domain specific data sets , to ascertain whether entity type information is more discriminative in sub-languages .", "challenge": "Unsupervised relation extraction has not been explored although it can be applied to languages and domains where annotated data or knowledge bases are not available.", "approach": "They propose an approach which only relies on entity types and performs a comparison with other unsupervised approaches to find important features.", "outcome": "They find that only using entity types can outperform existing methods on two datasets indicating that they provide a strong inductive bias."} +{"id": "D11-1081", "document": "Although discriminative training guarantees to improve statistical machine translation by incorporating a large amount of overlapping features , it is hard to scale up to large data due to decoding complexity . We propose a new algorithm to generate translation forest of training data in linear time with the help of word alignment . Our algorithm also alleviates the oracle selection problem by ensuring that a forest always contains derivations that exactly yield the reference translation . With millions of features trained on 519 K sentences in 0.03 second per sentence , our system achieves significant improvement by 0.84 BLEU over the baseline system on the NIST Chinese-English test sets . Discriminative model ( Och and Ney , 2002 ) can easily incorporate non-independent and overlapping features , and has been dominating the research field of statistical machine translation ( SMT ) in the last decade . Recent work have shown that SMT benefits a lot from exploiting large amount of features ( Liang et al . , 2006 ; Tillmann and Zhang , 2006 ; Watanabe et al . , 2007 ; Blunsom et al . , 2008 ; Chiang et al . , 2009 ) . However , the training of the large number of features was always restricted in fairly small data sets . Some systems limit the number of training examples , while others use short sentences to maintain efficiency . 
Overfitting problem often comes when training many features on a small data ( Watanabe et al . , 2007 ; Chiang et al . , 2009 ) . Obviously , using much more data can alleviate such problem . Furthermore , large data also enables us to globally train millions of sparse lexical features which offer accurate clues for SMT . Despite these advantages , to the best of our knowledge , no previous discriminative training paradigms scale up to use a large amount of training data . The main obstacle comes from the complexity of packed forests or n-best lists generation which requires to search through all possible translations of each training example , which is computationally prohibitive in practice for SMT . To make normalization efficient , contrastive estimation ( Smith and Eisner , 2005 ; Poon et al . , 2009 ) introduce neighborhood for unsupervised log-linear model , and has presented positive results in various tasks . Motivated by these work , we use a translation forest ( Section 3 ) which contains both \" reference \" derivations that potentially yield the reference translation and also neighboring \" non-reference \" derivations that fail to produce the reference translation.1 However , the complexity of generating this translation forest is up to O(n 6 ) , because we still need biparsing to create the reference derivations . Consequently , we propose a method to fast generate a subset of the forest . The key idea ( Section 4 ) is to initialize a reference derivation tree with maximum score by the help of word alignment , and then traverse the tree to generate the subset forest in linear time . Besides the efficiency improvement , such a forest allows us to train the model without resort- r 1 X \u21d2 \u27e8X 1 bei X 2 , X 1 was X 2 \u27e9 e 2 r 2 X \u21d2 \u27e8qiangshou bei X 1 , the gunman was X 1 \u27e9 e 3 r 3 X \u21d2 \u27e8jingfang X 1 , X 1 by the police\u27e9 e 4 r 4 X \u21d2 \u27e8jingfang X 1 , police X 1 \u27e9 e 5 r 5 X \u21d2 \u27e8qiangshou , the gunman\u27e9 e 6 r 6 X \u21d2 \u27e8jibi , shot dead\u27e9 Figure 1 : A translation forest which is the running example throughout this paper . The reference translation is \" the gunman was killed by the police \" . ( 1 ) Solid hyperedges denote a \" reference \" derivation tree t 1 which exactly yields the reference translation . ( 2 ) Replacing e 3 in t 1 with e 4 results a competing non-reference derivation t 2 , which fails to swap the order of X 3,4 . ( 3 ) Removing e 1 and e 5 in t 1 and adding e 2 leads to another reference derivation t 3 . Generally , this is done by deleting a node X 0,1 . ing to constructing the oracle reference ( Liang et al . , 2006 ; Watanabe et al . , 2007 ; Chiang et al . , 2009 ) , which is non-trivial for SMT and needs to be determined experimentally . Given such forests , we globally learn a log-linear model using stochastic gradient descend ( Section 5 ) . Overall , both the generation of forests and the training algorithm are scalable , enabling us to train millions of features on large-scale data . To show the effect of our framework , we globally train millions of word level context features motivated by word sense disambiguation ( Chan et al . , 2007 ) together with the features used in traditional SMT system ( Section 6 ) . Training on 519 K sentence pairs in 0.03 seconds per sentence , we achieve significantly improvement over the traditional pipeline by 0.84 BLEU . 
We have presented a fast generation algorithm for translation forest which contains both reference derivations and neighboring non-reference derivations for large-scale SMT discriminative training . We have achieved significantly improvement of 0.84 BLEU by incorporate 13.9 M feature trained on 519 K data in 0.03 second per sentence . In this paper , we define the forest based on competing derivations which only differ in one rule . There may be better classes of forest that can produce a better performance . It 's interesting to modify the definition of forest , and use more local operators to increase the size of forest . Furthermore , since the generation of forests is quite general , it 's straight to apply our forest on other learning algorithms . Finally , we hope to exploit more features such as reordering features and syntactic features so as to further improve the performance .", "challenge": "Applying discriminative models to statistical machine translation systems is expensive, however; reducing training set size leads to overfitting.", "approach": "They propose a faster algorithm for translation forest generation which runs in linear time.", "outcome": "The proposed forest generation method can process a sentence in 0.03 seconds using a large feature space and also significantly outperforms the existing pipeline system."} +{"id": "N16-1046", "document": "Neural machine translation ( NMT ) with recurrent neural networks , has proven to be an effective technique for end-to-end machine translation . However , in spite of its promising advances over traditional translation methods , it typically suffers from an issue of unbalanced outputs , that arise from both the nature of recurrent neural networks themselves , and the challenges inherent in machine translation . To overcome this issue , we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japaneseto-English and Chinese-to-English translation tasks . Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system . With the help of an ensemble technique , this new end-to-end NMT approach finally outperformed phrasebased and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points . Recurrent neural network ( RNN ) has achieved great successes on several structured prediction tasks ( Graves , 2013 ; Watanabe and Sumita , 2015 ; Dyer et al . , 2015 ) , in which RNNs are required to make a sequence of dependent predictions . One of its advantages is that an unbounded history is available to enrich the context for the prediction at the current time-step . Despite its successes , recently , ( Liu et al . , 2016 ) pointed out that the RNN suffers from a fundamental issue of generating unbalanced outputs : that is to say the suffixes of its outputs are typically worse than the prefixes . This is due to the fact that later predictions directly depend on the accuracy of previous predictions . They empirically demonstrated this issue on two simple sequence-to-sequence learning tasks : machine transliteration and grapheme-to-phoneme conversion . On the more general sequence-to-sequence learning task of machine translation ( MT ) , neural machine translation ( NMT ) based on RNNs has recently become an active research topic ( Sutskever et al . , 2014 ; Bahdanau et al . , 2014 ) . Compared to those two simple tasks , MT involves in much larger vocabulary and frequent reordering between input and output sequences . 
This makes the prediction at each time-step far more challenging . In addition , sequences in MT are much longer , with averaged length of 36.7 being about 5 times longer than that in grapheme-to-phoneme conversion . Therefore , we believe that the history is more likely to contain incorrect predictions and the issue of unbalanced outputs may be more serious . This hypothesis is supported later ( see Table 1 in \u00a7 4.1 ) , by an analysis that shows the quality of the prefixes of translation hypotheses is much higher than that of the suffixes . To address this issue for NMT , in this paper we extend the agreement model proposed in ( Liu et al . , 2016 ) to the task of machine translation . Its key idea is to encourage the agreement between a pair of target-directional ( left-to-right and right-to-left ) NMT models in order to produce more balanced translations and thus improve the overall translation quality . Our contribution is two-fold : \u2022 We introduce a simple and general method to address the issue of unbalanced outputs for NMT ( \u00a7 3 ) . This method is robust without any extra hyperparameters to tune and is easy to implement . In addition , it is general enough to be applied on top of any of the existing RNN translation models , although it was implemented on top of the model in ( Bahdanau et al . , 2014 ) in this paper . \u2022 We provide an empirical evaluation of the technique on large scale Japanese-to-English and Chinese-to-English translation tasks . The results show our model can generate more balanced translation results , and achieves substantial improvements ( of up to 1.4 BLEU points ) over the strongest NMT baseline ( \u00a7 4 ) . With the help of an ensemble technique , our new end-to-end NMT gains up to 5.6 BLEU points over phrase-based and hierarchical phrasebased Moses ( Koehn et al . , 2007 ) systems.1 2 Overview of Neural Machine Translation Suppose x = x 1 , x 2 , \u2022 \u2022 \u2022 , x m denotes a source sentence , y = y 1 , y 2 , \u2022 \u2022 \u2022 , y n denotes a target sen- tence . In addition , let x < t = x 1 , x 2 , \u2022 \u2022 \u2022 , x t-1 denote a prefix of x. Neural Machine Translation ( NMT ) directly maps a source sentence into a target within a probabilistic framework . Formally , it defines a conditional probability over a pair of sequences x and y via a recurrent neural network as follows : EQUATION where \u03b8 is the set of model parameters ; h t denotes a hidden state ( i.e. a vector ) of y at timestep t ; g is a transformation function from a hidden state to a vector with dimension of the target-side vocabulary size ; softmax is the softmax function , and [ i ] denotes the i th component in a vector . 2 Furthermore , h t = f ( h t-1 , c(x , y < t ) ) is defined by a recurrent function over both the previous hidden state h t-1 and the context c(x , y < t ) .3 Note that both h t and c(x , y < t ) have dimension d for all t. In this paper , we develop our model on top of the neural machine translation approach of ( Bahdanau et al . , 2014 ) , and we refer the reader this paper for a complete description of the model , for example , the definitons of f and c. The proposed method could just as easily been implemented on top of any other RNN models such as that in ( Sutskever et al . , 2014 ) . In this paper , we investigate the issue of unbalanced outputs suffered by recurrent neural networks , and empirically show its existence in the context of machine translation . 
To address this issue , we propose an easy to implement agreement model that extends the method of ( Liu et al . , 2016 ) from simple sequence-to-sequence learning tasks to machine translation . On two challenging JP-EN and CH-EN translation tasks , our approach was empirically shown to be effective in addressing the issue ; by generating balanced outputs , it was able to consistently outperform a respectable NMT baseline on all test sets , delivering gains of up to 1.4 BLEU points . To put these results in the broader context of machine translation research , our approach ( even without special handling of unknown words ) achieved gains of up to 5.6 BLEU points over strong phrase-based and hierarchical phrase-based Moses baselines , with the help of an ensemble technique .", "challenge": "Recurrent neural network-based machine translation models suffer from unbalanced outputs, meaning quality degrades as they generate outputs.", "approach": "They propose to apply a generic hyperparameter-free agreement model for neural machine translation which can be used in any neural models.", "outcome": "The neural machine translation models with the agreement model generate better balanced outputs and significantly outperform baseline models."} +{"id": "H05-1020", "document": "We approached the problem as learning how to order documents by estimated relevance with respect to a user query . Our support vector machines based classifier learns from the relevance judgments available with the standard test collections and generalizes to new , previously unseen queries . For this , we have designed a representation scheme , which is based on the discrete representation of the local ( lw ) and global ( gw ) weighting functions , thus is capable of reproducing and enhancing the properties of such popular ranking functions as tf.idf , BM25 or those based on language models . Our tests with the standard test collections have demonstrated the capability of our approach to achieve the performance of the best known scoring functions solely from the labeled examples and without taking advantage of knowing those functions or their important properties or parameters . Our work is motivated by the objective to bring closer numerous achievements in the domains of machine learning and classification to the classical task of ad-hoc information retrieval ( IR ) , which is ordering documents by the estimated degree of relevance to a given query . Although used with striking success for text categorization , classification-based approaches ( e.g. those based on support vector machines , Joachims , 2001 ) have been relatively abandoned when trying to improve ad hoc retrieval in favor of empirical ( e.g. vector space , Salton & McGill , 1983 ) or generative ( e.g. language models ; Zhai & Lafferty 2001 ; Song & Croft ; 1999 ) , which produce a ranking function that gives each document a score , rather than trying to learn a classifier that would help to discriminate between relevant and irrelevant documents and order them accordingly . A generative model needs to make assumptions that the query and document words are sampled from the same underlying distributions and that the distributions have certain forms , which entail specific smoothing techniques ( e.g. popular Dirichlet-prior ) . 
A discriminative ( classifier-based ) model , on the other side , does not need to make any assumptions about the forms of the underlying distributions or the criteria for the relevance but instead , learns to predict to which class a certain pattern ( document ) belongs to based on the labeled training examples . Thus , an important advantage of a discriminative approach for the information retrieval task , is its ability to explicitly utilize the relevance judgments existing with standard test collections in order to train the IR algorithms and possibly enhance retrieval accuracy for the new ( unseen ) queries . Cohen , Shapire and Singer ( 1999 ) noted the differences between ordering and classification and presented a two-stage model to learn ordering . The first stage learns a classifier for preference relations between objects using any suitable learning mechanism ( e.g. support vector machines ; Vapnik , 1998 ) . The second stage converts preference relations into a rank order . Although the conversion may be NP complete in a general case , they presented efficient approximations . We limited our first study reported here to linear classifiers , in which conversion can be performed by simple ordering according to the score of each document . However , approaching the problem as \" learning how to order things \" allowed us to design our sampling and training mechanisms in a novel and , we believe , more powerful way . Our classifier learns how to compare every pair of documents with respect to a given query , based on the relevance indicating features that the documents may have . As it is commonly done in information retrieval , the features are derived from the word overlap between the query and documents . According to Nallapati ( 2004 ) , the earliest formulation of the classic IR problem as a classification ( discrimination ) problem was suggested by Robertson and Sparck Jones ( 1976 ) , however performed well only when the relevance judgments were available for the same query but not generalizing well to new queries . Fuhr and Buckley ( 1991 ) used polynomial regression to estimate the coefficients in a linear scoring function combining such well-known features as a weighted term frequency , document length and query length . They tested their \" description-oriented \" approach on the standard small-scale collections ( Cranfield , NPL , INSPEC , CISI , CACM ) to achieve the relative change in the average precision ranging from -17 % to + 33 % depending on the collection tested and the implementation parameters . Gey ( 1994 ) applied logistic regression in a similar setting with the following results : Cranfield +12 % , CACM +7.9 % , CISI -4.4 % , however he did not test them on new ( unseen by the algorithm ) queries , hypothesizing that splitting documents into training and testing collections would not be possible since \" a large number of queries is necessary in order to train for a decent logistic regression approach to document retrieval . \" Instead , he applied a regression trained on Cranfield to CISI collection but with a negative effect . Recently , the approaches based on learning have reported several important breakthroughs . Fan et al . ( 2004 ) applied genetic programming in order to learn how to combine various terms into the optimal ranking function that outperformed the popular Okapi formula on robust retrieval test collection . 
Nallapati ( 2004 ) made a strong argument in favor of discriminative models and trained an SVM-based classifier to combine 6 different components ( terms ) from the popular ranking functions ( such as tf.idf and language models ) to achieve better than the language model performance in 2 out of 16 test cases ( figure 3 in Nallapati , 2004 ) , not statistically distinguishable in 8 cases and only 80 % of the best performance in 6 cases . There have been studies using past relevance judgements to optimize retrieval . For example , Joachims ( 2002 ) applied Support Vector Machines to learn linear ranking function from user click-throughs while interfacing with a search engine . In this study , we have developed a representation scheme , which is based on the discretization of the global ( corpus statistics ) and local ( document statistics ) weighting of term overlaps between queries and documents . We have empirically shown that this representation is flexible enough to learn the properties of the popular ranking functions : tf.idf , BM25 and the language models . The major difference of our work from Fan et al . ( 2004 ) or Nallapati ( 2004 ) or works on fusion ( e.g. Vogt & Cottrell , 1999 ) is that we did not try to combine several known ranking functions ( or their separate terms ) into one , but rather we learn the weighting functions directly through discretization . Discretization allows representing a continuous function by a set of values at certain points . These values are learned by a machine learning technique to optimize certain criteria , e.g. average precision . Another important motivation behind using discretization was to design a representation with high dimensionality of features in order to combine our representation scheme with Support Vector Machines ( SVM ) ( Vapnik , 1998 ) , which are known to work well with a large number of features . SVM contains a large class of neural nets , radial margin separation ( RBF ) nets , and polynomial classifiers as special cases . They have been delivering superior performance in classification tasks in general domains , e.g. in face recognition ( Hearst , 1998 ) , and in text categorization ( Joachims , 2001 ) . Another important distinction of this work from the prior research is that we train our classifier not to predict the absolute relevance of a document d with respect to a query q , but rather to predict which of the two documents d1 , d2 is more relevant to the query q. The motivation for this distinction was that all the popular evaluation metrics in information retrieval ( e.g. average precision ) are based on document ranking rather than classification accuracy . This affected our specially designed sampling procedure which we empirically discovered to be crucial for successful learning . We have also empirically established that our combination of the representation scheme , learning mechanism and sampling allows learning from the past relevance judgments in order to successfully generalize to the new ( unseen ) queries . When the representation was created without any knowledge of the top ranking functions and their parameters , our approach reached the known top performance solely through the learning process . When our representation was taking advantage of functions that are known to perform well and their parameters , the resulting combination was able to slightly exceed the top performance on large test collections . 
The next section formalizes our Discretization Based Learning ( DBL ) approach to Information Retrieval , followed by empirical results and conclusions . We explored learning how to rank documents with respect to a given query using linear Support Vector Machines and discretization-based representation . Our approach represents a family of discriminative approaches , currently studied much less than heuristic ( tf.idf , bm25 ) or generative approaches ( language models ) . Our experiments indicate that learning from relevant judgments available with the standard test collections and generalizing to new queries is not only feasible but can be a source of improvement . When tested with a popular standard collection , our approach achieved the performance of the best well-known techniques ( BM25 and language models ) , which have been developed as a result of extensive past experiments and elaborate theoretical modeling . When combined with the best performing ranking functions , our approach added a small ( 2 - 3 % ) , but statistically significant , improvement . Although practical significance of this study may be limited at the moment since it does not demonstrate a dramatic increase in retrieval performance in large test collections , we believe our findings have important theoretical contributions since they indicate that the power of discriminative approach is comparable to the best known analytical or heuristic apporaches . This work also lays the foundation for extending the discriminative approach to \" richer \" representations , such as those using word n-grams , grammatical relations between words , and the structure of documents . Our results also indicate that gw.lw family , which includes practically all the popular \" bag of words \" ranking formulas such as tf.idf , BM25 or language models , has almost reached its upper limit and other classes of representations and ranking formulas need to be explored in order to accomplish significant performance break-troughs . Of course , using only few test cases ( topics sets and collections ) is a limitation of this current study , which we are going to address in our future research . We view our approach as a complement , rather than competitive , to the analytical approaches such as language models . Our approach can be also used as an explorative tool in order to identify important relevance-indicating features , which can be later modeled analytically . We believe that our work and the ones referred in this paper may bring many of the achievements made in a more general area of classification and machine learning closer to the task of rank ordered information retrieval , thus making retrieval engines more helpful in reducing the information overload and meeting people 's needs .", "challenge": "Even with the recent successes of classification models, their applications to information retrieval have been abandoned.", "approach": "They propose to use a machine learning-based discriminative model to obtain richer features further can be used for information retrieval.", "outcome": "They show that the proposed approach can reach the performance of well-studied heuristics such as TF-IDF and generalizes well to unseen queries."} +{"id": "D18-1204", "document": "Conventional solutions to automatic related work summarization rely heavily on humanengineered features . 
In this paper , we develop a neural data-driven summarizer by leveraging the seq2seq paradigm , in which a joint context-driven attention mechanism is proposed to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously . Our motivation is to maintain the topic coherency between a related work section and its target document , where both the textual and graphic contexts play a big role in characterizing the relationship among scientific publications accurately . Experimental results on a large dataset show that our approach achieves a considerable improvement over a typical seq2seq summarizer and five classical summarization baselines . In scientific fields , scholars need to contextualize their contribution to help readers acquire an understanding of their research papers . For this purpose , the related work section of an article serves as a pivot to connect prior domain knowledge , in which the innovation and superiority of current work are displayed by a comparison with previous studies . While citation prediction can assist in drafting a reference collection ( Nallapati et al . , 2008 ) , consuming all these papers is still a laborious job , where authors must read every source document carefully and locate the most relevant content cautiously . As a solution in saving authors ' efforts , automatic related work summarization is essentially a topic-biased multi-document problem ( Cong and Kan , 2010 ) , which relies heavily on human-engineered features to retrieve snippets from the references . Most recently , neural networks enable a data-driven sequence-to-sequence ( seq2seq ) architecture for natural language generation ( Bahdanau et al . , 2014 , 2016 ) , where an encoder reads a sequence of words / sentences into a context vector , from which a decoder yields a sequence of specific outputs . Nonetheless , compared to scenarios like machine translation with an end-to-end nature , aligning a related work section to its source documents is far more challenging . To address the summarization alignment , former studies try to apply an attention mechanism to measure the saliency / novelty of each candidate word / sentence ( Tan et al . , 2017 ) , with the aim of locating the most representative content to retain primary coverage . However , toward summarizing a related work section , authors should be more creative when organizing text streams from the reference collection , where the selected content ought to highlight the topic bias of current work , rather than retell each reference in a compressed but balanced fashion . This motivates us to introduce the contextual relevance and characterize the relationship among scientific publications accurately . Generally speaking , for a pair of documents , a larger lexical overlap often implies a higher similarity in their research backgrounds . Yet such a hypothesis is not always true when sampling content from multiple relevant topics . Take \" DSSM \" as an example : from the viewpoint of abstract similarity , those references investigating \" Information Retrieval \" , \" Latent Semantic Model \" or \" Clickthrough Data Mining \" could be of more importance in correlation and should be greatly sampled for the related work section . But in reality , this article spends a somewhat larger chunk of text ( about 58 % ) to elaborate \" Deep Learning \" during the literature review , which makes it quite difficult for machines to grasp the contextual relevance therein .
In addition , other situations like emerging new concepts also suffer from terminology variation or paraphrasing to varying degrees . In this study , we utilize a heterogeneous bibliography graph to embody the relationship within a scalable scholarly database . Over the recent past , there has been a surge of interest in exploiting diverse relations to analyze bibliometrics , ranging from literature recommendation ( Yu et al . , 2015 ) to topic evolvement ( Jensen et al . , 2016 ) . In a graphical sense , interconnected papers transfer the credit among each other directly / indirectly through various patterns , such as paper citation , author collaboration , keyword association and releasing on a series of venues , which constitutes the graphic context for outlining concerned topics . Unfortunately , a variety of edge types may pollute the information inquiry , where a slice of edges are not so important as the others on sampling content . Meanwhile , most existing solutions in mining heterogeneous graphs depend on human supervision , e.g. , hyperedge ( Bu et al . , 2010 ) and metapath ( Swami et al . , 2017 ) . This is usually not easy to access due to the complexity of graph schemas . Our contribution is threefold : First , we explore the edge-type usefulness distribution ( EUD ) on a heterogeneous bibliography graph , which enables the relationship discovery ( between any pair of papers ) for sampling the interested information . Second , we develop a novel seq2seq summarizer for the automatic related work summarization , where a joint context-driven attention mechanism is proposed to measure the contextual relevance within both textual and graphic contexts . Third , we conduct experiments on 8,080 papers with native related work sections , and experimental results show that our approach outperforms a typical seq2seq summarizer and five classical summarization baselines significantly . In this paper , we highlight the contextual relevance for the automatic related work summarization , and analyze the graphic context to characterize the relationship among scientific publications accurately . We develop a neural data-driven summarizer by leveraging the seq2seq paradigm , where a joint context-driven attention mechanism is proposed to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously . Extensive experiments demonstrate the validity of the proposed attention mechanism , and the superiority of our approach over six representative summarization baselines . In future work , an appealing direction is to organize the selected sentences in a logical fashion , e.g. , by leveraging a topic hierarchy tree to determine the arrangement of the related work section ( Cong and Kan , 2010 ) . We also would like to take the citation sentences of each reference into consideration , which is another concise and universal data source for scientific summarization ( Chen and Hai , 2016 ; Cohan and Goharian , 2017 ) .
At the end of this paper , we believe that extractive methods are by no means the final solutions for literature review generation due to plagiarism concerns , and we are going to put forward a fully abstractive version in further studies .", "challenge": "Existing topic-based multi-document approaches on automatic related work summarization rely heavily on human-engineered features to retrieve snippets from the references.", "approach": "They propose a neural data-driven seq2seq summarizer with a joint context-driven attention mechanism which measures contextual relevances within full texts and a heterogeneous bibliography graph.", "outcome": "The proposed model outperforms a typical seq2seq and five classical baseline models on experiments with 8080 papers with native related work sections."} {"id": "2022.acl-long.286", "document": "Obtaining human-like performance in NLP is often argued to require compositional generalisation . Whether neural networks exhibit this ability is usually studied by training models on highly compositional synthetic data . However , compositionality in natural language is much more complex than the rigid , arithmetic-like version such data adheres to , and artificial compositionality tests thus do not allow us to determine how neural models deal with more realistic forms of compositionality . In this work , we re-instantiate three compositionality tests from the literature and reformulate them for neural machine translation ( NMT ) . Our results highlight that : i ) unfavourably , models trained on more data are more compositional ; ii ) models are sometimes less compositional than expected , but sometimes more , exemplifying that different levels of compositionality are required , and models are not always able to modulate between them correctly ; iii ) some of the non-compositional behaviours are mistakes , whereas others reflect the natural variation in data . Apart from an empirical study , our work is a call to action : we should rethink the evaluation of compositionality in neural networks and develop benchmarks using real data to evaluate compositionality on natural language , where composing meaning is not as straightforward as doing the math . Although the successes of deep neural networks in natural language processing ( NLP ) are astounding and undeniable , they are still regularly criticised for lacking the powerful generalisation capacities that characterise human intelligence . A frequently mentioned concept in such critiques is compositionality : the ability to build up the meaning of a complex expression by combining the meanings of its parts ( e.g. Partee , 1984 ) . Compositionality is assumed to play an essential role in how humans understand language , but whether neural networks also exhibit this property has since long been a topic of vivid debate ( e.g. Fodor and Pylyshyn , 1988 ; Smolensky , 1990 ; Marcus , 2003 ; Nefdt , 2020 ) . Studies about the compositional abilities of neural networks consider almost exclusively models trained on synthetic datasets , in which compositionality can be ensured and isolated ( e.g. Lake and Baroni , 2018 ; Hupkes et al . , 2020 ) . In such tests , the interpretation of expressions is computed completely locally : every subpart is evaluated independently , without taking into account any external context , and the meaning of the whole expression is then formed by combining the meanings of its parts in a bottom-up fashion .
This protocol matches the type of compositionality observed in arithmetic : the meaning of ( 3 + 5 ) is always 8 , independent of the context it occurs in . However , as exemplified by the sub-par performance of symbolic models that allow only strict , local protocols , compositionality in natural domains is far more intricate than this rigid , arithmetic-like variant of compositionality . Natural language seems very compositional , but at the same time , it is riddled with cases that are difficult to interpret with a strictly local interpretation of compositionality . Sometimes , the meaning of an expression does not derive from its parts ( e.g. for idioms ) , but the parts themselves are used compositionally in other contexts . In other cases , the meaning of an expression does depend on its parts in a compositional way , but arriving at this meaning requires a more global approach because the meanings of the parts need to be disambiguated by information from elsewhere . For instance , consider the meaning of homonyms ( \" these dates are perfect for our dish / wedding \" ) , potentially idiomatic expressions ( \" the child kicked the bucket off the pavement \" ) , or scope ambiguities ( \" every human likes a cat \" ) . This paradoxical tension between local and global forms of compositionality inspired many debates on the compositionality of natural language . Likewise , it impacts the evaluation of compositionality in NLP models . On the one hand , local compositionality seems necessary for robust and reliable generalisation . Yet , at the same time , global compositionality is needed to appropriately address the full complexity of language , which makes evaluating compositionality of state-of-the-art models ' in the wild ' a complicated endeavour . In this work , we face this challenge head-on . We concentrate on the domain of neural machine translation ( NMT ) , which is paradigmatically close to the tasks typically considered for compositionality tests , where the target represents the ' meaning ' of the input . Furthermore , MT is an important domain of NLP , for which compositional generalisation is important to produce more robust translations and train adequate models for low-resource languages ( see , e.g. Chaabouni et al . , 2021 ) . As an added advantage , compositionality is traditionally well studied and motivated for MT ( Rosetta , 1994 ; Janssen and Partee , 1997 ; Janssen , 1998 ) . We reformulate three theoretically grounded tests from Hupkes et al . ( 2020 ) : systematicity , substitutivity and overgeneralisation . Since accuracy , commonly used in artificial compositionality tests , is not a suitable evaluation metric for MT , we base our evaluations on the extent to which models behave consistently , rather than correctly . In our tests for systematicity and substitutivity , we consider whether processing is local ; in our overgeneralisation test , we consider how models treat idioms that are assumed to require global processing . Our results indicate that models often do not behave compositionally under the local interpretation , but exhibit behaviour that is too local in other cases . In other words , models have the ability to process phrases both locally and globally but do not always correctly modulate between them .
We further show that some inconsistencies reflect variation in natural language , whereas others are true compositional mistakes , exemplifying the need for both local and global compositionality as well as illustrating the need for tests that encompass them both . With our study , we contribute to ongoing questions about the compositional abilities of neural networks , and we provide nuance to the nature of this question when natural language is concerned : how local should the compositionality of models for natural language actually be ? Aside from an empirical study , our work is also a call to action : we should rethink the evaluation of compositionality in neural networks and develop benchmarks using real data to evaluate compositionality on natural language , where composing meaning is not as straightforward as doing the math . In conclusion , with this work , we contribute to the question of how compositional models trained on natural data are , and we argue that MT is a suitable and relevant testing ground to ask this question . Focusing on the balance between local and global forms of compositionality , we formulate three different compositionality tests and discuss the issues and considerations that come up when considering compositionality in the context of natural data . Our tests indicate that models show both local and global processing , but not necessarily for the right samples . Furthermore , they underscore the difficulty of separating helpful and harmful types of non-compositionality , stressing the need to rethink the evaluation of compositionality using natural language , where composing meaning is not as straightforward as doing the math .", "challenge": "Existing studies on compositional generalization of neural models use synthetic data which limits the scope of analysis only to local expressions without global contexts considered.", "approach": "They reformulate three theoretically grounded compositionality tests composed of systematicity, substitutivity for local and overgeneralisation for global on how models treat idioms.", "outcome": "They find that models trained on more data are more compositional but less compositional than expected and make non-compositional behaviours by mistake."} {"id": "D14-1007", "document": "This paper proposes a Markov Decision Process and reinforcement learning based approach for domain selection in a multi-domain Spoken Dialogue System built on a distributed architecture . In the proposed framework , the domain selection problem is treated as sequential planning instead of classification , such that confirmation and clarification interaction mechanisms are supported . In addition , it is shown that by using a model parameter tying trick , the extensibility of the system can be preserved , where dialogue components in new domains can be easily plugged in , without re-training the domain selection policy . The experimental results based on human subjects suggest that the proposed model marginally outperforms a non-trivial baseline . Due to growing demand for natural human-machine interaction , over the last decade Spoken Dialogue Systems ( SDS ) have been increasingly deployed in various commercial applications ranging from traditional call centre automation ( e.g. AT&T \" Let's Go ! \" bus information system ( Williams et al . , 2010 ) ) to mobile personal assistants and knowledge navigators ( e.g. Apple 's Siri , Google Now , Microsoft Cortana , etc .
) or voice interaction for smart household appliance control ( e.g. Samsung Evolution Kit for Smart TVs ) . Furthermore , the latest progress in open-vocabulary Automatic Speech Recognition ( ASR ) is pushing SDS from traditional single-domain information systems towards more complex multi-domain speech applications , of which typical examples are those voice assistant mobile applications . Recent advances in SDS have shown that statistical approaches to dialogue management can result in marginal improvement in both the naturalness and the task success rate for domain-specific dialogues ( Lemon and Pietquin , 2012 ; Young et al . , 2013 ) . State-of-the-art statistical SDS treat the dialogue problem as a sequential decision making process , and employ established planning models , such as Markov Decision Processes ( MDPs ) ( Singh et al . , 2002 ) or Partially Observable Markov Decision Processes ( POMDPs ) ( Thomson and Young , 2010 ; Young et al . , 2010 ; Williams and Young , 2007 ) , in conjunction with reinforcement learning techniques ( Jur\u010d\u00ed\u010dek et al . , 2011 ; Jur\u010d\u00ed\u010dek et al . , 2012 ; Ga\u0161i\u0107 et al . , 2013a ) to seek optimal dialogue policies that maximise long-term expected ( discounted ) rewards and are robust to ASR errors . However , to the best of our knowledge , most of the existing multi-domain SDS in public use are rule-based ( e.g. ( Gruber et al . , 2012 ; Mirkovic and Cavedon , 2006 ) ) . The application of statistical models in multi-domain dialogue systems is still preliminary . Komatani et al . ( 2006 ) and Nakano et al . ( 2011 ) utilised a distributed architecture ( Lin et al . , 1999 ) to integrate expert dialogue systems in different domains into a unified framework , where a central controller trained as a data-driven classifier selects a domain expert at each turn to address the user 's query . Alternatively , Hakkani-T\u00fcr et al . ( 2012 ) adopted the well-known Information State mechanism ( Traum and Larsson , 2003 ) to construct a multi-domain SDS and proposed a discriminative classification model for more accurate state updates . More recently , Ga\u0161i\u0107 et al . ( 2013b ) proposed that by a simple expansion of the kernel function in Gaussian Process ( GP ) reinforcement learning ( Engel et al . , 2005 ; Ga\u0161i\u0107 et al . , 2013a ) , one can adapt pre-trained dialogue policies to handle unseen slots for SDS in extended domains . In this paper , we use a voice assistant application ( similar to Apple 's Siri but in Chinese ) as an example to demonstrate a novel MDP-based approach for central interaction management in a complex multi-domain dialogue system . Figure 1 : The distributed architecture of the voice assistant system ( a simplified illustration ) . The voice assistant employs a distributed architecture similar to ( Lin et al . , 1999 ; Komatani et al . , 2006 ; Nakano et al . , 2011 ) , and handles mixed interactions of multi-turn dialogues across different domains and single-turn queries powered by a collection of information access services ( such as web search , Question Answering ( QA ) , etc . ) . In our system , the dialogues in each domain are managed by an individual domain expert SDS , and the single-turn services are used to handle those so-called out-of-domain requests .
We use featurised representations to summarise the current dialogue states in each domain ( see Section 3 for more details ) , and let the central controller ( the MDP model ) choose one of the following system actions at each turn : ( 1 ) addressing the user 's query based on a domain expert , ( 2 ) treating it as an out-of-domain request , ( 3 ) asking the user to confirm whether he / she wants to continue a domain expert 's dialogue or to switch to out-of-domain services , and ( 4 ) clarifying the user 's intention between two domains . The Gaussian Process Temporal Difference ( GPTD ) algorithm ( Engel et al . , 2005 ; Ga\u0161i\u0107 et al . , 2013a ) is adopted here for policy optimisation based on human subjects , where a parameter tying trick is applied to preserve the extensibility of the system , such that new domain experts ( dialogue systems ) can be flexibly plugged in without the need of re-training the central controller . Compared to the previous classification-based methods ( Komatani et al . , 2006 ; Nakano et al . , 2011 ) , the proposed approach not only has the advantage of action selection in consideration of long-term rewards , it can also yield more robust policies that allow clarifications and confirmations to mitigate ASR and Spoken Language Understanding ( SLU ) errors . Our human evaluation results show that the proposed system with a trained MDP policy achieves significantly better naturalness in domain switching tasks than a non-trivial baseline with a hand-crafted policy . The remainder of this paper is organised as follows . Section 2 defines the terminology used throughout the paper . Section 3 briefly overviews the distributed architecture of our system . The MDP model and the policy optimisation algorithm are introduced in Section 4 and Section 5 , respectively . After this , experimental settings and evaluation results are described in Section 6 . Finally , we discuss some possible improvements in Section 7 and conclude in Section 8 . In this paper , we introduce an MDP framework for learning domain selection policies in a complex multi-domain SDS . The standard problem formulation is modified with tied model parameters , so that the entire system is extensible and new domain experts can be easily integrated without re-training the policy . This expectation is confirmed by empirical experiments with human subjects , where the proposed system marginally beats a non-trivial baseline and demonstrates proper extensibility . Several possible improvements are discussed , which will be the central arc of our future research .", "challenge": "Although existing works show that statistical approaches to dialogue management tasks outperform rule-based counterparts, they are not used in real-world applications.", "approach": "They propose to use Markov Decision Process and reinforcement learning for domain selection in multi-domain spoken dialogue systems to achieve extensibility in domains.", "outcome": "The proposed system outperforms rule-based baseline systems with a hand-crafted policy on human-based evaluation of naturalness in domain switching tasks."} {"id": "D07-1034", "document": "In this paper , we address a unique problem in Chinese language processing and report on our study on extending a Chinese thesaurus with region-specific words , mostly from the financial domain , from various Chinese speech communities .
With the larger goal of automatically constructing a Pan-Chinese lexical resource , this work aims at taking an existing semantic classificatory structure as leverage and incorporating new words into it . In particular , it is important to see if the classification could accommodate new words from heterogeneous data sources , and whether simple similarity measures and clustering methods could cope with such variation . We use the cosine function for similarity and test it on automatically classifying 120 target words from four regions , using different datasets for the extraction of feature vectors . The automatic classification results were evaluated against human judgement , and the performance was encouraging , with accuracy reaching over 85 % in some cases . Thus while human judgement is not straightforward and it is difficult to create a Pan-Chinese lexicon manually , it is observed that combining simple clustering methods with the appropriate data sources appears to be a promising approach toward its automatic construction . Large-scale semantic lexicons are important resources for many natural language processing ( NLP ) tasks . For a significant world language such as Chinese , it is especially critical to capture the substantial regional variation as an important part of the lexical knowledge , which will be useful for many NLP applications , including natural language understanding , information retrieval , and machine translation . Existing Chinese lexical resources , however , are often based on language use in one particular region and thus lack the desired comprehensiveness . Toward this end , Tsou and Kwong ( 2006 ) proposed a comprehensive Pan-Chinese lexical resource , based on a large and unique synchronous Chinese corpus as an authentic source for lexical acquisition and analysis across various Chinese speech communities . To allow maximum versatility and portability , it is expected to document the core and universal substances of the language on the one hand , and also the more subtle variations found in different communities on the other . Different Chinese speech communities might share lexical items in the same form but with different meanings . For instance , the word \u5c45\u5c4b refers to general housing in Mainland China but specifically to housing under the Home Ownership Scheme in Hong Kong ; and while the word \u4f4f\u623f is similar to \u5c45\u5c4b to mean general housing in Mainland China , it is rarely seen in the Hong Kong context . Hence , the current study aims at taking an existing Chinese thesaurus , namely the Tongyici Cilin \u540c\u7fa9\u8a5e\u8a5e\u6797 , as leverage and extending it with lexical items specific to individual Chinese speech communities . In particular , the feasibility depends on the following issues : ( 1 ) Can lexical items from various Chinese speech communities , that is , from such heterogeneous sources , be classified as effectively with methods shown to work for clustering closely related words from presumably the same , or homogenous , source ? ( 2 ) Could existing semantic classificatory structures accommodate concepts and expressions specific to individual Chinese speech communities ? Measuring similarity will make sense only if the feature vectors of the two words under comparison are directly comparable . There is usually no problem if both words and their contextual features are from the same data source . 
Since Tongyici Cilin ( or simply Cilin hereafter ) is based on the vocabulary used in Mainland China , it is not clear how often these words will be found in data from other places , and even if they are found , how well the feature vectors extracted could reflect the expected usage or sense . Our hypothesis is that it will be more effective to classify new words from Mainland China with respect to Cilin categories , than to do the same on new words from regions outside Mainland China . Furthermore , if this hypothesis holds , one would need to consider separate mechanisms to cluster heterogeneous regionspecific words in the Pan-Chinese context . Thus in the current study we sampled 30 target words specific to each of Beijing , Hong Kong , Singapore , and Taipei , from the financial domain ; and used the cosine similarity function to classify them into one or more of the semantic categories in Cilin . The automatic classification results were compared with a simple baseline method , against human judgement as the gold standard . In general , an accuracy of up to 85 % could be reached with the top 15 candidates considered . It turns out that our hypothesis is supported by the Taipei test data , whereas the data heterogeneity effect is less obvious in Hong Kong and Singapore test data , though the effect on individual test items varies . In Section 2 , we will briefly review related work and highlight the innovations of the current study . In Sections 3 and 4 , we will describe the materials used and the experimental setup respectively . Results will be presented and discussed with future directions in Section 5 , followed by a conclusion in Section 6 . In this paper , we have reported our study on a unique problem in Chinese language processing , namely extending a Chinese thesaurus with new words from various Chinese speech communities , including Beijing , Hong Kong , Singapore and Taipei . The critical issues include whether the existing classificatory structure could accommodate concepts and expressions specific to various Chinese speech communities , and whether the difference in textual sources might pose difficulty in using conventional similarity measures for the automatic classification . Our experiments , using the cosine function to measure similarity and testing with various sources for extracting contextual vectors , suggest that the classification performance might depend on the compatibility between the words in the thesaurus and the sources from which the target words are drawn . Evaluated against human judgement , an accuracy of over 85 % was reached in some cases , which were much higher than the baseline and were very encouraging in general . 
While human judgement is not straightforward and it is difficult to create a Pan-Chinese lexicon manually , combining simple classification methods with the appropriate data sources seems to be a promising approach toward its automatic construction .", "challenge": "Although there are varieties in the Chinese language, current lexical resources based on one variant leave questions of compatibility to others when to be extended.", "approach": "They sample 30 finance-related words from several Chinese languages and evaluate if a classifier can match them to correct categories in well studied variant.", "outcome": "The evaluation with human judgements shows that classification accuracy reaches up to 85% and also finds that the performance depends on word sources."} +{"id": "D11-1013", "document": "This paper presents a domain-assisted approach to organize various aspects of a product into a hierarchy by integrating domain knowledge ( e.g. , the product specifications ) , as well as consumer reviews . Based on the derived hierarchy , we generate a hierarchical organization of consumer reviews on various product aspects and aggregate consumer opinions on these aspects . With such organization , user can easily grasp the overview of consumer reviews . Furthermore , we apply the hierarchy to the task of implicit aspect identification which aims to infer implicit aspects of the reviews that do not explicitly express those aspects but actually comment on them . The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach . With the rapidly expanding e-commerce , most retail Web sites encourage consumers to write reviews to express their opinions on various aspects of products . Huge collections of consumer reviews are now available on the Web . These reviews have become an important resource for both consumers and firms . Consumers commonly seek quality information from online consumer reviews prior to purchasing a product , while many firms use online reviews as an important resource in their product development , marketing , and consumer relationship management . However , the reviews are disorganized , leading to the difficulty in information navigation and knowledge acquisition . It is impractical for user to grasp the overview of consumer reviews and opinions on various aspects of a product from such enormous reviews . Among hundreds of product aspects , it is also inefficient for user to browse consumer reviews and opinions on a specific aspect . Thus , there is a compelling need to organize consumer reviews , so as to transform the reviews into a useful knowledge structure . Since the hierarchy can improve information representation and accessibility ( Cimiano , 2006 ) , we propose to organize the aspects of a product into a hierarchy and generate a hierarchical organization of consumer reviews accordingly . Towards automatically deriving an aspect hierarchy from the reviews , we could refer to traditional hierarchy generation methods in ontology learning , which first identify concepts from the text , then determine the parent-child relations between these concepts using either pattern-based or clusteringbased methods ( Murthy et al . , 2010 ) . However , pattern-based methods usually suffer from inconsistency of parent-child relationships among the concepts , while clustering-based methods often result in low accuracy . 
Thus , by directly utilizing these methods to generate an aspect hierarchy from consumer reviews , the resulting hierarchy is usually inaccurate , leading to unsatisfactory review organization . On the other hand , domain knowledge of products is now available on the Web . For example , there are more than 248,474 product specifications in the product selling Web site CNet.com ( Beckham , 2005 ) . These product specifications cover some product aspects and provide coarse-grained parent-child relations among these aspects . Such domain knowledge is useful to help organize the product aspects into a hierarchy . However , the initial hierarchy obtained from domain knowledge usually cannot fit the review data well . For example , the initial hierarchy is usually too coarse and may not cover the specific aspects commented in the reviews , while some aspects in the hierarchy may not be of interest to users in the reviews . Motivated by the above observations , we propose in this paper to organize the product aspects into a hierarchy by simultaneously exploiting the domain knowledge ( e.g. , the product specification ) and consumer reviews . With the derived aspect hierarchy , we generate a hierarchical organization of consumer reviews on various aspects and aggregate consumer opinions on these aspects . Figure 1 illustrates a sample of hierarchical review organization for the product \" iPhone 3G \" . With such organization , users can easily grasp the overview of product aspects as well as conveniently navigate the consumer reviews and opinions on any aspect . For example , users can find that 623 reviews , out of 9,245 reviews , are about the aspect \" price \" , with 241 positive and 382 negative reviews . Given a collection of consumer reviews on a specific product , we first automatically acquire an initial aspect hierarchy from domain knowledge and identify the aspects from the reviews . Based on the initial hierarchy , we develop a multi-criteria optimization approach to construct an aspect hierarchy to contain all the identified aspects . Our approach incrementally inserts the aspects into the initial hierarchy based on inter-aspect semantic distance , a metric used to measure the semantic relation among aspects . In order to derive reliable semantic distance , we propose to leverage external hierarchies , sampled from WordNet and Open Directory Project , to assist semantic distance learning . With the resultant aspect hierarchy , the consumer reviews are then organized to their corresponding aspect nodes in the hierarchy . We then perform sentiment classification to determine consumer opinions on these aspects . Furthermore , we apply the hierarchy to the task of implicit aspect identification . This task aims to infer implicit aspects of the reviews that do not explicitly express those aspects but actually comment on them . For example , the implicit aspect of the review \" It is so expensive \" is \" price . \" Most existing aspect identification approaches rely on the appearance of aspect terms , and thus are not able to handle the implicit aspect problem . Based on our aspect hierarchy , we can infer the implicit aspects by clustering the reviews into their corresponding aspect nodes in the hierarchy . We conduct experiments on 11 popular products in four domains . More details of the corpus are discussed in Section 4 . The experimental results demonstrate the effectiveness of our approach .
The main contributions of this work can be summarized as follows : 1 ) We propose to hierarchically organize consumer reviews according to an aspect hierarchy , so as to transform the reviews into a useful knowledge structure . 2 ) We develop a domain-assisted approach to generate an aspect hierarchy by integrating domain knowledge and consumer reviews . In order to derive reliable semantic distance between aspects , we propose to leverage external hierarchies to assist semantic distance learning . 3 ) We apply the aspect hierarchy to the task of implicit aspect identification , and achieve satisfactory performance . The rest of this paper is organized as follows . Our approach is elaborated in Section 2 and applied to implicit aspect identification in Section 3 . Section 4 presents the evaluations , while Section 5 reviews related work . Finally , Section 6 concludes this paper with future work . In this paper , we have developed a domain-assisted approach to generate a product aspect hierarchy by integrating domain knowledge and consumer reviews . Based on the derived hierarchy , we can generate a hierarchical organization of consumer reviews as well as consumer opinions on the aspects . With such organization , users can easily grasp the overview of consumer reviews , as well as seek consumer reviews and opinions on any specific aspect by navigating through the hierarchy . We have further applied the hierarchy to the task of implicit aspect identification . We have conducted evaluations on 11 different products in four domains . The experimental results have demonstrated the effectiveness of our approach . In the future , we will explore other linguistic features to learn the semantic distance between aspects , as well as apply our approach to other applications .", "challenge": "Product review data is not structured, hindering users from understanding its overview, and existing methods that build hierarchical structures suffer from inconsistency or low accuracy.", "approach": "They propose to organize product aspects and formalize them into a hierarchical structure by exploiting existing domain knowledge-bases such as WordNet to help users navigate.", "outcome": "The proposed method that converts reviews into hierarchical structures exhibits effectiveness in experiments with 11 popular products from four domains."} {"id": "P14-1006", "document": "We present a novel technique for learning semantic representations , which extends the distributional hypothesis to multilingual data and joint-space embeddings . Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences , while maintaining sufficient distance between those of dissimilar sentences . The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages . We extend our approach to learn semantic representations at the document level , too . We evaluate these models on two cross-lingual document classification tasks , outperforming the prior state of the art . Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data . Distributed representations of words provide the basis for many state-of-the-art approaches to various problems in natural language processing today .
Such word embeddings are naturally richer representations than those of symbolic or discrete models , and have been shown to be able to capture both syntactic and semantic information . Successful applications of such models include language modelling ( Bengio et al . , 2003 ) , paraphrase detection ( Erk and Pad\u00f3 , 2008 ) , and dialogue analysis ( Kalchbrenner and Blunsom , 2013 ) . Within a monolingual context , the distributional hypothesis ( Firth , 1957 ) forms the basis of most approaches for learning word representations . In this work , we extend this hypothesis to multilingual data and joint-space embeddings . We present a novel unsupervised technique for learning semantic representations that leverages parallel corpora and employs semantic transfer through compositional representations . Unlike most methods for learning word representations , which are restricted to a single language , our approach learns to represent meaning across languages in a shared multilingual semantic space . We present experiments on two corpora . First , we show that for cross-lingual document classification on the Reuters RCV1 / RCV2 corpora ( Lewis et al . , 2004 ) , we outperform the prior state of the art ( Klementiev et al . , 2012 ) . Second , we also present classification results on a massively multilingual corpus which we derive from the TED corpus ( Cettolo et al . , 2012 ) . The results on this task , in comparison with a number of strong baselines , further demonstrate the relevance of our approach and the success of our method in learning multilingual semantic representations over a wide range of languages . To summarize , we have presented a novel method for learning multilingual word embeddings using parallel data in conjunction with a multilingual objective function for compositional vector models . This approach extends the distributional hypothesis to multilingual joint-space representations . Coupled with very simple composition functions , vectors learned with this method outperform the state of the art on the task of cross-lingual document classification . Further experiments and analysis support our hypothesis that bilingual signals are a useful tool for learning distributed representations by enabling models to abstract away from mono-lingual surface realisations into a deeper semantic space .", "challenge": "Existing methods for learning word representations are restricted to a single language.", "approach": "They extend the distributional hypothesis to multilingual data and joint-space embeddings and use parallel data to obtain multilingual word representations.", "outcome": "Multilingual word vectors learned using the proposed model outperform existing methods on cross-lingual document classification tasks without word-level alignment or syntactic information in diverse languages."} +{"id": "D11-1018", "document": "When translating among languages that differ substantially in word order , machine translation ( MT ) systems benefit from syntactic preordering-an approach that uses features from a syntactic parse to permute source words into a target-language-like order . This paper presents a method for inducing parse trees automatically from a parallel corpus , instead of using a supervised parser trained on a treebank . These induced parses are used to preorder source sentences . 
We demonstrate that our induced parser is effective : it not only improves a state-of-the-art phrase-based system with integrated reordering , but also approaches the performance of a recent preordering method based on a supervised parser . These results show that the syntactic structure which is relevant to MT pre-ordering can be learned automatically from parallel text , thus establishing a new application for unsupervised grammar induction . Recent work in statistical machine translation ( MT ) has demonstrated the effectiveness of syntactic preordering : an approach that permutes source sentences into a target-like order as a pre-processing step , using features of a source-side syntactic parse ( Collins et al . , 2005 ; Xu et al . , 2009 ) . Syntactic pre-ordering is particularly effective at applying structural transformations , such as the ordering change from a subject-verb-object ( SVO ) language like English to a subject-object-verb ( SOV ) language like Japanese . However , state-of-the-art pre-ordering methods require a supervised syntactic parser to provide structural information about each sentence . We propose a method that learns both a parsing model and a reordering model directly from a word-aligned parallel corpus . Our approach , which we call Structure Induction for Reordering ( STIR ) , requires no syntactic annotations to train , but approaches the performance of a recent syntactic pre-ordering method in a large-scale English-Japanese MT system . STIR predicts a pre-ordering via two pipelined models : ( 1 ) parsing and ( 2 ) tree reordering . The first model induces a binary parse , which defines the space of possible reorderings . In particular , only trees that properly separate verbs from their object noun phrases will license an SVO to SOV transformation . The second model locally permutes this tree . Our approach resembles work with binary synchronous grammars ( Wu , 1997 ) , but is distinct in its emphasis on monolingual parsing as a first phase , and in selecting reorderings without the aid of a target-side language model . The parsing model is trained to maximize the conditional likelihood of trees that license the reorderings implied by observed word alignments in a parallel corpus . This objective differs from those of previous grammar induction models , which typically focus on succinctly explaining the observed source language corpus via latent hierarchical structure ( Pereira and Schabes , 1992 ; Klein and Manning , 2002 ) . Our convex objective allows us to train a feature-rich log-linear parsing model , even without supervised treebank data . Focusing on pre-ordering for MT leads to a new perspective on the canonical NLP task of grammar induction-one which marries the wide-spread scientific interest in unsupervised parsing models with a clear application and extrinsic evaluation methodology . To support this perspective , we highlight several avenues of future research throughout the paper . We evaluate STIR in a large-scale English-Japanese machine translation system . We measure how closely our predicted reorderings match those implied by hand-annotated word alignments . STIR approaches the performance of the state-of-the-art pre-ordering method described in Genzel ( 2010 ) , which learns reordering rules for supervised treebank parses . STIR gives a translation improvement of 3.84 BLEU over a standard phrase-based system with an integrated reordering model . We have demonstrated that induced parses suffice for pre-ordering . 
We hope that future work in grammar induction will also consider pre-ordering as an extrinsic evaluation .", "challenge": "Applying syntactic pre-ordering to statistical machine translation is effective especially for language pairs with different word orders; however, current methods require a supervised syntactic parser.", "approach": "They propose an unsupervised method that can build parsers for pre-ordering using a word-aligned parallel corpus without syntactic annotations.", "outcome": "The pre-ordering parsers obtained by the proposed method can perform better reordering than state-of-the-art methods and are also effective with machine translation systems."} {"id": "P15-1168", "document": "Recently , neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering . However , the previous neural models can not extract the complicated feature compositions as the traditional methods with discrete features . In this paper , we propose a gated recursive neural network ( GRNN ) for Chinese word segmentation , which contains reset and update gates to incorporate the complicated combinations of the context characters . Since GRNN is relatively deep , we also use a supervised layer-wise training method to avoid the problem of gradient diffusion . Experiments on the benchmark datasets show that our model outperforms the previous neural network models as well as the state-of-the-art methods . Unlike English and other western languages , Chinese does not delimit words by white-space . Therefore , word segmentation is a preliminary and important pre-process for Chinese language processing . Most previous systems address this problem by treating this task as a sequence labeling problem and have achieved great success . Due to the nature of supervised learning , the performance of these models is greatly affected by the design of features . These features are explicitly represented by the different combinations of context characters , which are based on linguistic intuition and statistical information . However , the number of features could be so large that the resulting models are too large to use in practice and prone to overfit on the training corpus . Figure 1 : Illustration of our model for Chinese word segmentation . The solid nodes indicate the active neurons , while the hollow ones indicate the suppressed neurons . Specifically , the links denote the information flow , where the solid edges denote the acceptation of the combinations while the dashed edges mean rejection of that . As shown in the right figure , we receive a score vector for tagging target character \" \u5730 \" by incorporating all the combination information . Recently , neural network models have been increasingly focused on for their ability to minimize the effort in feature engineering . Collobert et al . ( 2011 ) developed a general neural network architecture for sequence labeling tasks . Following this work , many methods ( Zheng et al . , 2013 ; Pei et al . , 2014 ; Qi et al . , 2014 ) applied the neural network to Chinese word segmentation and achieved a performance that approaches the state-of-the-art methods . However , these neural models just concatenate the embeddings of the context characters , and feed them into a neural network . Since the concatenation operation is relatively simple , it is difficult to model the complicated features as the traditional discrete feature based models .
Although the complicated interactions of inputs can be modeled by the deep neural network , the previous neural model shows that the deep model can not outperform the one with a single non-linear model . Therefore , the neural model only captures the interactions by the simple transition matrix and the single non-linear transformation . These dense features extracted via these simple interactions are not nearly as good as the substantial discrete features in the traditional methods . In this paper , we propose a gated recursive neural network ( GRNN ) to model the complicated combinations of characters , and apply it to Chinese word segmentation task . Inspired by the success of gated recurrent neural network ( Chung et al . , 2014 ) , we introduce two kinds of gates to control the combinations in recursive structure . We also use the layer-wise training method to avoid the problem of gradient diffusion , and the dropout strategy to avoid the overfitting problem . Figure 1 gives an illustration of how our approach models the complicated combinations of the context characters . Given a sentence \" \u96e8 ( Rainy ) \u5929 ( Day ) \u5730\u9762 ( Ground ) \u79ef\u6c34 ( Accumulated water ) \" , the target character is \" \u5730 \" . This sentence is very complicated because each consecutive two characters can be combined as a word . To predict the label of the target character \" \u5730 \" under the given context , GRNN detects the combinations recursively from the bottom layer to the top . Then , we receive a score vector of tags by incorporating all the combination information in network . The contributions of this paper can be summarized as follows : \u2022 We propose a novel GRNN architecture to model the complicated combinations of the context characters . GRNN can select and preserve the useful combinations via reset and update gates . These combinations play a similar role in the feature engineering of the traditional methods with discrete features . \u2022 We evaluate the performance of Chinese word segmentation on PKU , MSRA and CTB6 benchmark datasets which are commonly used for evaluation of Chinese word segmentation . Experiment results show that our model outperforms other neural network models , and achieves state-of-the-art performance . In this paper , we propose a gated recursive neural network ( GRNN ) to explicitly model the combinations of the characters for Chinese word segmentation task . Each neuron in GRNN can be regarded as a different combination of the input characters . Thus , the whole GRNN has an ability to simulate the design of the sophisticated features in traditional methods . Experiments show that our proposed model outperforms the state-of-the-art methods on three popular benchmark datasets . Despite Chinese word segmentation being a specific case , our model can be easily generalized and applied to other sequence labeling tasks . 
In future work , we would like to investigate our proposed GRNN on other sequence labeling tasks .", "challenge": "Existing neural network-based models with embedding concatenation cannot process complex discrete features, and larger models have problems such as overfitting and underperforming simple linear systems.", "approach": "They propose a gated recursive neural network with two novel gating mechanisms trained with a layer-wise training method and dropout to mitigate gradient diffusion and overfitting.", "outcome": "The proposed gated recursive neural network outperforms traditional linear models with feature engineering and also neural network-based models on the task of Chinese word segmentation."} {"id": "2022.acl-long.215", "document": "In contrast to recent advances focusing on high-level representation learning across modalities , in this work we present a self-supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by visual objects or spoken words . Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities . Beyond the shared embedding space , we propose a Cross-Modal Code Matching objective that forces the representations from different views ( modalities ) to have a similar distribution over the discrete embedding space such that cross-modal objects / actions localization can be performed without direct supervision . We show that the proposed discretized multi-modal fine-grained representation ( e.g. , pixel / word / frame ) can complement high-level summary representations ( e.g. , video / sentence / waveform ) for improved performance on cross-modal retrieval tasks . We also observe that the discretized representation uses individual clusters to represent the same semantic concept across modalities . Toddlers acquire much of their knowledge through grounded learning : visual concepts can be acquired through language , and language acquisition emerges through visual interaction . Inspired by this type of grounded learning , a rich body of representation learning research ( Harwath et al . , 2018 ; Miech et al . , 2020 ; Alayrac et al . , 2020 ; Monfort et al . , 2021 ; Luo et al . , 2021 ) has been exploring the potential to learn from multi-modal data such as video-text , video-audio , and image-audio pairs . These works typically focus on learning a joint embedding space between different modalities , in which high-level summary representations are extracted as embedding vectors . These embedding vectors often represent entire video clips , spoken utterances , or sentences as single vectors , and can be useful on tasks such as cross-modal data retrieval , e.g. , finding the most similar visual scene according to a spoken language description . The predominant approach to learning these embedding vectors is to use modality-independent encoders , and while this has been successful for downstream retrieval tasks , it makes it difficult to compare the activations of the encoders from different modalities . Further , the space of continuous embedding vectors is unbounded , which makes interpreting the learned representations challenging .
The discrete embedding space enables model interpretability since there are a finite number of embedding vectors which are shared across modalities . Besides the shared embedding space , we propose a Cross-Modal Code Matching ( CMCM ) objective that guides the embedding space to capture cross-modal correspondences of concepts , actions , and words . This not only improves downstream performance on retrieval , but also allows us to better interpret what the model recognized through cross-modal grounded learning . To verify the effectiveness of our proposed learning framework , we conducted experiments in several cross-modal domains , including video-text , video-audio , and image-audio . We found consistent improvements over baseline models , verifying that the gain was not restricted to the particular choice of network architecture , input modalities , or dataset . We also demonstrate the interpretability of the fine-grained discrete representations by showing the cross-modal relations between the embedding vectors and semantic concepts appearing in the input modalities . Our approach also enables cross-modal concept localization without requiring any labels during training . Figure 1 : An overview of the proposed framework . The proposed shared discrete embedding space ( green region , described in Section 2.2 ) is based on a cross-modal representation learning paradigm ( blue / yellow regions , described in Section 2.1 ) . The proposed Cross-Modal Code Matching L_CMCM objective is detailed in Section 2.3 and Figure 2 . In this paper , we proposed a framework for cross-modal representation learning with a discrete embedding space that is shared amongst different modalities and enables model interpretability . We also propose a Cross-Modal Code Matching objective that encourages models to represent cross-modal semantic concepts in the embedding space . Combining our discrete embedding space and objective with existing cross-modal representation learning models improves retrieval performance on video-text , video-audio , and image-audio datasets . We also analyze the shared embedding space and find that semantically related video and audio inputs tend to use the same codewords .", "challenge": "Existing multi-modal representation learning approaches aim to obtain high-level summary embeddings that are useful for retrieval but make it difficult to compare encoder activations from different modalities.", "approach": "They propose a self-supervised learning framework with a discrete embedding space shared across modalities by vector quantization and an objective to match representations between modalities.", "outcome": "The proposed representations can complement high-level representations for retrieval and they use individual clusters to represent the same semantic concept across modalities demonstrating their interpretability."} +{"id": "D14-1035", "document": "PCFGs with latent annotations have been shown to be a very effective model for phrase structure parsing . We present a Bayesian model and algorithms based on a Gibbs sampler for parsing with a grammar with latent annotations . For PCFG-LA , we present an additional Gibbs sampler algorithm to learn annotations from training data , which are parse trees with coarse ( unannotated ) symbols . We show that a Gibbs sampling technique is capable of parsing sentences in a wide variety of languages and producing results that are on-par with or surpass previous approaches .
Our results for Kinyarwanda and Malagasy in particular demonstrate that low-resource language parsing can benefit substantially from a Bayesian approach . Despite great progress over the past two decades on parsing , relatively little work has considered the problem of creating accurate parsers for low-resource languages . Existing work in this area focuses primarily on approaches that use some form of cross-lingual bootstrapping to improve performance . For instance , Hwa et al . ( 2005 ) use a parallel Chinese / English corpus and an English dependency grammar to induce an annotated Chinese corpus in order to train a Chinese dependency grammar . Kuhn ( 2004b ) also considers the benefits of using multiple languages to induce a monolingual grammar , making use of a measure for data reliability in order to weight training data based on confidence of annotation . Bootstrapping approaches such as these achieve markedly improved results , but they are dependent on the existence of a parallel bilingual corpus . Very few such corpora are readily available , particularly for low-resource languages , and creating such corpora obviously presents a challenge for many practical applications . Kuhn ( 2004a ) shows some of the difficulty in handling low-resource languages by examining various tasks using Q'anjob'al as an example . Another approach is that of Bender et al . ( 2002 ) , who take a more linguistically-motivated approach by making use of linguistic universals to seed newly developed grammars . This substantially reduces the effort by making it unnecessary to learn the basic parameters of a language , but it lacks the robustness of grammars learned from data . Recent work on Probabilistic Context-Free Grammars with latent annotations ( PCFG-LA ) ( Matsuzaki et al . , 2005 ; Petrov et al . , 2006 ) have shown them to be effective models for syntactic parsing , especially when less training material is available ( Liang et al . , 2009 ; Shindo et al . , 2012 ) . The coarse nonterminal symbols found in vanilla PCFGs are refined by latent variables ; these latent annotations can model subtypes of grammar symbols that result in better grammars and enable better estimates of grammar productions . In this paper , we provide a Gibbs sampler for learning PCFG-LA models and show its effectiveness for parsing lowresource languages such as Malagasy and Kinyawanda . Previous PCFG-LA work focuses on the problem of parameter estimation , including expectationmaximization ( EM ) ( Matsuzaki et al . , 2005 ; Petrov et al . , 2006 ) , spectral learning ( Cohen et al . , 2012 ; Cohen et al . , 2013 ) , and variational inference ( Liang et al . , 2009 ; Wang and Blunsom , 2013 ) . Regardless of inference method , previous work has used the same method to parse new sentences : a Viterbi parse under a new sentence-specific PCFG obtained from an approximation of the original grammar ( Matsuzaki et al . , 2005 ) . Here , we provide an alternative approach to parsing new sentences : an extension of the Gibbs sampling algorithm of Johnson et al . ( 2007 ) , which learns rule probabilities in an unsupervised PCFG . We use a Gibbs sampler to collect sampled trees theoretically distributed from the true posterior distribution in order to parse . Priors in a Bayesian model can control the sparsity of grammars ( which the insideoutside algorithm fails to do ) , while naturally incorporating smoothing into the model ( Johnson et al . , 2007 ; Liang et al . , 2009 ) . 
We also build a Bayesian model for parsing with a treebank , and incorporate information from training data as a prior . Moreover , we extend the Gibbs sampler to learn and parse PCFGs with latent annotations . Learning the latent annotations is a compute-intensive process . We show how a small amount of training data can be used to bootstrap : after running a large number of sampling iterations on a small set , the resulting parameters are used to seed a smaller number of iterations on the full training data . This allows us to employ more latent annotations while maintaining reasonable training times and still making full use of the available training data . To determine the cross-linguistic applicability of these methods , we evaluate on a wide variety of languages with varying amounts of available training data . We use English and Chinese as examples of languages with high data availability , while Italian , Malagasy , and Kinyarwanda provide examples of languages with little available data . We find that our technique comes near state of the art results on large datasets , such as those for Chinese and English , and it provides excellent results on limited datasets -both artificially limited in the case of English , and naturally limited in the case of Italian , Malagasy , and Kinyarwanda . This , combined with its ability to run off-the-shelf on new languages without any supporting materials such as parallel corpora , make it a valuable technique for the parsing of low-resource languages . Our experiments demonstrate that sampling vanilla PCFGs , as well as PCFGs with latent annotations , is feasible with the use of a Gibbs sampler technique and produces results that are in line with previous parsers on controlled test sets . Our results also show that our methods are effective on a wide variety of languages-including two low-resource languageswith no language-specific model modifications needed . Additionally , although not a uniform winner , the Gibbs-PCFG shows a propensity for performing well on naturally small corpora ( here , KIN / MLG ) . The exact reason for this remains slightly unclear , but the fact that a similar advantage is not found for extremely small versions of large corpora indicates that our approach may be particularly well-suited for application in real low-resource environments as opposed to a sim- Having established this procedure and its relative tolerance for low amounts of data , we would like to extend the model to make use of partial bracketing information instead of complete trees , perhaps in the form of Fragmentary Unlabeled Dependency Grammar annotations ( Schneider et al . , 2013 ) . This would allow the sampling procedure to potentially operate using corpora with lighter annotations than full trees , making initial annotation effort not quite as heavy and potentially increasing the amount of available data for low-resource languages . 
Additionally , using the expert partial annotations to help restrict the sample space could provide good gains in terms of training time .", "challenge": "Existing parsing methods for low-resource languages either use bootstrapping which requires a parallel corpus that is not often available or a linguistically-motivated approach which lacks robustness.", "approach": "They present a Gibbs sampler-based Bayesian approach with a grammar with latent annotations and its extension which learns rule probabilities in an unsupervised PCFG.", "outcome": "The proposed technique performs on par with or surpasses previous approaches in many languages but especially performs well in Kinyarwanda and Malagasy with small corpora."} +{"id": "P11-2109", "document": "In the face of sparsity , statistical models are often interpolated with lower order ( backoff ) models , particularly in Language Modeling . In this paper , we argue that there is a relation between the higher order and the backoff model that must be satisfied in order for the interpolation to be effective . We show that in n-gram models , the relation is trivially held , but in models that allow arbitrary clustering of context ( such as decision tree models ) , this relation is generally not satisfied . Based on this insight , we also propose a generalization of linear interpolation which significantly improves the performance of a decision tree language model . A prominent use case for Language Models ( LMs ) in NLP applications such as Automatic Speech Recognition ( ASR ) and Machine Translation ( MT ) is selection of the most fluent word sequence among multiple hypotheses . Statistical LMs formulate the problem as the computation of the model 's probability to generate the word sequence w_1 w_2 ... w_m \u2261 w_1^m , assuming that higher probability corresponds to more fluent hypotheses . LMs are often represented in the following generative form : p(w_1^m) = \u220f_{i=1}^{m} p(w_i | w_1^{i-1}) In the following discussion , we will refer to the function p(w_i | w_1^{i-1}) as a language model . Note the context space for this function , w_1^{i-1} , is arbitrarily long , necessitating some independence assumption , which usually consists of reducing the relevant context to n-1 immediately preceding tokens : p(w_i | w_1^{i-1}) \u2248 p(w_i | w_{i-n+1}^{i-1}) These distributions are typically estimated from observed counts of n-grams w_{i-n+1}^{i} in the training data . The context space is still far too large ; therefore , the models are recursively smoothed using lower order distributions . For instance , in a widely used n-gram LM , the probabilities are estimated as follows : p(w_i | w_{i-n+1}^{i-1}) = \u03c1(w_i | w_{i-n+1}^{i-1}) + \u03b3(w_{i-n+1}^{i-1}) \u2022 p(w_i | w_{i-n+2}^{i-1}) ( 1 ) where \u03c1 is a discounted probability 1 . In addition to n-gram models , there are many other ways to estimate probability distributions p(w_i | w_{i-n+1}^{i-1}) ; in this work , we are particularly interested in models involving decision trees ( DTs ) . As in n-gram models , DT models also often utilize interpolation with lower order models ; however , there are issues concerning the interpolation which arise from the fact that decision trees permit arbitrary clustering of context , and these issues are the main subject of this paper . The main contribution of this paper is the insight that in the standard recursive backoff there is an implied relation between the backoff and the higher order models , which is essential for adequate performance .
When this relation is not satisfied other interpolation methods should be employed ; hence , we propose a generalization of linear interpolation that significantly outperforms the standard form in such a scenario .", "challenge": "They show that current statistical language models assume a relation between the backoff and the higher models although it is not always guaranteed.", "approach": "They propose a generalization of linear interpolation for cases when the required relation by current methods is not satisfied.", "outcome": "The proposed method significantly improves the performance of a decision tree language model in concerned cases."} +{"id": "D13-1137", "document": "In this paper , we present a recursive neural network ( RNN ) model that works on a syntactic tree . Our model differs from previous RNN models in that the model allows for an explicit weighting of important phrases for the target task . We also propose to average parameters in training . Our experimental results on semantic relation classification show that both phrase categories and task-specific weighting significantly improve the prediction accuracy of the model . We also show that averaging the model parameters is effective in stabilizing the learning and improves generalization capacity . The proposed model marks scores competitive with state-of-the-art RNN-based models . Recursive Neural Network ( RNN ) models are promising deep learning models which have been applied to a variety of natural language processing ( NLP ) tasks , such as sentiment classification , compound similarity , relation classification and syntactic parsing ( Hermann and Blunsom , 2013 ; Socher et al . , 2012 ; Socher et al . , 2013 ) . RNN models can represent phrases of arbitrary length in a vector space of a fixed dimension . Most of them use minimal syntactic information ( Socher et al . , 2012 ) . Recently , Hermann and Blunsom ( 2013 ) proposed a method for leveraging syntactic information , namely CCG combinatory operators , to guide composition of phrases in RNN models . While their models were successfully applied to binary sentiment classification and compound similarity tasks , there are questions yet to be answered , e.g. , whether such enhancement is beneficial in other NLP tasks as well , and whether a similar improvement can be achieved by using syntactic information of more commonly available types such as phrase categories and syntactic heads . In this paper , we present a supervised RNN model for a semantic relation classification task . Our model is different from existing RNN models in that important phrases can be explicitly weighted for the task . Syntactic information used in our model includes part-of-speech ( POS ) tags , phrase categories and syntactic heads . POS tags are used to assign vector representations to word-POS pairs . Phrase categories are used to determine which weight matrices are chosen to combine phrases . Syntactic heads are used to determine which phrase is weighted during combining phrases . To incorporate task-specific information , phrases on the path between entity pairs are further weighted . The second contribution of our work is the introduction of parameter averaging into RNN models . In our preliminary experiments , we observed that the prediction performance of the model often fluctuates significantly between training iterations . This fluctuation not only leads to unstable performance of the resulting models , but also makes it difficult to fine-tune the hyperparameters of the model . 
Inspired by Swersky et al . ( 2010 ) , we propose to average the model parameters in the course of training . A recent technique for deep learning models of similar vein is dropout ( Hinton et al . , 2012 ) , but averaging is simpler to implement . Our experimental results show that our model performs better than standard RNN models . Figure 1 : A recursive representation of a phrase \" a word vector \" with POS tags of the words ( DT , NN and NN respectively ) . For example , the two word-POS pairs \" word NN \" and \" vector NN \" with a syntactic category N are combined to represent the phrase \" word vector \" . By averaging the model parameters , our model achieves performance competitive with the MV-RNN model in Socher et al . ( 2012 ) , without using computationally expensive word-dependent matrices . We have presented an averaged RNN model for semantic relation classification . Our experimental results show that syntactic information such as phrase categories and heads improves the performance , and the task-specific weighting is also beneficial . The results also demonstrate that averaging the model parameters not only stabilizes the learning but also improves the generalization capacity of the model . As future work , we plan to combine deep learning models with richer information such as predicate-argument structures .", "challenge": "Incorporating syntactic information into RNN models to guide the composition of phrases is shown effective for classification tasks but not studied for other tasks.", "approach": "They propose a supervised RNN model which uses syntactic information to explicitly weight important phrases for a semantic relation classification task.", "outcome": "Experiments show that using syntactic information such as phrase categories and heads, and also task-specific weighting contribute to improved performance."} +{"id": "P04-1017", "document": "Coreferential information of a candidate , such as the properties of its antecedents , is important for pronoun resolution because it reflects the salience of the candidate in the local discourse . Such information , however , is usually ignored in previous learning-based systems . In this paper we present a trainable model which incorporates coreferential information of candidates into pronoun resolution . Preliminary experiments show that our model will boost the resolution performance given the right antecedents of the candidates . We further discuss how to apply our model in real resolution where the antecedents of the candidate are found by a separate noun phrase resolution module . The experimental results show that our model still achieves better performance than the baseline . In recent years , supervised machine learning approaches have been widely explored in reference resolution and achieved considerable success ( Ge et al . , 1998 ; Soon et al . , 2001 ; Ng and Cardie , 2002 ; Strube and Muller , 2003 ; Yang et al . , 2003 ) . Most learning-based pronoun resolution systems determine the reference relationship between an anaphor and its antecedent candidate only from the properties of the pair . The knowledge about the context of anaphor and antecedent is nevertheless ignored . However , research in centering theory ( Sidner , 1981 ; Grosz et al . , 1983 ; Grosz et al . , 1995 ; Tetreault , 2001 ) has revealed that the local focusing ( or centering ) also has a great effect on the processing of pronominal expressions .
The choices of the antecedents of pronouns usually depend on the center of attention throughout the local discourse segment ( Mitkov , 1999 ) . To determine the salience of a candidate in the local context , we may need to check the coreferential information of the candidate , such as the existence and properties of its antecedents . In fact , such information has been used for pronoun resolution in many heuristicbased systems . The S-List model ( Strube , 1998 ) , for example , assumes that a co-referring candidate is a hearer-old discourse entity and is preferred to other hearer-new candidates . In the algorithms based on the centering theory ( Brennan et al . , 1987 ; Grosz et al . , 1995 ) , if a candidate and its antecedent are the backwardlooking centers of two subsequent utterances respectively , the candidate would be the most preferred since the CONTINUE transition is always ranked higher than SHIFT or RETAIN . In this paper , we present a supervised learning-based pronoun resolution system which incorporates coreferential information of candidates in a trainable model . For each candidate , we take into consideration the properties of its antecedents in terms of features ( henceforth backward features ) , and use the supervised learning method to explore their influences on pronoun resolution . In the study , we start our exploration on the capability of the model by applying it in an ideal environment where the antecedents of the candidates are correctly identified and the backward features are optimally set . The experiments on MUC-6 ( 1995 ) and MUC-7 ( 1998 ) corpora show that incorporating coreferential information of candidates boosts the system performance significantly . Further , we apply our model in the real resolution where the antecedents of the candidates are provided by separate noun phrase resolution modules . The experimental results show that our model still outperforms the baseline , even with the low recall of the non-pronoun resolution module . The remaining of this paper is organized as follows . Section 2 discusses the importance of the coreferential information for candidate evaluation . Section 3 introduces the baseline learning framework . Section 4 presents and evaluates the learning model which uses backward fea-tures to capture coreferential information , while Section 5 proposes how to apply the model in real resolution . Section 6 describes related research work . Finally , conclusion is given in Section 7 . In this paper we have proposed a model which incorporates coreferential information of candi-dates to improve pronoun resolution . When evaluating a candidate , the model considers its adjacent antecedent by describing its properties in terms of backward features . We first examined the effectiveness of the model by applying it in an optimal environment where the closest antecedent of a candidate is obtained correctly . The experiments show that it boosts the success rate of the baseline system for both MUC-6 ( 4.7 % ) and MUC-7 ( 3.5 % ) . Then we proposed how to apply our model in the real resolution where the antecedent of a non-pronoun is found by an additional non-pronoun resolution module . Our model can still produce Success improvement ( 4.7 % for MUC-6 and 1.8 % for MUC-7 ) against the baseline system , despite the low recall of the non-pronoun resolution module . In the current work we restrict our study only to pronoun resolution . 
In fact , the coreferential information of candidates is expected to be also helpful for non-pronoun resolution . We would like to investigate the influence of the coreferential factors on general NP reference resolution in our future work .", "challenge": "Existing models only use pair-based features but not context for pronoun resolution regardless of its utility.", "approach": "They propose a supervised approach that uses coreferential information of candidates from the local context.", "outcome": "The proposed method improves over the baselines both in ideal and practical setups."} +{"id": "2022.acl-long.30", "document": "Modelling prosody variation is critical for synthesizing natural and expressive speech in endto-end text-to-speech ( TTS ) systems . In this paper , a cross-utterance conditional VAE ( CUC-VAE ) is proposed to estimate a posterior probability distribution of the latent prosody features for each phoneme by conditioning on acoustic features , speaker information , and text features obtained from both past and future sentences . At inference time , instead of the standard Gaussian distribution used by VAE , CUC-VAE allows sampling from an utterance-specific prior distribution conditioned on cross-utterance information , which allows the prosody features generated by the TTS system to be related to the context and is more similar to how humans naturally produce prosody . The performance of CUC-VAE is evaluated via a qualitative listening test for naturalness , intelligibility and quantitative measurements , including word error rates and the standard deviation of prosody attributes . Experimental results on LJ-Speech and LibriTTS data show that the proposed CUC-VAE TTS system improves naturalness and prosody diversity with clear margins . Recently , abundant research have been performed on modelling variations other than the input text in synthesized speech such as background noise , speaker information , and prosody , as those directly influence the naturalness and expressiveness of the generated audio . Prosody , as the focus of this paper , collectively refers to the stress , intonation , and rhythm in speech , and has been an increasingly popular research aspect in end-to-end TTS systems ( van den Oord et al . , 2016 ; Wang et al . , 2017 ; Stanton et al . , 2018 ; Elias et al . , 2021 ; Chen et al . , 2021 ) . Some previous work captured prosody features ex-plicitly using either style tokens or variational autoencoders ( VAEs ) ( Kingma and Welling , 2014 ; Hsu et al . , 2019a ) which encapsulate prosody information into latent representations . Recent work achieved fine-grained prosody modelling and control by extracting prosody features at phoneme or word-level ( Lee and Kim , 2019 ; Sun et al . , 2020a , b ) . However , the VAE-based TTS system lacks control over the latent space where the sampling is performed from a standard Gaussian prior during inference . Therefore , recent research ( Dahmani et al . , 2019 ; Karanasou et al . , 2021 ) employed a conditional VAE ( CVAE ) ( Sohn et al . , 2015 ) to synthesize speech from a conditional prior . Meanwhile , pre-trained language model ( LM ) such as bidirectional encoder representation for Transformers ( BERT ) ( Devlin et al . , 2019 ) has also been applied to TTS systems ( Hayashi et al . , 2019 ; Kenter et al . , 2020 ; Jia et al . , 2021 ; Futamata et al . , 2021 ; Cong et al . , 2021 ) to estimate prosody attributes implicitly from pre-trained text representations within the utterance or the segment . 
Efforts have been devoted to include cross-utterance information in the input features to improve the prosody modelling of auto-regressive TTS ( Xu et al . , 2021 ) . To generate more expressive prosody , while maintaining high fidelity in synthesized speech , a cross-utterance conditional VAE ( CUC-VAE ) component is proposed , which is integrated into and jointly optimised with FastSpeech 2 ( Ren et al . , 2021 ) , a commonly used non-autoregressive end-toend TTS system . Specifically , the CUC-VAE TTS system consists of cross-utterance embedding ( CUembedding ) and cross-utterance enhanced CVAE ( CU-enhanced CVAE ) . The CU-embedding takes BERT sentence embeddings from surrounding utterances as inputs and generates phoneme-level CUembedding using a multi-head attention ( Vaswani et al . , 2017 ) layer where attention weights are derived from the encoder output of each phoneme as well as the speaker information . The CU-enhanced CVAE is proposed to improve prosody variation and to address the inconsistency between the standard Gaussian prior , which the VAE-based TTS system is sampled from , and the true prior of speech . Specifically , the CU-enhanced CVAE is a fine-grained VAE that estimates the posterior of latent prosody features for each phoneme based on acoustic features , cross-utterance embedding , and speaker information . It improves the encoder of standard VAE with an utterance-specific prior . To match the inference with training , the utterancespecific prior , jointly optimised with the system , is conditioned on the output of CU-embedding . Latent prosody features are sampled from the derived utterance-specific prior instead of a standard Gaussian prior during inference . The proposed CUC-VAE TTS system was evaluated on the LJ-Speech read English data and the LibriTTS English audiobook data . In addition to the sample naturalness measured via subjective listening tests , the intelligibility is measured using word error rate ( WER ) from an automatic speech recognition ( ASR ) system , and diversity in prosody was measured by calculating standard deviations of prosody attributes among all generated audio samples of an utterance . Experimental results showed that the system with CUC-VAE achieved a much better prosody diversity while improving both the naturalness and intelligibility compared to the standard FastSpeech 2 baseline and two variants . The rest of this paper is organised as follows . Section 2 introduces the background and related work . Section 3 illustrates the proposed CUC-VAE TTS system . Experimental setup and results are shown in Section 4 and Section 5 , with conclusions in Section 6 . In this paper , a non-autoregressive CUC-VAE TTS system was proposed to synthesize speech with better naturalness and more prosody diversity . CUC-VAE TTS system estimated the posterior distribution of latent prosody features for each phone based on cross-utterance information in addition to the acoustic features and speaker information . The generated audio was sampled from an utterancespecific prior distribution , approximated based on cross-utterance information . Experiments were conducted to evaluate the proposed CUC-VAE TTS system with metrics including MOS , preference rate , WER , and the standard deviation of prosody attributes . 
Experiment results showed that the proposed CUC-VAE TTS system improved both the naturalness and prosody diversity in the generated audio samples , which outperformed the baseline in all metrics with clear margins .", "challenge": "Variational autoencoder-based text-to-speech systems encapsulate prosody information into latent representations lacking control over prosody at inference time that contribute to naturalness and expressiveness.", "approach": "They propose to estimate a posterior probability distribution of latent prosody features conditioned on acoustic features, speaker information, and text features to generate expressive prosody.", "outcome": "The proposed system improves naturalness and prosody diversity over the FastSpeech baseline and two variants on a qualitative listening test with LJ-Speech and LibriTTS data."} +{"id": "2022.acl-long.204", "document": "Responsing with image has been recognized as an important capability for an intelligent conversational agent . Yet existing works only focus on exploring the multimodal dialogue models which depend on retrieval-based methods , but neglecting generation methods . To fill in the gaps , we first present a new task : multimodal dialogue response generation ( MDRG)given the dialogue context , one model needs to generate a text or an image as response . Learning such a MDRG model often requires multimodal dialogues containing both texts and images which are difficult to obtain . Motivated by the challenge in practice , we consider MDRG under a natural assumption that only limited training examples are available . Under such a low-resource setting , we devise a novel conversational agent , Divter , in order to isolate parameters that depend on multimodal dialogues from the entire generation model . By this means , the major part of the model can be learned from a large number of text-only dialogues and textimage pairs respectively , then the whole parameters can be well fitted using just a few training examples . Extensive experiments demonstrate our method achieves state-of-the-art results in both automatic and human evaluation , and can generate informative text and high-resolution image responses . With the development of instant messaging technology in the recent decades , the intermediary of online conversation has also changed from pure text to a variety of visual modalities ( e.g. , image , gif animation , short video ) . Similar to communicating by the messenger tools ( e.g. , Facebook , WhatsApp , WeChat ) in reality , an excellent intelligent conversational agent should not only be able to converse freely with plain text , but also have the ability to perceive and share the real visual physical world . Although recently some large-scale pre-trained text-only dialogue generation models , such as Di-aloGPT ( Zhang et al . , 2020 ) , Blender ( Roller et al . , 2021 ) , Meena ( Adiwardana et al . , 2020 ) , have shown excellent performance , they still can not rely exclusively on plain text to completely simulate the rich experience of visual perception . Recently , various vision-language tasks have been introduced and attracted widespread attention , such as visual question answering ( Ren et al . , 2015 ; Lu et al . , 2016 ; Anderson et al . , 2018 ; Li et al . , 2019a ; Huang et al . , 2020 ) , image captioning ( Xu et al . , 2015 ; Anderson et al . , 2016 ; Ghanimifard and Dobnik , 2019 ; Cornia et al . , 2020 ) , image-grounded dialogue ( Das et al . , 2017 ; Yang et al . , 2021 ; Agarwal et al . , 2020 ; Qi et al . 
, 2020 ; Chen et al . , 2021 ; Liang et al . , 2021 ) . Specifically , in human conversations , the images can easily show rich visual perception , which is hard to be expressed by plain text . As the example shown in Figure 1 , images are required in at least three circumstances : ( i ) the other speaker has little knowledge ( e.g. , colorful Burano , in the 1st image ) of the objects only you had seen ; ( ii ) to share more details ( e.g. , red wine and pasta , in the 2nd image ) of the objects even you have common knowledge of them ; ( iii ) to express your emotions ( e.g. , happy , in the 3rd image ) about a specific event . An existing related task is photo sharing ( Zang et al . , 2021 ) , which aims to select and share the image based on the textual context , is a challenging task that requires models to understand the background story which complemented by human imaginations , rather than to locate related visual objects or explicitly mention main visible content in the image as the previous works do . Zang et al . ( 2021 ) propose a retrieval-based method to resolve the above challenge . However , the performance of the retrieval-based method is limited in specific domains by the size of the pre-constructed conversational history repository , especially for long-tail contexts that are not covered in the history , where the set of image responses of a retrieval system is also fixed . On the other hand , a better way is to generate a new one accordingly . In this paper , we formulate a new problem : Multimodal Dialogue Response Generation ( MDRG ) , that is , given the dialogue context , the model should not only generate a pure text response but also have the capacity to generate a multimodal response ( e.g. , containing both image and text ) . We argue that there are still some hindrances to application , since ( 1 ) the sophisticated neural end-to-end architecture will overfit to very few well-annotated training data ( e.g. , a few existing 10k multimodal dialogues ) . Evidence is that when discussing the topics outside the training data domain , its performance drops dramatically ; and ( 2 ) as human effort is expensive , it is not easy to collect enough training data for a new domain . Based on the above facts , we take a step further to extend the assumption of MDRG to a low-resource setting where only a few multimodal dialogues are available . To tackle the above challenges , our key idea is to make parameters that rely on multimodal dialogues small and independent by disentangling textual response generation and image response generation , and thus we can learn the major part of the generation model from text-only dialogues and < image description , image > pairs that are much easier to be obtained . Specifically , we present Divter , a novel conversational agent powered by large-scale visual world experiences . As shown in Figure 2 , our Divter is made up of two Transformer-based ( Vaswani et al . , 2017a ) components : a multimodal dialogue response generator , and a text-to-image translator . Divter takes the dialogue context as input , then generates a textual sequence which may contains a text response or a textual image description or both of them . The text-to-image translator takes above image description as condition , then generates a realistic and consistent high resolution image . 
Both components are independent with the opposite knowledge , and thus can be pre-trained using a large number of text-only dialogues and the < image description , image > pairs respectively . The end-to-end Divter depends on the multimodal dialogues constructed as the tuple : ( dialogue context , text response / < image description , image > ) , but the joint learning and estimation of the two components just require a few training examples depending on specific domains . Contributions of this work are three-fold : \u2022 To the best of our knowledge , it is the first work on the multimodal dialogue response generation . We explore the task under a lowresource setting where only a few multimodal dialogues are assumed available . \u2022 We present Divter , a novel conversational agent which can effectively understand dialogue context and generate informative text and high-resolution image responses . \u2022 Extensive experiments on PhotoChat Corpus ( Zang et al . , 2021 ) indicate the effectiveness of Divter , it achieves a significant improvement with pure text dialogue generation model and retrieval-based image sharing method . 2 Related Work In this paper , we explore multimodal dialogue response generation under a low-resource setting . To overcome the challenges from the new task and insufficient training data , we propose Divter , a neural conversational agent which incorporates text-toimage generation into text-only dialogue response generation , in which most parameters do not rely on the training data any more and can be estimated from large scale textual open domain dialogues and < image description , image > pairs . Extensive experiments demonstrate Divter achieves state-of-the-art results in automatic and human evaluation . In the future , we will explore more efficient methods to inject more modalities into response generation .", "challenge": "While responses with images are important for conversational agents, existing methods only depend on retrieval-based methods but not generation, and obtaining multimodal dialogues is difficult.", "approach": "They propose a multimodal task and an agent which can learn a response generator and text-to-image translator, and use a few samples to join them.", "outcome": "The proposed model achieves state-of-the-art results in automatic and human evaluation and can generate informative text and high-resolution images on PhotoChat Corpus."} +{"id": "N19-1257", "document": "Aspect-based sentiment analysis involves the recognition of so called opinion target expressions ( OTEs ) . To automatically extract OTEs , supervised learning algorithms are usually employed which are trained on manually annotated corpora . The creation of these corpora is labor-intensive and sufficiently large datasets are therefore usually only available for a very narrow selection of languages and domains . In this work , we address the lack of available annotated data for specific languages by proposing a zero-shot cross-lingual approach for the extraction of opinion target expressions . We leverage multilingual word embeddings that share a common vector space across various languages and incorporate these into a convolutional neural network architecture for OTE extraction . Our experiments with 5 languages give promising results : We can successfully train a model on annotated data of a source language and perform accurate prediction on a target language without ever using any annotated samples in that target language . 
Depending on the source and target language pairs , we reach performances in a zero-shot regime of up to 77 % of a model trained on target language data . Furthermore , we can increase this performance up to 87 % of a baseline model trained on target language data by performing cross-lingual learning from multiple source languages . In recent years , there has been an increasing interest in developing sentiment analysis models that predict sentiment at a more fine-grained level than at the level of a complete document . A paradigm coined as Aspect-based Sentiment Analysis ( ABSA ) addresses this need by defining the sentiment expressed in a text relative to an opinion target ( also called aspect ) . Consider the following example from a restaurant review : \" Moules were excellent , lobster ravioli was VERY salty ! \" In this example , there are two sentiment statements , one positive and one negative . The positive one is indicated by the word \" excellent \" and is expressed towards the opinion target \" Moules \" . The second , negative sentiment , is indicated by the word \" salty \" and is expressed towards the \" lobster ravioli \" . A key task within this fine-grained sentiment analysis consists of identifying so called opinion target expressions ( OTE ) . To automatically extract OTEs , supervised learning algorithms are usually employed which are trained on manually annotated corpora . In this paper , we are concerned with how to transfer classifiers trained on one domain to another domain . In particular , we focus on the transfer of models across languages to alleviate the need for multilingual training data . We propose a model that is capable of accurate zero-shot cross-lingual OTE extraction , thus reducing the reliance on annotated data for every language . Similar to Upadhyay et al . ( 2018 ) , our model leverages multilingual word embeddings ( Smith et al . , 2017 ; Lample et al . , 2018 ) that share a common vector space across various languages . The shared space allows us to transfer a model trained on source language data to predict OTEs in a target language for which no ( i.e. zero-shot setting ) or only small amounts of data are available , thus allowing to apply our model to under-resourced languages . Our main contributions can be summarized as follows : \u2022 We present the first approach for zero-shot cross-lingual opinion target extraction and achieve up to 87 % of the performance of a monolingual baseline . \u2022 We investigate the benefit of using multi-ple source languages for cross-lingual learning and show that we can improve by 6 to 8 points in F 1 -Score compared to a model trained on a single source language . \u2022 We investigate the benefit of augmenting the zero-shot approach with additional data points from the target language . We observe that we can save hundreds of annotated data points by employing a cross-lingual approach . \u2022 We compare two methods for obtaining cross-lingual word embeddings on the task . In this work , we presented a method for crosslingual and zero-shot extraction of opinion target expressions which we evaluated on 5 languages . Our approach uses multilingual word embeddings that are aligned into a single vector space to allow for cross-lingual transfer of models . Using English as a source language in a zeroshot setting , our approach was able to reach an F 1score of 0.50 for Spanish and 0.46 for Dutch . 
This corresponds to relative performances of 74 % and 77 % compared to a baseline system trained on target language data . By using multiple source languages , we increased the zero-shot performance to F 1 -scores of 0.58 and 0.53 , respectively , which correspond to 85 % and 87 % in relative terms . We investigated the benefit of augmenting the zeroshot approach with additional data points from the target language . Here , we observed that we can save several hundreds of annotated data points by employing a cross-lingual approach . Among the 5 considered languages , Turkish seemed to benefit the least from cross-lingual learning in all experiments . The reason for this might be that Turkish is the only agglutinative language in the dataset . Further , we compared two approaches for aligning multilingual word embeddings in a single vector space and found their results to vary for individual language pairs but to be comparable overall . Lastly , we compared our multilingual model with the state-of-the-art for all languages and saw that we achieve competitive performances for some languages and even present the best system for Russian and Turkish .", "challenge": "Existing methods for opinion target expression recognition use supervised algorithms which require expensive manually annotated corpora unable to scale in languages and domains.", "approach": "They propose to apply a zero-shot cross-lingual approach by training a convolutional neural network with multilingual word embeddings only on the source language.", "outcome": "The proposed approach achieves up to 77% of the model trained on the target language and 87% when trained on multiple source languages."} +{"id": "2020.emnlp-main.298", "document": "Neural models have achieved remarkable success on relation extraction ( RE ) benchmarks . However , there is no clear understanding which type of information affects existing RE models to make decisions and how to further improve the performance of these models . To this end , we empirically study the effect of two main information sources in text : textual context and entity mentions ( names ) . We find that ( i ) while context is the main source to support the predictions , RE models also heavily rely on the information from entity mentions , most of which is type information , and ( ii ) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks . Based on the analyses , we propose an entity-masked contrastive pre-training framework for RE to gain a deeper understanding on both textual context and type information while avoiding rote memorization of entities or use of superficial cues in mentions . We carry out extensive experiments to support our views , and show that our framework can improve the effectiveness and robustness of neural models in different RE scenarios . All the code and datasets are released at https://github.com / thunlp/ RE-Context-or-Names . Relation extraction ( RE ) aims at extracting relational facts between entities from text , e.g. , extracting the fact ( SpaceX , founded by , Elon Musk ) from the sentence in Figure 1 . Utilizing the structured knowledge captured by RE , we can construct or complete knowledge graphs ( KGs ) , and eventually support downstream applications like question answering ( Bordes et al . , 2014 ) , dialog systems ( Madotto et al . , 2018 ) engines ( Xiong et al . , 2017 ) . 
With the recent advance of deep learning , neural relation extraction ( NRE ) models ( Socher et al . , 2012 ; Liu et al . , 2013 ; Baldini Soares et al . , 2019 ) have achieved the latest state-of-the-art results and some of them are even comparable with human performance on several public RE benchmarks . The success of NRE models on current RE benchmarks makes us wonder which type of information these models actually grasp to help them extract correct relations . The analysis of this problem may indicate the nature of these models and reveal their remaining problems to be further explored . Generally , in a typical RE setting , there are two main sources of information in text that might help RE models classify relations : textual context and entity mentions ( names ) . From human intuition , textual context should be the main source of information for RE . Researchers have reached a consensus that there exist interpretable patterns in textual context that express relational facts . For example , in Figure 1 , \" ... be founded ... by ... \" is a pattern for the relation founded by . The early RE systems ( Huffman , 1995 ; Califf and Mooney , 1997 ) formalize patterns into string templates and determine relations by matching these templates . The later neural models ( Socher et al . , 2012 ; Liu et al . , 2013 ) prefer to encode patterns into distributed representations and then predict relations via representation matching . Compared with rigid string templates , distributed representations used in neural models are more generalized and perform better . Besides , entity mentions also provide much information for relation classification . As shown in Figure 1 , we can acquire the types of entities from their mentions , which could help to filter out those impossible relations . Besides , if these entities can be linked to KGs , models can introduce external knowledge from KGs to help RE ( Zhang et al . , 2019 ; Peters et al . , 2019 ) . Moreover , for pre-trained language models , which are widely adopted for recent RE models , there may be knowledge about entities inherently stored in their parameters after pre-training ( Petroni et al . , 2019 ) . In this paper , we carry out extensive experiments to study to what extent RE models rely on the two information sources . We find out that : ( 1 ) Both context and entity mentions are crucial for RE . As shown in our experiments , while context is the main source to support classification , entity mentions also provide critical information , most of which is the type information of entities . ( 2 ) Existing RE benchmarks may leak shallow cues via entity mentions , which contribute to the high performance of existing models . Our experiments show that models still can achieve high performance only given entity mentions as input , suggesting that there exist biased statistical cues from entity mentions in these datasets . The above observations demonstrate how existing models work on RE datasets , and suggest a way to further improve RE models : we should enhance them via better understanding context and utilizing entity types , while preventing them from simply memorizing entities or exploiting biased cues in mentions . From these points , we investigate an entity-masked contrastive pre-training framework for RE . We use Wikidata to gather sentences that may express the same relations , and let the model learn which sentences are close and which are not in relational semantics by a contrastive objective . 
In this process , we randomly mask entity mentions to avoid being biased by them . We show its effectiveness across several settings and benchmarks , and suggest that better pre-training technique is a reliable direction towards better RE . In this paper , we thoroughly study how textual context and entity mentions affect RE models respectively . Experiments and case studies prove that ( i ) both context and entity mentions ( mainly as type information ) provide critical information for relation extraction , and ( ii ) existing RE datasets may leak superficial cues through entity mentions and models may not have the strong abilities to understand context as we expect . From these points , we propose an entity-masked contrastive pre-training framework for RE to better understand textual context and entity types , and experimental results prove the effectiveness of our method . In the future , we will continue to explore better RE pre-training techniques , especially with a focus on open relation extraction and relation discovery . These problems require models to encode good relational representation with limited or even zero annotations , and we believe that our pre-trained RE models will make a good impact in the area .", "challenge": "Which textual information models are used for relation extraction tasks remain unknown hindering to improvement of their performance.", "approach": "They perform an analysis and use the findings to propose an entity-masked contrastive pretraining to avoid models memorizing entities or using biased cues in mentions.", "outcome": "They found that existing benchmarks leak superficial cues from entity mentions, and the proposed model can achieve robust performance without relying on these features."} +{"id": "D19-1547", "document": "As a promising paradigm , interactive semantic parsing has shown to improve both semantic parsing accuracy and user confidence in the results . In this paper , we propose a new , unified formulation of the interactive semantic parsing problem , where the goal is to design a modelbased intelligent agent . The agent maintains its own state as the current predicted semantic parse , decides whether and where human intervention is needed , and generates a clarification question in natural language . A key part of the agent is a world model : it takes a percept ( either an initial question or subsequent feedback from the user ) and transitions to a new state . We then propose a simple yet remarkably effective instantiation of our framework , demonstrated on two text-to-SQL datasets ( WikiSQL and Spider ) with different state-of-the-art base semantic parsers . Compared to an existing interactive semantic parsing approach that treats the base parser as a black box , our approach solicits less user feedback but yields higher run-time accuracy . 1 Natural language interfaces that allow users to query data and invoke services without programming have been identified as a key application of semantic parsing ( Berant et al . , 2013 ; Thomason et al . , 2015 ; Dong and Lapata , 2016 ; Zhong et al . , 2017 ; Campagna et al . , 2017 ; Su et al . , 2017 ) . 
However , existing semantic parsing technologies often fall short when deployed in practice , facing several challenges : ( 1 ) user utterances can be inherently ambiguous or vague , making it difficult to get the correct result in one shot , ( 2 ) the accuracy of state-of-the-art semantic parsers are still not high enough for real use , and ( 3 ) it is hard for users to validate the semantic parsing results , especially with mainstream neural network models that are known for the lack of interpretability . In response to these challenges , interactive semantic parsing has been proposed recently as a practical solution , which includes human users in the loop to resolve utterance ambiguity , boost system accuracy , and improve user confidence via human-machine collaboration ( Li and Jagadish , 2014 ; He et al . , 2016 ; Chaurasia and Mooney , 2017 ; Su et al . , 2018 ; Gur et al . , 2018 ; Yao et al . , 2019 ) . For example , Gur et al . ( 2018 ) built the DialSQL system to detect errors in a generated SQL query and request user selection on alternative options via dialogues . Similarly , Chaurasia and Mooney ( 2017 ) and Yao et al . ( 2019 ) enabled semantic parsers to ask users clarification questions while generating an If-Then program . Su et al . ( 2018 ) showed that users overwhelmingly preferred an interactive system over the noninteractive counterpart for natural language interfaces to web APIs . While these recent studies successfully demonstrated the value of interactive semantic parsing in practice , they are often bound to a certain type of formal language or dataset , and the designs are thus ad-hoc and not easily generalizable . For example , DialSQL only applies to SQL queries on the WikiSQL dataset ( Zhong et al . , 2017 ) , and it is non-trivial to extend it to other formal languages ( e.g. , \u03bb-calculus ) or even just to more complex SQL queries beyond the templates used to construct the dataset . Aiming to develop a general principle for building interactive semantic parsing systems , in this work we propose model-based interactive semantic parsing ( MISP ) , where the goal is to design a model-based intelligent agent ( Russell and Norvig , 2009 ) that can interact with users to complete a semantic parsing task . Taking an utterance ( e.g. , a natural language question ) as input , the agent forms the semantic parse ( e.g. , a SQL query ) in steps , potentially soliciting user feedback in some steps to correct parsing errors . As illustrated in Figure 1 , a MISP agent maintains its state as the current semantic parse and , via an error detector , decides whether and where human intervention is needed ( the action ) . This action is performed by a question generator ( the actuator ) , which generates and presents to the user a humanunderstandable question . A core component of the agent is a world model ( Ha and Schmidhuber , 2018 ) ( hence model-based ) , which incorporates user feedback from the environment and transitions to a new agent state ( e.g. , an updated semantic parse ) . This process repeats until a terminal state is reached . Such a design endows a MISP agent with three crucial properties of interactive semantic parsing : ( 1 ) being introspective of the reasoning process and knowing when it may need human supervision , ( 2 ) being able to solicit user feedback in a human-friendly way , and ( 3 ) being able to incorporate user feedback ( through state transitions controlled by the world model ) . 
The MISP framework provides several advantages for designing an interactive semantic parser compared to the existing ad-hoc studies . For instance , the whole problem is conceptually reduced to building three key components ( i.e. , the world model , the error detector , and the actuator ) , and can be handled and improved separately . While each component may need to be tailored to the specific task , the general framework remains unchanged . In addition , the formulation of a modelbased intelligent agent can facilitate the application of other machine learning techniques like reinforcement learning . To better demonstrate the advantages of the MISP framework , we propose a simple yet re-markably effective instantiation for the text-to-SQL task . We show the effectiveness of the framework based on three base semantic parsers ( SQLNet , SQLova and SyntaxSQLNet ) and two datasets ( WikiSQL and Spider ) . We empirically verified that with a small amount of targeted , testtime user feedback , interactive semantic parsers improve the accuracy by 10 % to 15 % absolute . Compared to an existing interactive semantic parsing system , DialSQL ( Gur et al . , 2018 ) , our approach , despite its much simpler yet more general system design , achieves better parsing accuracy by asking only half as many questions . This work proposes a new and unified framework for the interactive semantic parsing task , named MISP , and instantiates it successfully on the textto-SQL task . We outline several future directions to further improve MISP-SQL and develop MISP systems for other semantic parsing tasks : Improving Agent Components . The flexibility of MISP allows improving on each agent compo-nent separately . Take the error detector for example . One can augment the probability-based error detector in MISP-SQL with probability calibration , which has been shown useful in aligning model confidence with its reliability ( Guo et al . , 2017 ) . One can also use learning-based approaches , such as a reinforced decision policy ( Yao et al . , 2019 ) , to increase the rate of identifying wrong and solvable predictions . Lifelong Learning for Semantic Parsing . Learning from user feedback is a promising solution for lifelong semantic parser improvement ( Iyer et al . , 2017 ; Padmakumar et al . , 2017 ; Labutov et al . , 2018 ) . However , this may lead to a non-stationary environment ( e.g. , changing state transition ) from the perspective of the agent , making its training ( e.g. , error detector learning ) unstable . In the context of dialog systems , Padmakumar et al . ( 2017 ) suggests that this effect can be mitigated by jointly updating the dialog policy and the semantic parser batchwisely . We leave exploring this aspect in our task to future work . Scaling Up . It is important for MISP agents to scale to larger backend data sources ( e.g. , knowledge bases like Freebase or Wikidata ) . To this end , one can improve MISP from at least three aspects : ( 1 ) using more intelligent interaction designs ( e.g. 
, free-form text as user feedback ) to speed up the hypothesis space searching globally , ( 2 ) strengthening the world model to nail down a smaller set of plausible hypotheses based on both the initial question and user feedback , and ( 3 ) training the agent to learn to improve the parsing accuracy while minimizing the number of required human interventions over time .", "challenge": "Interactive semantic parsing systems can use human feedback to improve their outputs, but existing designs are ad-hoc and bound to particular formal languages or datasets.", "approach": "They propose to frame interactive semantic parsing around a model-based agent that interacts with users only where needed to correct parsing errors.", "outcome": "The proposed simple agent outperforms an existing interactive parser while asking users only half as many questions."} +{"id": "P06-1031", "document": "This paper proposes a method for detecting errors in article usage and singular plural usage based on the mass count distinction . First , it learns decision lists from training data generated automatically to distinguish mass and count nouns . Then , in order to improve its performance , it is augmented by feedback that is obtained from the writing of learners . Finally , it detects errors by applying rules to the mass count distinction . Experiments show that it achieves a recall of 0.71 and a precision of 0.72 and outperforms other methods used for comparison when augmented by feedback . Although several researchers ( Kawai et al . , 1984 ; McCoy et al . , 1996 ; Schneider and McCoy , 1998 ; Tschichold et al . , 1997 ) have shown that rulebased methods are effective to detecting grammatical errors in the writing of learners of English , it has been pointed out that it is hard to write rules for detecting errors concerning the articles and singular plural usage . To be precise , it is hard to write rules for distinguishing mass and count nouns which are particularly important in detecting these errors ( Kawai et al . , 1984 ) . The major reason for this is that whether a noun is a mass noun or a count noun greatly depends on its meaning or its surrounding context ( refer to Allan ( 1980 ) and Bond ( 2005 ) for details of the mass count distinction ) . The above errors are very common among Japanese learners of English ( Kawai et al . , 1984 ; Izumi et al . , 2003 ) . This is perhaps because the Japanese language does not have a mass count distinction system similar to that of English . Thus , it is favorable for error detection systems aiming at Japanese learners to be capable of detecting these errors . In other words , such systems need to somehow distinguish mass and count nouns . This paper proposes a method for distinguishing mass and count nouns in context to complement the conventional rules for detecting grammatical errors . In this method , first , training data , which consist of instances of mass and count nouns , are automatically generated from a corpus . Then , decision lists for distinguishing mass and count nouns are learned from the training data . Finally , the decision lists are used with the conventional rules to detect the target errors . The proposed method requires a corpus to learn decision lists for distinguishing mass and count nouns . General corpora such as newspaper articles can be used for the purpose .
However , a drawback to it is that there are differences in character between general corpora and the writing of non-native learners of English ( Granger , 1998 ; Chodorow and Leacock , 2000 ) . For instance , Chodorow and Leacock ( 2000 ) point out that the word concentrate is usually used as a noun in a general corpus whereas it is a verb 91 % of the time in essays written by non-native learners of English . Consequently , the differences affect the performance of the proposed method . In order to reduce the drawback , the proposed method is augmented by feedback ; it takes as feedback learners ' essays whose errors are corrected by a teacher of English ( hereafter , referred to as the feedback corpus ) . In essence , the feedback corpus could be added to a general corpus to generate training data . Or , ideally training data could be generated only from the feedback corpus just as from a general corpus . However , this causes a serious problem in practice since the size of the feedback corpus is normally far smaller than that of a general corpus . To make it practical , this paper discusses the problem and explores its solution . The rest of this paper is structured as follows . Section 2 describes the method for detecting the target errors based on the mass count distinction . Section 3 explains how the method is augmented by feedback . Section 4 discusses experiments conducted to evaluate the proposed method . This paper has proposed a feedback-augmented method for distinguishing mass and count nouns to complement the conventional rules for detecting grammatical errors . The experiments have shown that the proposed method detected 71 % of the target errors in the writing of Japanese learners of English with a precision of 72 % when it was augmented by feedback . From the results , we conclude that the feedback-augmented method is effective to detecting errors concerning the articles and singular plural usage in the writing of Japanese learners of English . Although it is not taken into account in this paper , the feedback corpus contains further useful information . For example , we can obtain training data consisting of instances of errors by comparing the feedback corpus with its original corpus . Also , comparing it with the results of detection , we can know performance of each rule used in the detection , which make it possible to increase or decrease their log-likelihood ratios according to their performance . We will investigate how to exploit these sources of information in future work .", "challenge": "Although they are common, writing rules for detecting errors concerning the articles and singular plural usage especially for distinguishing mass and count nouns is hard.", "approach": "They propose to complement conventional rules with decision lists learned from automatically generated training data and feedback obtained from the writing of learners.", "outcome": "The proposed method outperforms other methods by achieving a recall of 0.71 and a precision of 0.72 when augmented by feedback."} +{"id": "P16-1031", "document": "Sentiment classification aims to automatically predict sentiment polarity ( e.g. , positive or negative ) of user generated sentiment data ( e.g. , reviews , blogs ) . Due to the mismatch among different domains , a sentiment classifier trained in one domain may not work well when directly applied to other domains . 
Thus , domain adaptation for sentiment classification algorithms are highly desirable to reduce the domain discrepancy and manual labeling costs . To address the above challenge , we propose a novel domain adaptation method , called Bi-Transferring Deep Neural Networks ( BTDNNs ) . The proposed BTDNNs attempts to transfer the source domain examples to the target domain , and also transfer the target domain examples to the source domain . The linear transformation of BTDNNs ensures the feasibility of transferring between domains , and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner . As a result , the transferred source domain is supervised and follows similar distribution as the target domain . Therefore , any supervised method can be used on the transferred source domain to train a classifier for sentiment classification in a target domain . We conduct experiments on a benchmark composed of reviews of 4 types of Amazon products . Experimental results show that our proposed approach significantly outperforms the several baseline methods , and achieves an accuracy which is competitive with the state-of-the-art method for domain adaptation . With the rise of social media ( e.g. , blogs and social networks etc . ) , more and more user generated sentiment data have been shared on the Web ( Pang et al . , 2002 ; Pang and Lee , 2008 ; Liu , 2012 ; Zhou et al . , 2011 ) . They exist in the form of user reviews on shopping or opinion sites , in posts of blogs / questions or customer feedbacks . This has created a surge of research in sentiment classification ( or sentiment analysis ) , which aims to automatically determine the sentiment polarity ( e.g. , positive or negative ) of user generated sentiment data ( e.g. , reviews , blogs , questions ) . Machine learning algorithms have been proved promising and widely used for sentiment classification ( Pang et al . , 2002 ; Pang and Lee , 2008 ; Liu , 2012 ) . However , the performance of these models relies on manually labeled training data . In many practical cases , we may have plentiful labeled data in the source domain , but very few or no labeled data in the target domain with a different data distribution . For example , we may have many labeled books reviews , but we are interested in detecting the polarity of electronics reviews . Reviews for different products might have different vocabularies , thus classifiers trained on one domain often fail to produce satisfactory results when transferring to another domain . This has motivated much research on cross-domain ( domain adaptation ) sentiment classification which transfers the knowledge from the source domain to the target domain ( Thomas et al . , 2006 ; Snyder and Barzilay , 2007 ; Blitzer et al . , 2006 ; Blitzer et al . , 2007 ; Daume III , 2007 ; Li and Zong , 2008 ; Li et al . , 2009 ; Pan et al . , 2010 ; Kumar et al . , 2010 ; Glorot et al . , 2011 ; Chen et al . , 2011a ; Chen et al . , 2012 ; Li et al . , 2012 ; Xia et al . , 2013a ; Li et al . , 2013 ; Zhou et al . , 2015a ; Zhuang et al . , 2015 ) . Depending on whether the labeled data are available for the target domain , cross-domain sen-timent classification can be divided into two categories : supervised domain adaptation and unsupervised domain adaptation . 
In scenario of supervised domain adaptation , labeled data is available in the target domain but the number is usually too small to train a good sentiment classifier , while in unsupervised domain adaptation only unlabeled data is available in the target domain , which is more challenging . This work focuses on the unsupervised domain adaptation problem of which the essence is how to employ the unlabeled data of target domain to guide the model learning from the labeled source domain . The fundamental challenge of cross-domain sentiment classification lies in that the source domain and the target domain have different data distribution . Recent work has investigated several techniques for alleviating the domain discrepancy : instance-weight adaptation ( Huang et al . , 2007 ; Jiang and Zhai , 2007 ; Li and Zong , 2008 ; Mansour et al . , 2009 ; Dredze et al . , 2010 ; Chen et al . , 2011b ; Chen et al . , 2011a ; Chen et al . , 2012 ; Li et al . , 2013 ; Xia et al . , 2013a ) and feature representation adaptation ( Thomas et al . , 2006 ; Snyder and Barzilay , 2007 ; Blitzer et al . , 2006 ; Blitzer et al . , 2007 ; Li et al . , 2009 ; Pan et al . , 2010 ; Zhou et al . , 2015a ; Zhuang et al . , 2015 ) . The first kind of methods assume that some training data in the source domain are very useful for the target domain and these data can be used to train models for the target domain after re-weighting . In contrast , feature representation approaches attempt to develop an adaptive feature representation that is effective in reducing the difference between domains . Recently , some efforts have been initiated on learning robust feature representations with deep neural networks ( DNNs ) in the context of crossdomain sentiment classification ( Glorot et al . , 2011 ; Chen et al . , 2012 ) . Glorot et al . ( 2011 ) proposed to learn robust feature representations with stacked denoising auto-encoders ( SDAs ) ( Vincent et al . , 2008 ) . Denoising auto-encoders are onelayer neural networks that are optimized to reconstruct input data from partial and random corruption . These denoisers can be stacked into deep learning architectures . The outputs of their intermediate layers are then used as input features for SVMs ( Fan et al . , 2008 ) . Chen et al . ( 2012 ) proposed a marginalized SDA ( mSDA ) that addressed the two crucial limitations of SDAs : high computational cost and lack of scalability to highdimensional features . However , these methods learn the unified domain-invariable feature representations by combining the source domain data and that of the target domain data together , which can not well characterize the domain-specific features as well as the commonality of domains . To this end , we propose a Bi-Transferring Deep Neural Networks ( BTDNNs ) which can transfer the source domain examples to the target domain and also transfer the target domain examples to the source domain , as shown in Figure 1 . In BTDNNs , the linear transformation makes the feasibility of transferring between domains , and the linear data reconstruction manner ensures the distribution consistency between the transferred domain and the desirable domain . Specifically , our BTDNNs has one common encoder f c , two decoders g s and g t which can map an example to the source domain and the target domain respectively . 
As a result , the source domain can be transferred to the target domain along with its sentiment label , and any supervised method can be used on the transferred source domain to train a classifier for sentiment classification in the target domain , as the transferred source domain data share the similar distribution as the target domain . Experimental results show that the proposed approach significantly outperforms several baselines , and achieves an accuracy which is competitive with the state-of-the-art method for cross-domain sentiment classification . The remainder of this paper is organized as follows . Section 2 introduces the related work . Section 3 describes our proposed bi-transferring deep neural networks ( BTDNNs ) . Section 4 presents the experimental results . In Section 5 , we conclude with ideas for future research . In this paper , we propose a novel Bi-Transferring Deep Neural Networks ( BTDNNs ) for crossdomain sentiment classification . The proposed BTDNNs attempts to transfer the source domain examples to the target domain , and also transfer the target domain examples to the source domain . The linear transformation of BTDNNs ensures the feasibility of transferring between domains , and the distribution consistency between the transferred domain and the desirable domain is constrained with a linear data reconstruction manner . Experimental results show that BTDNNs significantly outperforms the several baselines , and achieves an accuracy which is competitive with the state-of-the-art method for sentiment classification adaptation . There are some ways in which this research could be continued . First , since deep learning may obtain better generalization on large-scale data sets ( Bengio , 2009 ) , a straightforward path of the future research is to apply the proposed BTDNNs for domain adaptation on a much larger industrial-strength data set of 22 domains ( Glorot et al . , 2011 ) . Second , we will try to investigate the use of the proposed approach for other kinds of data set , such as 20 newsgroups and Reuters-21578 ( Li et al . , 2012 ; Zhuang et al . , 2013 ) .", "challenge": "Sentiment classifiers suffer from out-of-domain inputs and labelled data for supervised domain adaption is not often available limiting the applicability of such a model.", "approach": "They propose a cross-domain adaptation method which can map examples between source and target domains with their labels by a linear transformation.", "outcome": "The proposed model outperforms several baseline methods and is competitive with the state-of-the-art method on sentiment classification tasks from 4 types of Amazon products."} +{"id": "E99-1026", "document": "This paper describes a dependency structure analysis of Japanese sentences based on the maximum entropy models . Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus or phrasal units . The dependency accuracy of our system is 87.2 % using the Kyoto University corpus . We discuss the contribution of each feature set and the relationship between the number of training data and the accuracy . Dependency structure analysis is one of the basic techniques in Japanese sentence analysis . The Japanese dependency structure is usually represented by the relationship between phrasal units called ' bunsetsu . ' The analysis has two conceptual steps . In the first step , a dependency matrix is prepared . 
Each element of the matrix represents how likely one bunsetsu is to depend on the other . In the second step , an optimal set of dependencies for the entire sentence is found . In this paper , we will mainly discuss the first step , a model for estimating dependency likelihood . So far there have been two different approaches to estimating the dependency likelihood , One is the rule-based approach , in which the rules are created by experts and likelihoods are calculated by some means , including semiautomatic corpusbased methods but also by manual assignment of scores for rules . However , hand-crafted rules have the following problems . \u2022 They have a problem with their coverage . Because there are many features to find correct dependencies , it is difficult to find them manually . \u2022 They also have a problem with their consistency , since many of the features compete with each other and humans can not create consistent rules or assign consistent scores . \u2022 As syntactic characteristics differ across different domains , the rules have to be changed when the target domain changes . It is costly to create a new hand-made rule for each domain . At / other approach is a fully automatic corpusbased approach . This approach has the potential to overcome the problems of the rule-based approach . It automatically learns the likelihoods of dependencies from a tagged corpus and calculates the best dependencies for an input sentence . We take this approach . This approach is taken by some other systems ( Collins , 1996 ; Fujio and Matsumoto , 1998 ; Haruno et ah , 1998 ) . The parser proposed by Ratnaparkhi ( Ratnaparkhi , 1997 ) is considered to be one of the most accurate parsers in English . Its probability estimation is based on the maximum entropy models . We also use the maximum entropy model . This model learns the weights of given features from a training corpus . The weights are calculated based on the frequencies of the features in the training data . The set of features is defined by a human . In our model , we use features of bunsetsu , such as character strings , parts of speech , and inflection types of bunsetsu , as well as information between bunsetsus , such as the existence of punctuation , and the distance between bunsetsus . The probabilities of dependencies are estimated from the model by using those features in input sentences . We assume that the overall dependencies in a whole sentence can be determined as the product of the probabilities of all the dependencies in the sentence . Now , we briefly describe the algorithm of dependency analysis . It is said that Japanese dependencies have the following characteristics . ( 1 ) Dependencies are directed from left to right ( 2 ) Dependencies do not cross ( 3 ) A bunsetsu , except for the rightmost one , depends on only one bunsetsu ( 4 ) In many cases , the left context is not necessary to determine a dependency 1 The analysis method proposed in this paper is designed to utilize these features . Based on these properties , we detect the dependencies in a sentence by analyzing it backwards ( from right to left ) . In the past , such a backward algorithm has been used with rule-based parsers ( e.g. , ( Fujita , 1988 ) ) . We applied it to our statistically based approach . Because of the statistical property , we can incorporate a beam search , an effective way of limiting the search space in a backward analysis . This paper described a Japanese dependency structure analysis based on the maximum entropy model . 
Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus or phrasal units . The probabilities of dependencies between bunsetsus are estimated by this model . The dependency accuracy of our system was 87.2 % using the Kyoto University corpus . In our experiments without the feature sets shown in Tables 1 and 2 , we found that some basic and combined features strongly contribute to improve the accuracy . Investigating the relationship between the number of training data and the accuracy , we found that good accuracy can be achieved even with a very small set of training data . We believe that the maximum entropy framework has suitable characteristics for overcoming the data sparseness problem . There are several future directions . In particular , we are interested in how to deal with coordinate structures , since that seems to be the largest problem at the moment .", "challenge": "Existing models that estimate dependency likelihood for Japanese use hand-crafted rules, which suffer from problems such as low coverage, inconsistency, and poor scalability across domains.", "approach": "They propose to use a maximum entropy model which learns weights from a training corpus using human-defined features of bunsetsus, or Japanese phrasal units.", "outcome": "The proposed system achieves 87.2% accuracy on the Kyoto University corpus, and they find that basic and combined features contribute to improvements."} +{"id": "D17-1230", "document": "In this paper , drawing intuition from the Turing test , we propose using adversarial training for open-domain dialogue generation : the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances . We cast the task as a reinforcement learning ( RL ) problem where we jointly train two systems , a generative model to produce response sequences , and a discriminator-analagous to the human evaluator in the Turing test-to distinguish between the human-generated dialogues and the machine-generated ones . The outputs from the discriminator are then used as rewards for the generative model , pushing the system to generate dialogues that mostly resemble human dialogues . In addition to adversarial training we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric , while avoiding a number of potential pitfalls . Experimental results on several metrics , including adversarial evaluation , demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines . Open domain dialogue generation ( Ritter et al . , 2011 ; Sordoni et al . , 2015 ; Xu et al . , 2016 ; Wen et al . , 2016 ; Li et al . , 2016b ; Serban et al . , 2016c Serban et al . , , 2017 ) ) aims at generating meaningful and coherent dialogue responses given the dialogue history . Prior systems , e.g. , phrase-based machine translation systems ( Ritter et al . , 2011 ; Sordoni et al . , 2015 ) or end-to-end neural systems ( Shang et al . , 2015 ; Vinyals and Le , 2015 ; Li et al . , 2016a ; Yao et al . , 2015 ; Luan et al . , 2016 ) approximate such a goal by predicting the next dialogue utterance given the dialogue history using the maximum likelihood estimation ( MLE ) objective . Despite its success , this over-simplified training objective leads to problems : responses are dull , generic ( Sordoni et al . , 2015 ; Serban et al . , 2016a ; Li et al . , 2016a ) , repetitive , and short-sighted ( Li et al .
, 2016d ) . Solutions to these problems require answering a few fundamental questions : what are the crucial aspects that characterize an ideal conversation , how can we quantitatively measure them , and how can we incorporate them into a machine learning system ? For example , Li et al . ( 2016d ) manually define three ideal dialogue properties ( ease of answering , informativeness and coherence ) and use a reinforcement-learning framework to train the model to generate highly rewarded responses . Yu et al . ( 2016b ) use keyword retrieval confidence as a reward . However , it is widely acknowledged that manually defined reward functions ca n't possibly cover all crucial aspects and can lead to suboptimal generated utterances . A good dialogue model should generate utterances indistinguishable from human dialogues . Such a goal suggests a training objective resembling the idea of the Turing test ( Turing , 1950 ) . We borrow the idea of adversarial training ( Goodfellow et al . , 2014 ; Denton et al . , 2015 ) in computer vision , in which we jointly train two models , a generator ( a neural SEQ2SEQ model ) that defines the probability of generating a dialogue sequence , and a discriminator that labels dialogues as human-generated or machine-generated . This discriminator is analogous to the evaluator in the Turing test . We cast the task as a reinforcement learning problem , in which the quality of machinegenerated utterances is measured by its ability to fool the discriminator into believing that it is a human-generated one . The output from the discriminator is used as a reward to the generator , pushing it to generate utterances indistinguishable from human-generated dialogues . The idea of a Turing test-employing an evaluator to distinguish machine-generated texts from human-generated ones-can be applied not only to training but also testing , where it goes by the name of adversarial evaluation . Adversarial evaluation was first employed in Bowman et al . ( 2016 ) to evaluate sentence generation quality , and preliminarily studied for dialogue generation by Kannan and Vinyals ( 2016 ) . In this paper , we discuss potential pitfalls of adversarial evaluations and necessary steps to avoid them and make evaluation reliable . Experimental results demonstrate that our approach produces more interactive , interesting , and non-repetitive responses than standard SEQ2SEQ models trained using the MLE objective function . In this paper , drawing intuitions from the Turing test , we propose using an adversarial training approach for response generation . We cast the model in the framework of reinforcement learning and train a generator based on the signal from a discriminator to generate response sequences indistinguishable from human-generated dialogues . We observe clear performance improvements on multiple metrics from the adversarial training strategy . The adversarial training model should theoretically benefit a variety of generation tasks in NLP . Unfortunately , in preliminary experiments applying the same training paradigm to machine translation , we did not observe a clear performance boost . We conjecture that this is because the adversarial training strategy is more beneficial to tasks in which there is a big discrepancy between the distributions of the generated sequences and the reference target sequences . In other words , the adversarial approach is more beneficial on tasks in which entropy of the targets is high . 
Exploring this relationship further is a focus of our future work . Zhou Yu , Ziyu Xu , Alan W Black , and Alex I Rudnicky . 2016b . Strategy and policy learning for nontask-oriented conversational systems . In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue . page 404 . Yuan Zhang , Regina Barzilay , and Tommi Jaakkola . 2017 . Aspect-augmented adversarial networks for domain adaptation . arXiv preprint arXiv:1701.00188 .", "challenge": "Existing approaches to replace the over-simplified objective for open domain dialogue generation use manually defined reward functions which cannot cover all crucial aspects.", "approach": "They propose the Turing test-inspired adversarial training scheme where the generative model is trained to fool the discriminator as a reinforcement learning problem.", "outcome": "The proposed model can generate more interactive, interesting, and non-repetitive responses than previous baselines on several metrics including adversarial evaluation."} +{"id": "D07-1086", "document": "Query segmentation is the process of taking a user 's search-engine query and dividing the tokens into individual phrases or semantic units . Identification of these query segments can potentially improve both document-retrieval precision , by first returning pages which contain the exact query segments , and document-retrieval recall , by allowing query expansion or substitution via the segmented units . We train and evaluate a machine-learned query segmentation system that achieves 86 % segmentationdecision accuracy on a gold standard set of segmented noun phrase queries , well above recently published approaches . Key enablers of this high performance are features derived from previous natural language processing work in noun compound bracketing . For example , token association features beyond simple N-gram counts provide powerful indicators of segmentation . Billions of times every day , people around the world communicate with Internet search engines via a small text box on a web page . The user provides a sequence of words to the search engine , and the search engine interprets the query and tries to return web pages that not only contain the query tokens , but that are also somehow about the topic or idea that the query terms describe . Recent years have seen a widespread recognition that the user is indeed providing natural language text to the search engine ; query tokens are not independent , unordered symbols to be matched on a web document but rather ordered words and phrases with syntactic relationships . For example , Zhai ( 1997 ) pointed out that indexing on single-word symbols is not able to distinguish a search for \" bank terminology \" from one for \" terminology bank . \" The reader can submit these queries to a current search engine to confirm that modern indexing does recognize the effect of token order on query meaning in some way . Accurately interpreting query semantics also depends on establishing relationships between the query tokens . For example , consider the query \" two man power saw . \" There are a number of possible interpretations of this query , and these can be expressed through a number of different segmentations or bracketings of the query terms : One simple way to make use of these interpretations in search would be to put quotation marks around the phrasal segments to require the search engine to only find pages with exact phrase matches . 
If , as seems likely , the searcher is seeking pages about the large , mechanically-powered two-man saws used by lumberjacks and sawyers to cut big trees , then the first segmentation is correct . Indeed , a phrasal search for \" two man power saw \" on Google does find the device of interest . So does the second interpretation , but along with other , less-relevant pages discussing competitions involving \" two-man handsaw , two-woman handsaw , power saw log bucking , etc . \" The top document returned for the third interpretation , meanwhile , describes a man on a rampage at a subway station with two cordless power saws , while the fourth interpretation finds pages about topics ranging from hockey 's thrilling two-man power play advantage to the man power situation during the Second World War . Clearly , choosing the right segmentation means finding the right documents faster . Query segmentation can also help if insufficient pages are returned for the original query . A technique such as query substitution or expansion ( Jones et al . , 2006 ) can be employed using the segmented units . For example , we could replace the sexist \" two man \" modifier with the politically-correct \" two person \" phrase in order to find additional relevant documents . Without segmentation , expanding via the individual words \" two , \" \" man , \" \" power , \" or \" saw \" could produce less sensible results . In this paper , we propose a data-driven , machinelearned approach to query segmentation . Similar to previous segmentation approaches described in Section 2 , we make a decision to segment or not to segment between each pair of tokens in the query . Unlike previous work , we view this as a classification task where the decision parameters are learned discriminatively from gold standard data . In Section 3 , we describe our approach and the features we use . Section 4 describes our labelled data , as well as the specific tools used for our experiments . Section 5 provides the results of our evaluation , and shows the strong gains in performance possible using a wide set of features within a discriminative framework . We have developed a novel approach to search query segmentation and evaluated this approach on actual user queries , reducing error by 56 % over a recent comparison approach . Gains in performance were made possible by both leveraging recent progress in feature engineering for noun compound bracketing , as well as using a flexible , discriminative incorporation of association information , beyond the decisionboundary tokens . We have created and made available a set of manually-segmented user queries , and thus provided a new testing platform for other researchers in this area . Our initial formulation of query segmentation as a structured learning problem , and our leveraging of association statistics beyond the decision boundary , also provides powerful tools for noun compound bracketing researchers to both move beyond three-word compounds and to adopt discriminative feature weighting techniques . The positive results achieved on this important application should encourage further inter-disciplinary collaboration between noun compound interpretation and information retrieval researchers . 
For example , analysing the semantics of multiword expressions may allow for more-focused query expansion ; knowing to expand \" bank manager \" to include pages describing a \" manager of the bank , \" but not doing the same for non-compositional phrases like \" real estate \" or \" private investigator , \" requires exactly the kind of techniques being developed in the noun compound interpretation community . Thus for query expansion , as for query segmentation , work in natural language processing has the potential to make a real and immediate impact on search-engine technology . The next step in this research is to directly investigate how query segmentation affects search performance . For such an evaluation , we would need to know , for each possible segmentation ( including no segmentation ) , the document retrieval performance . This could be the proportion of returned documents that are deemed to be relevant to the original query . Exactly such an evaluation was recently used by Kumaran and Allan ( 2007 ) for the related task of query contraction . Of course , a dataset with queries and retrieval scores may serve for more than evaluation ; it may provide the examples used by the learning module . That is , the parameters of the contraction or segmentation scoring function could be discriminatively set to optimize the retrieval of the training set queries . A unified framework for query contraction , segmentation , and expansion , all based on discriminatively optimizing retrieval performance , is a very appealing future research direction . In this framework , the size of the training sets would not be limited by human annotation resources , but by the number of queries for which retrieved-document relevance judgments are available . Generating more training examples would allow the use of more powerful , finer-grained lexical features for classification .", "challenge": "Because users provide natural language texts to the search engine, choosing the right segmentations helps for faster retrieval, and when original queries return insufficient pages.", "approach": "They create a manually-segmented user query dataset and propose a classification-based query segmentation system which uses features such as token associations beyond simple N-gram counts.", "outcome": "The proposed system achieves 86% segmentation-decision accuracy on a set of segmented noun phrase queries which is 56% fewer errors than a comparison approach."} +{"id": "P10-1079", "document": "In recent years , research in natural language processing has increasingly focused on normalizing SMS messages . Different well-defined approaches have been proposed , but the problem remains far from being solved : best systems achieve a 11 % Word Error Rate . This paper presents a method that shares similarities with both spell checking and machine translation approaches . The normalization part of the system is entirely based on models trained from a corpus . Evaluated in French by 10-fold-cross validation , the system achieves a 9.3 % Word Error Rate and a 0.83 BLEU score . Introduced a few years ago , Short Message Service ( SMS ) offers the possibility of exchanging written messages between mobile phones . SMS has quickly been adopted by users . These messages often greatly deviate from traditional spelling conventions . As shown by specialists ( Thurlow and Brown , 2003 ; Fairon et al . 
, 2006 ; Bieswanger , 2007 ) , this variability is due to the simultaneous use of numerous coding strategies , like phonetic plays ( 2m1 read ' demain ' , \" tomorrow \" ) , phonetic transcriptions ( kom instead of ' comme ' , \" like \" ) , consonant skeletons ( tjrs for ' toujours ' , \" always \" ) , misapplied , missing or incorrect separators ( j esper for ' j'esp\u00e8re ' , \" I hope \" ; j'croibi1k , instead of ' je crois bien que ' , \" I am pretty sure that \" ) , etc . These deviations are due to three main factors : the small number of characters allowed per text message by the service ( 140 bytes ) , the constraints of the small phones ' keypads and , last but not least , the fact that people mostly communicate between friends and relatives in an informal register . Whatever their causes , these deviations considerably hamper any standard natural language processing ( NLP ) system , which stumbles against so many Out-Of-Vocabulary words . For this reason , as noted by Sproat et al . ( 2001 ) , an SMS normalization must be performed before a more conventional NLP process can be applied . As defined by Yvon ( 2008 ) , \" SMS normalization consists in rewriting an SMS text using a more conventional spelling , in order to make it more readable for a human or for a machine . \" The SMS normalization we present here was developed in the general framework of an SMSto-speech synthesis system1 . This paper , however , only focuses on the normalization process . Evaluated in French , our method shares similarities with both spell checking and machine translation . The machine translation-like module of the system performs the true normalization task . It is entirely based on models learned from an SMS corpus and its transcription , aligned at the character-level in order to get parallel corpora . Two spell checking-like modules surround the normalization module . The first one detects unambiguous tokens , like URLs or phone numbers , to keep them out of the normalization . The second one , applied on the normalized parts only , identifies non-alphabetic sequences , like punctuations , and labels them with the corresponding token . This greatly helps the system 's print module to follow the basic rules of typography . This paper is organized as follows . Section 2 proposes an overview of the state of the art . Section 3 presents the general architecture of our system , while Section 4 focuses on how we learn and combine our normalization models . Section 5 evaluates the system and compares it to previous works . Section 6 draws conclusions and considers some future possible improvements of the method . In this paper , we presented an SMS normalization framework based on finite-state machines and developed in the context of an SMS-to-speech synthesis system . With the intention to avoid wrong modifications of special tokens and to handle word boundaries as easily as possible , we designed a method that shares similarities with both spell checking and machine translation . Our Evaluated by ten-fold cross-validation , the system seems efficient , and the performance in terms of BLEU score and WER are quite encouraging . However , the SER remains too high , which emphasizes the fact that the system needs several improvements . First of all , the model should take phonetic similarities into account , because SMS messages contain a lot of phonetic plays . The phonetic model , for instance , should know that o , au , eau , . . . , aux can all be pronounced [ o ] , while \u00e8 , ais , ait , . . 
. , aient are often pronounced [ E ] . However , unlike Kobus et al . ( 2008a ) , we feel that this model must avoid the normalization step in which the graphemic sequence is converted into phonemes , because this conversion prevents the next steps from knowing which graphemes were in the initial sequence . Instead , we propose to learn phonetic similarities from a dictionary of words with phonemic transcriptions , and to build graphemes-to-graphemes rules . These rules could then be automatically weighted , by learning their frequencies from our aligned corpora . Furthermore , this model should be able to allow for timbre variation , like [ e]-[E ] , in order to allow similarities between graphemes frequently confused in French , like ai ( [ e ] ) and ais / ait / aient ( [ E ] ) . Last but not least , the graphemes-tographemes rules should be contextualized , in order to reduce the complexity of the model . It would also be interesting to test the impact of another lexical language model , learned on non-SMS sentences . Indeed , the lexical model must be learned from sequences of standard written forms , an obvious prerequisite that involves a major drawback when the corpus is made of SMS sentences : the corpus must first be transcribed , an expensive process that reduces the amount of data on which the model will be trained . For this reason , we propose to learn a lexical model from non-SMS sentences . However , the corpus of external sentences should still share two important features with the SMS language : it should mimic the oral language and be as spontaneous as possible . With this in mind , our intention is to gather sentences from Internet forums . But not just any forum , because often forums share another feature with the SMS language : their language is noisy . Thus , the idea is to choose a forum asking its members to pay attention to spelling mistakes and grammatical errors , and to avoid the use of the SMS language .", "challenge": "Non-traditional-spelling conventions in SMS messages require a normalization process before an NLP process can be applied but the current best system performs poorly.", "approach": "They propose a machine translation-like corpus-based model surrounded by two spell checking-like modules composed of unambiguous token detection and non-alphabetic sequence identification.", "outcome": "The proposed method efficiently achieves a 9.3% Word Error Rate and 0.83 BLEU score in French by 10-fold-cross validation."} +{"id": "2021.acl-long.512", "document": "Beam search is a go-to strategy for decoding neural sequence models . The algorithm can naturally be viewed as a subset optimization problem , albeit one where the corresponding set function does not reflect interactions between candidates . Empirically , this leads to sets often exhibiting high overlap , e.g. , strings may differ by only a single word . Yet in use-cases that call for multiple solutions , a diverse or representative set is often desired . To address this issue , we propose a reformulation of beam search , which we call determinantal beam search . Determinantal beam search has a natural relationship to determinantal point processes ( DPPs ) , models over sets that inherently encode intra-set interactions . By posing iterations in beam search as a series of subdeterminant maximization problems , we can turn the algorithm into a diverse subset selection process . 
In a case study , we use the string subsequence kernel to explicitly encourage n-gram coverage in text generated from a sequence model . We observe that our algorithm offers competitive performance against other diverse set generation strategies in the context of language generation , while providing a more general approach to optimizing for diversity . The decoding of neural sequence models is a fundamental component of many tasks in NLP . Yet , many proposed decoding methods aim to produce only a single solution ; further , decoding strategies that provide a set , such as beam search , admit high overlap between solutions . Such approaches fail to reflect that for many NLP tasks,1 there can be multiple correct solutions-or that we may desire a diverse set of solutions . As it stands , standard beam search chooses items based purely on individual scores , with no means for encoding interaction between candidates ; this is the limitation which we attempt to address in this work . We derive determinantal beam search , a novel generalization of beam search that casts subset selection as the subdeterminant optimization problem . Specifically , we formulate each iteration of beam search as a subdeterminant maximization problem parameterized by a positive semi-definite matrix that encodes interactions between the possible candidates ; standard beam search is recovered by a specific diagonal matrix . This framing creates a natural paradigm for taking the relationships between candidates during the decoding process , and can thus assign higher scores to diversified sets ; we show how this approach relates to k-determinantal point processes ( DPPs ) . Given the wealth of research on efficient kernel computation ( Rousu and Shawe-Taylor , 2005 ; Farhan et al . , 2017 ) and DPP inference strategies ( Li et al . , 2016 ; Han et al . , 2017 ; Chen et al . , 2018 ) , we find the impact on runtime to be quite reasonable in comparison to standard decoding techniques . In a case study on neural machine translation ( NMT ) , we demonstrate how to make use of the string subsequence kernel ( Lodhi et al . , 2002 ) to encode the notion of n-gram diversity in the language generation process , allowing us to derive an elegant diverse beam search . Under this scheme , we observe that determinantal beam search generates more diverse sets than standard beam search with minimal trade-off in terms of BLEU . We see improved performance over stochastic beam search ( SBS ; Kool et al . , 2019 ) , which is reported to encourage diversity , and a slight improvement over Vijayakumar et al . ( 2018 ) 's diverse beam search ( DBS ) while providing a more general approach to optimizing for intra-set diversity . We propose determinantal beam search ( DetBS ): a new way of framing beam search that allows us to optimize set generation for diversity and coverage rather than simply individual scores . Formally , we redefine beam search as an iterative subdeterminant maximization problems where we select the approximately maximizing set according to the PSD matrix parameterizing our score function . This gives us the ability to encode the notion of intra-set diversity into the beam search optimization problem . We discuss and experiment with efficient methods for inference and kernel computation that make DetBS an efficient decoding strategy in practice . We use DetBS in the context of language generation , where we explicitly encourage n-gram coverage through the string subsequence kernel . 
In our NMT experiments , we find DetBS generates much more diverse sets of strings than standard beam search and stochastic beam search with a small tradeoff in median BLEU . We observe competitive performance compared with diverse beam search .", "challenge": "Beam search does not reflect the interactions between candidates leading to high overlaps while there are use-cases that call for multiple solutions.", "approach": "They propose determinantal beam search which models over sets while encoding intra-set interactions to generate subsets with high diversity and coverage.", "outcome": "The proposed algorithm performs competitively in BLEU to baseline methods including the stochastic beam search on the machine translation task while optimizing for diversity."} +{"id": "P08-1107", "document": "This paper describes a computational approach to resolving the true referent of a named mention of a person in the body of an email . A generative model of mention generation is used to guide mention resolution . Results on three relatively small collections indicate that the accuracy of this approach compares favorably to the best known techniques , and results on the full CMU Enron collection indicate that it scales well to larger collections . The increasing prevalence of informal text from which a dialog structure can be reconstructed ( e.g. , email or instant messaging ) , raises new challenges if we are to help users make sense of this cacophony . Large collections offer greater scope for assembling evidence to help with that task , but they pose additional challenges as well . With well over 100,000 unique email addresses in the CMU version of the Enron collection ( Klimt and Yang , 2004 ) , common names ( e.g. , John ) might easily refer to any one of several hundred people . In this paper , we associate named mentions in unstructured text ( i.e. , the body of an email and/or the subject line ) to modeled identities . We see at least two direct applications for this work : ( 1 ) helping searchers who are unfamiliar with the contents of an email collection ( e.g. , historians or lawyers ) better understand the context of emails that they find , and ( 2 ) augmenting more typical social networks ( based on senders and recipients ) with additional links based on references found in unstructured text . Most approaches to resolving identity can be decomposed into four sub-problems : ( 1 ) finding a reference that requires resolution , ( 2 ) identifying candidates , ( 3 ) assembling evidence , and ( 4 ) choosing among the candidates based on the evidence . For the work reported in this paper , we rely on the user to designate references requiring resolution ( which we model as a predetermined set of mention-queries for which the correct referent is known ) . Candidate identification is a computational expedient that permits the evidence assembly effort to be efficiently focused ; we use only simple techniques for that task . Our principal contributions are the approaches we take to evidence generation ( leveraging three ways of linking to other emails where evidence might be found : reply chains , social interaction , and topical similarity ) and our approach to choosing among candidates ( based on a generative model of reference production ) . We evaluate the effectiveness of our approach on four collections , three of which have previously reported results for comparison , and one that is considerably larger than the others . The remainder of this paper is as follows . 
Section 2 surveys prior work . Section 3 then describes our approach to modeling identity and ranking candidates . Section 4 presents results , and Section 5 concludes . We have presented an approach to mention resolution in email that flexibly makes use of expanding contexts to accurately resolve the identity of a given mention . Our approach focuses on four naturally occurring contexts in email , including a message , a thread , other emails with senders and/or recipients in common , and other emails with significant topical content in common . Our approach outperforms previously reported techniques and it scales well to larger collections . Moreover , our results serve to highlight the importance of social context when resolving mentions in social media , which is an idea that deserves more attention generally . In future work , we plan to extend our test collection with mention queries that must be resolved in the \" long tail \" of the identity distribution where less evidence is available . We are also interested in exploring iterative approaches to jointly resolving mentions .", "challenge": "The increasing prevalence of informal textual communication such as email and instant messaging creates a cacophony that users struggle to make sense of.", "approach": "They propose a computational approach to resolving the referent of named mentions in the body of an email by decomposing the task into three different modules.", "outcome": "Experiments on three small collections and one large collection show that the proposed approach compares favorably to existing techniques and scales well."} +{"id": "D15-1092", "document": "Recently , neural network based sentence modeling methods have achieved great progress . Among these methods , the recursive neural networks ( RecNNs ) can effectively model the combination of the words in sentence . However , RecNNs need a given external topological structure , like syntactic tree . In this paper , we propose a gated recursive neural network ( GRNN ) to model sentences , which employs a full binary tree ( FBT ) structure to control the combinations in recursive structure . By introducing two kinds of gates , our model can better model the complicated combinations of features . Experiments on three text classification datasets show the effectiveness of our model . Recently , neural network based sentence modeling approaches have been increasingly focused on for their ability to minimize the efforts in feature engineering , such as Neural Bag-of-Words ( NBoW ) , Recurrent Neural Network ( RNN ) ( Mikolov et al . , 2010 ) , Recursive Neural Network ( RecNN ) ( Pollack , 1990 ; Socher et al . , 2013b ; Socher et al . , 2012 ) and Convolutional Neural Network ( CNN ) ( Kalchbrenner et al . , 2014 ; Hu et al . , 2014 ) . Among these methods , recursive neural networks ( RecNNs ) have shown their excellent abilities to model the word combinations in sentence . However , RecNNs require a pre-defined topological structure , like parse tree , to encode sentence , which limits the scope of its application . Cho et al . ( 2014 ) proposed the gated recursive convolutional neural network ( grConv ) by utilizing the directed acyclic graph ( DAG ) structure instead of parse tree * Corresponding author . ( GRNNs ) . Left is a GRNN using a directed acyclic graph ( DAG ) structure . Right is a GRNN using a full binary tree ( FBT ) structure . ( The green nodes , gray nodes and white nodes illustrate the positive , negative and neutral sentiments respectively . ) to model sentences . However , DAG structure is relatively complicated .
The number of the hidden neurons quadratically increases with the length of sentences so that grConv can not effectively deal with long sentences . Inspired by grConv , we propose a gated recursive neural network ( GRNN ) for sentence modeling . Different from grConv , we use the full binary tree ( FBT ) as the topological structure to recursively model the word combinations , as shown in Figure 1 . The number of the hidden neurons linearly increases with the length of sentences . Another difference is that we introduce two kinds of gates , reset and update gates ( Chung et al . , 2014 ) , to control the combinations in recursive structure . With these two gating mechanisms , our model can better model the complicated combinations of features and capture the long dependency interactions . In our previous works , we have investigated several different topological structures ( tree and directed acyclic graph ) to recursively model the semantic composition from the bottom layer to the top layer , and applied them on Chinese word segmentation ( Chen et al . , 2015a ) and dependency parsing ( Chen et al . , 2015b ) tasks . However , these structures are not suitable for modeling sentences . Figure 2 : Architecture of Gated Recursive Neural Network ( GRNN ) . In this paper , we adopt the full binary tree as the topological structure to reduce the model complexity . Experiments on the Stanford Sentiment Treebank dataset ( Socher et al . , 2013b ) and the TREC questions dataset ( Li and Roth , 2002 ) show the effectiveness of our approach . 2 Gated Recursive Neural Network In this paper , we propose a gated recursive neural network ( GRNN ) to recursively summarize the meaning of sentence . GRNN uses full binary tree as the recursive topological structure instead of an external syntactic tree . In addition , we introduce two kinds of gates to model the complicated combinations of features . In future work , we would like to investigate the other gating mechanisms for better modeling the feature combinations .", "challenge": "Despite their high performance, recursive neural networks require an external topological structure, which limits their applications, and DAG-based alternatives quadratically increase computational costs with sentence length.", "approach": "They propose a gated recursive neural network that uses a full binary tree as the topological structure and two kinds of gates to model complex feature combinations.", "outcome": "They show the effectiveness of the proposed model on text classification tasks including the Stanford Sentiment Treebank and TREC question datasets."} +{"id": "H05-1039", "document": "We explore a hybrid approach for Chinese definitional question answering by combining deep linguistic analysis with surface pattern learning . We answer four questions in this study : 1 ) How helpful are linguistic analysis and pattern learning ? 2 ) What kind of questions can be answered by pattern matching ? 3 ) How much annotation is required for a pattern-based system to achieve good performance ? 4 ) What linguistic features are most useful ? Extensive experiments are conducted on biographical questions and other definitional questions . 
Major findings include : 1 ) linguistic analysis and pattern learning are complementary ; both are required to make a good definitional QA system ; 2 ) pattern matching is very effective in answering biographical questions while less effective for other definitional questions ; 3 ) only a small amount of annotation is required for a pattern learning system to achieve good performance on biographical questions ; 4 ) the most useful linguistic features are copulas and appositives ; relations also play an important role ; only some propositions convey vital facts . Due to the ever increasing large amounts of online textual data , learning from textual data is becoming more and more important . Traditional document retrieval systems return a set of relevant documents and leave the users to locate the specific information they are interested in . Question answering , which combines traditional document retrieval and information extraction , solves this problem directly by returning users the specific answers . Research in textual question answering has made substantial advances in the past few years ( Voorhees , 2004 ) . Most question answering research has been focusing on factoid questions where the goal is to return a list of facts about a concept . Definitional questions , however , remain largely unexplored . Definitional questions differ from factoid questions in that the goal is to return the relevant \" answer nuggets \" of information about a query . Identifying such answer nuggets requires more advanced language processing techniques . Definitional QA systems are not only interesting as a research challenge . They also have the potential to be a valuable complement to static knowledge sources like encyclopedias . This is because they create definitions dynamically , and thus answer definitional questions about terms which are new or emerging ( Blair-Goldensoha et al . , 2004 ) . One success in factoid question answering is pattern based systems , either manually constructed ( Soubbotin and Soubbotin , 2002 ) or machine learned ( Cui et al . , 2004 ) . However , it is unknown whether such pure pattern based systems work well on definitional questions where answers are more diverse . Deep linguistic analysis has been found useful in factoid question answering ( Moldovan et al . , 2002 ) and has been used for definitional questions ( Xu et al . , 2004 ; Harabagiu et al . , 2003 ) . Linguistic analy-sis is useful because full parsing captures long distance dependencies between the answers and the query terms , and provides more information for inference . However , merely linguistic analysis may not be enough . First , current state of the art linguistic analysis such as parsing , co-reference , and relation extraction is still far below human performance . Errors made in this stage will propagate and lower system accuracy . Second , answers to some types of definitional questions may have strong local dependencies that can be better captured by surface patterns . Thus we believe that combining linguistic analysis and pattern learning would be complementary and be beneficial to the whole system . Work in combining linguistic analysis with patterns include Weischedel et al . ( 2004 ) and Jijkoun et al . ( 2004 ) where manually constructed patterns are used to augment linguistic features . However , manual pattern construction critically depends on the domain knowledge of the pattern designer and often has low coverage ( Jijkoun et al . , 2004 ) . 
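To make the copula and appositive cues mentioned above concrete, here is a minimal, hypothetical sketch of surface-pattern nugget extraction for a biographical query; it is not the system described in the paper, and the regular expressions, the extract_nuggets function and the example sentences are illustrative assumptions only.

import re

def extract_nuggets(sentences, query):
    """Collect candidate definitional nuggets for `query` from two surface cues."""
    # Appositive cue:  "<query> , <nugget> ,"
    appositive = re.compile(re.escape(query) + r"\s*,\s*([^,]{3,80})\s*,")
    # Copula cue:      "<query> is/was/are/were <nugget>"
    copula = re.compile(re.escape(query) + r"\s+(?:is|was|are|were)\s+(.{3,80}?)[.,;]")
    nuggets = []
    for sentence in sentences:
        for pattern in (appositive, copula):
            match = pattern.search(sentence)
            if match:
                nuggets.append(match.group(1).strip())
    return nuggets

sentences = ["Yao Ming , a center for the Houston Rockets , was born in Shanghai .",
             "Yao Ming is a retired Chinese basketball player ."]
print(extract_nuggets(sentences, "Yao Ming"))

A learned variant would induce and generalize such patterns (for example over POS tags) instead of hand-writing them.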
Automatic pattern derivation is more appealing ( Ravichandran and Hovy , 2002 ) . In this work , we explore a hybrid approach to combining deep linguistic analysis with automatic pattern learning . We are interested in answering the following four questions for Chinese definitional question answering : How helpful are linguistic analysis and pattern learning in definitional question answering ? If pattern learning is useful , what kind of question can pattern matching answer ? How much human annotation is required for a pattern based system to achieve reasonable performance ? We have explored a hybrid approach for definitional question answering by combining deep linguistic analysis and surface pattern learning . For the first time , we have answered four questions regarding Chinese definitional QA : deep linguistic analysis and automatic pattern learning are complementary and may be combined ; patterns are powerful in answering biographical questions ; only a small amount of annotation ( 2 days ) is required to obtain good performance in a biographical QA system ; copulas and appositions are the most useful linguistic features ; relation extraction also helps . Answering \" What-is \" questions is more challenging than answering \" Who-is \" questions . To improve the performance on \" What-is \" questions , we could divide \" What-is \" questions into finer classes such as organization , location , disease , and general substance , and process them specifically . Our current pattern matching is based on simple POS tagging which captures only limited syntactic information . We generalize words to their corresponding POS tags . Another possible improvement is to generalize using automatically derived word clusters , which provide semantic information .", "challenge": "Although there are some open questions such as how helpful are linguistic analysis and pattern learning, the definitional question is under explored.", "approach": "They evaluate a hybrid approach for Chinese definitional question answering by combining deep linguistic analysis with surface pattern learning.", "outcome": "They report findings such as linguistic analysis can complement pattern learning, and that a small amount of annotation is sufficient for a biographical question answering."} +{"id": "D10-1038", "document": "This work concerns automatic topic segmentation of email conversations . We present a corpus of email threads manually annotated with topics , and evaluate annotator reliability . To our knowledge , this is the first such email corpus . We show how the existing topic segmentation models ( i.e. , Lexical Chain Segmenter ( LCSeg ) and Latent Dirichlet Allocation ( LDA ) ) which are solely based on lexical information , can be applied to emails . By pointing out where these methods fail and what any desired model should consider , we propose two novel extensions of the models that not only use lexical information but also exploit finer level conversation structure in a principled way . Empirical evaluation shows that LCSeg is a better model than LDA for segmenting an email thread into topical clusters and incorporating conversation structure into these models improves the performance significantly . With the ever increasing popularity of emails and web technologies , it is very common for people to discuss issues , events , agendas or tasks by email . Effective processing of the email contents can be of great strategic value . In this paper , we study the problem of topic segmentation for emails , i.e. 
, grouping the sentences of an email thread into a set of coherent topical clusters . Adapting the standard definition of topic ( Galley et al . , 2003 ) to conversations / emails , we consider a topic is something about which the participant(s ) discuss or argue or express their opinions . For example , in the email thread shown in Figure 1 , according to the majority of our annotators , participants discuss three topics ( e.g. , ' telecon cancellation ' , ' TAG document ' , and ' responding to I18N ' ) . Multiple topics seem to occur naturally in social interactions , whether synchronous ( e.g. , chats , meetings ) or asynchronous ( e.g. , emails , blogs ) conversations . In multi-party chat ( Elsner and Charniak , 2008 ) report an average of 2.75 discussions active at a time . In our email corpus , we found an average of 2.5 topics per thread . Topic segmentation is often considered a prerequisite for other higher-level conversation analysis and applications of the extracted structure are broad , encompassing : summarization ( Harabagiu and Lacatusu , 2005 ) , information extraction and ordering ( Allan , 2002 ) , information retrieval ( Dias et al . , 2007 ) , and intelligent user interfaces ( Dredze et al . , 2008 ) . While extensive research has been conducted in topic segmentation for monologues ( e.g. , ( Malioutov and Barzilay , 2006 ) , ( Choi et al . , 2001 ) ) and synchronous dialogs ( e.g. , ( Galley et al . , 2003 ) , ( Hsueh et al . , 2006 ) ) , none has studied the problem of segmenting asynchronous multi-party conversations ( e.g. , email ) . Therefore , there is no reliable annotation scheme , no standard corpus , and no agreedupon metrics available . Also , it is our key hypothesis that , because of its asynchronous nature , and the use of quotation ( Crystal , 2001 ) , topics in an email thread often do not change in a sequential way . As a result , we do not expect models which have proved successful in monologue or dialog to be as effective when they are applied to email conversations . Our contributions in this paper aim to remedy these problems . First , we present an email corpus annotated with topics and evaluate annotator agreement . Second , we adopt a set of metrics to measure the local and global structural similarity between two annotations from the work on multi-party chat disentanglement ( Elsner and Charniak , 2008 ) . Third , we show how the two state-of-the-art topic segmentation methods ( i.e. , LCSeg and LDA ) which are solely based on lexical information and make strong assumptions on the resulting topic models , can be effectively applied to emails , by having them to consider , in a principled way , a finer level structure of the underlying conversations . Experimental results show that both LCSeg and LDA benefit when they are extended to consider the conversational structure . When comparing the two methods , we found that LCSeg is better than LDA and this advantage is preserved when they are extended to incorporate conversational structure . In this paper we presented an email corpus annotated for topic segmentation . We extended LDA and LC-Seg models by incorporating the fragment quotation graph , a fine-grain model of the conversation , which is based on the analysis of quotations . 
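As a rough illustration of the fragment quotation graph just mentioned, the sketch below is a toy approximation rather than the authors' implementation: it splits an email body into fragments by quotation depth, counted from leading '>' markers, and links each unquoted fragment to every quoted fragment in the same body; the function names and the edge-creation rule are assumptions.

def quotation_depth(line):
    """Count leading '>' markers on one line of an email body."""
    depth = 0
    for ch in line:
        if ch == ">":
            depth += 1
        elif ch not in " \t":
            break
    return depth

def fragment_quotation_graph(email_body):
    """Group contiguous lines of equal quotation depth into fragments and add an
    edge from every unquoted (depth-0) fragment to every quoted fragment."""
    fragments = []
    current_depth, current_lines = None, []
    for line in email_body.splitlines():
        depth = quotation_depth(line)
        if depth != current_depth and current_lines:
            fragments.append((current_depth, " ".join(current_lines)))
            current_lines = []
        current_depth = depth
        current_lines.append(line.lstrip("> \t"))
    if current_lines:
        fragments.append((current_depth, " ".join(current_lines)))
    edges = [(i, j) for i, (di, _) in enumerate(fragments)
                    for j, (dj, _) in enumerate(fragments) if di == 0 and dj > 0]
    return fragments, edges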
Empirical evaluation shows that the fragment quotation graph helps both these models to perform significantly better than their basic versions , with LCSeg+FQG being the best performer .", "challenge": "Because of the absence of a corpus, scheme or metrics, there is no work on topic segmentation for asynchronous multi-party conversations.", "approach": "They present the first email corpus with topic annotations and evaluation metrics, then evaluate existing methods and their extensions that utilize conversation structure features.", "outcome": "The existing LDA-based methods with conversation structure features perform well on the new email topic segmentation corpus."} +{"id": "2021.emnlp-main.424", "document": "Scientific literature analysis needs fine-grained named entity recognition ( NER ) to provide a wide range of information for scientific discovery . For example , chemistry research needs to study dozens to hundreds of distinct , fine-grained entity types , making consistent and accurate annotation difficult even for crowds of domain experts . On the other hand , domain-specific ontologies and knowledge bases ( KBs ) can be easily accessed , constructed , or integrated , which makes distant supervision realistic for fine-grained chemistry NER . In distant supervision , training labels are generated by matching mentions in a document with the concepts in the knowledge bases ( KBs ) . However , this kind of KB-matching suffers from two major challenges : incomplete annotation and noisy annotation . We propose CHEMNER , an ontologyguided , distantly-supervised method for finegrained chemistry NER to tackle these challenges . It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation . It significantly improves the distant label generation for the subsequent sequence labeling model training . We also provide an expertlabeled , chemistry NER dataset with 62 finegrained chemistry types ( e.g. , chemical compounds and chemical reactions ) . Experimental results show that CHEMNER is highly effective , outperforming substantially the stateof-the-art NER methods ( with .25 absolute F1 score improvement ) . Named entity recognition ( NER ) is a fundamental step in scientific literature analysis to build AI-driven systems for molecular discovery , synthetic strategy designing , and manufacturing ( Xie et al . , 2013 ; Szklarczyk et al . , 2015 ; Huang et al . , 2015 ; Szklarczyk et al . , 2017 ; de Almeida et al . , 2019 ) . It aims to locate and classify entity mentions ( e.g. , \" Suzuki-Miyaura cross-coupling reactions \" ) from unstructured text into pre-defined categories ( e.g. , \" coupling reactions \" ) . In the chemistry domain , previous NER studies are mostly focused on one coarse-grained entity type ( i.e. , chemicals ) ( Krallinger et al . , 2015 ; He et al . , 2020 ; Watanabe et al . , 2019 ) and rely on large amounts of manuallyannotated data for training deep learning models ( Chiu and Nichols , 2016 ; Ma and Hovy , 2016 ; Lample et al . , 2016 ; Wang et al . , 2019b ; Devlin et al . , 2019 ; Liu et al . , 2019 ) . In real-world applications , it is important to recognize chemistry entities on diverse and finegrained types ( e.g. , \" inorganic phophorus compounds \" , \" coupling reactions \" and \" catalysts \" ) to provide a wide range of information for scientific discovery . 
It will need dozens to hundreds of distinct types , making consistent and accurate annotation difficult even for domain experts . On the other hand , the domain-specific ontologies and knowledge bases ( KBs ) can be easily accessed , constructed , or integrated , which makes distant supervision realistic for fine-grained chemistry NER . Still , challenges exist for correctly recognizing the entity boundaries and accurately typing entities with distant supervision . In distant supervision , training labels are generated by matching the mentions in a document with the concepts in the knowledge bases ( KBs ) . However , this kind of KB-matching suffers from two major challenges : ( 1 ) incomplete annotation where a mention in a document can be matched only partially or missed completely due to an incomplete coverage of the KBs ( Figure 1a ) , and ( 2 ) noisy annotation where a mention can be erroneously matched due to the potential matching of multiple entity types in the KBs ( Figure 1b ) . Due to the complex name structures ( e.g. , nested naming structures and long chemical formulas ) of chemical entities , these challenges lead to severe low-precision and low-recall for finegrained chemistry NER with distant supervision . Several studies have attempted to address the incomplete annotation problem in distantlysupervised NER . For example , AutoNER ( Shang et al . , 2018b ) introduces an \" unknown \" type that can be skipped during training to reduce the effect of false negative labeling with distant supervision . BOND ( Liang et al . , 2020 ) leverages the power of pre-trained language models and a self-training approach to iteratively incorporate more training labels and improve the NER performance . However , previous methods assume a high precision and reasonable coverage of KB-matching for distant label generation . For example , the KB-matching on the CoNLL03 dataset ( Liang et al . , 2020 ) reported over 80 % on precision and over 60 % on recall . These methods do not work well with fine-grained chemistry NER that has severe low precision and low recall with KB-matching . Previous studies also largely ignore the noisy annotation problem by simply discarding those multi-labels during the KBmatching process ( Liang et al . , 2020 ) . However , the noisy labels can not be simply ignored for the chemistry entities because they consist of a large portion of distant training labels . We observe that more than 60 % of the entities have multiple labels during KB-matching in the chemistry domain . We propose CHEMNER , an ontology-guided , distantly-supervised NER method for fine-grained chemistry NER . Taking an input corpus , a chemistry type ontology and associated entity dictionaries collected from the KBs , we develop a novel flexible KB-matching method with TF-IDF-based majority voting to resolve the incomplete annota-tion problem . Then we develop a novel ontologyguided multi-type disambiguation method to resolve the noisy annotation problem . Taking the output from the above two steps as distant supervision , we further train a sequence labeling model to cover additional entities . CHEMNER significantly improves the distant label generation for the subsequent NER model training . We also provide an expert-labeled , chemistry NER dataset with 62 finegrained chemistry types ( e.g. , chemical compounds and chemical reactions ) . 
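A minimal sketch of the distant-labelling step described above, under the assumption that the ontology and KBs have been flattened into a dictionary mapping lower-cased mentions to candidate types; the greedy longest match and the frequency-based tie-break below merely stand in for the paper's flexible KB-matching and ontology-guided disambiguation, so this is an approximation, not the CHEMNER procedure itself.

from collections import Counter

def distant_labels(tokens, kb, max_len=6):
    """Greedy longest-match labelling of a token list against a KB dictionary that
    maps a lower-cased surface string to a list of candidate entity types."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        hit = None
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n]).lower()
            if span in kb:
                hit = (n, kb[span])
                break
        if hit is None:
            i += 1
            continue
        n, candidate_types = hit
        # Noisy multi-type mentions: pick the most frequent candidate as a crude
        # stand-in for ontology-guided disambiguation.
        best = Counter(candidate_types).most_common(1)[0][0]
        labels[i] = "B-" + best
        for k in range(i + 1, i + n):
            labels[k] = "I-" + best
        i += n
    return labels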
Experimental results show that CHEMNER is highly effective , achieving substantially better performance ( with .25 absolute F1 score improvement ) compared with the state-ofthe-art NER methods . We have released our data and code to benefit future studies1 . We propose CHEMNER , an ontology-guided , distantly-supervised method for fine-grained chemistry NER . It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation . We also provide an expert labeled , chemistry NER dataset with 62 finegrained chemistry types ( e.g. , chemical compounds and chemical reactions ) . Experimental results show that CHEMNER is highly effective , outperforming substantially the state-of-the-art NER methods on fine-grained chemistry NER . Although achieving great performance , there is still large room for improvement of CHEMNER . In the future , we plan to further refine and enrich the type ontology and incorporate more information in the dictionaries ( e.g. , chemical structures in the KBs ) for a better NER performance . We also plan to apply our finegrained NER method to other scientific domains .", "challenge": "Existing distant supervision-based knowledge based-matching methods for the chemistry domain suffer from incomplete and noisy annotations due to the complex name structures in the domain.", "approach": "They propose to use type ontology structures to generate distant labels and apply a multi-type disambiguation method to resolve the noisy annotation problem.", "outcome": "The proposed method improves the distant label generation for the subsequent model training and outperforms the state-of-the-art NER methods on the newly expert-labelled chemistry dataset."} +{"id": "N10-1016", "document": "Constrained decoding is of great importance not only for speed but also for translation quality . Previous efforts explore soft syntactic constraints which are based on constituent boundaries deduced from parse trees of the source language . We present a new framework to establish soft constraints based on a more natural alternative : translation boundary rather than constituent boundary . We propose simple classifiers to learn translation boundaries for any source sentences . The classifiers are trained directly on word-aligned corpus without using any additional resources . We report the accuracy of our translation boundary classifiers . We show that using constraints based on translation boundaries predicted by our classifiers achieves significant improvements over the baseline on large-scale Chinese-to-English translation experiments . The new constraints also significantly outperform constituent boundary based syntactic constrains . It has been known that phrase-based decoding ( phrase segmentation / translation / reordering ( Chiang , 2005 ) ) should be constrained to some extent not only for transferring the NP-hard problem ( Knight , 1999 ) into a tractable one in practice but also for improving translation quality . For example , Xiong et al . ( 2008 ) find that translation quality can be significantly improved by either prohibiting reorderings around punctuation or restricting reorderings within a 15-word window . Recently , more linguistically motivated constraints are introduced to improve phrase-based decoding . ( Cherry , 2008 ) and ( Marton and Resnik , 2008 ) introduce syntactic constraints into the standard phrase-based decoding ( Koehn et al . 
, 2003 ) and hierarchical phrase-based decoding ( Chiang , 2005 ) respectively by using a counting feature which accumulates whenever hypotheses violate syntactic boundaries of source-side parse trees . ( Xiong et al . , 2009 ) further presents a bracketing model to include thousands of context-sensitive syntactic constraints . All of these approaches achieve their improvements by guiding the phrase-based decoder to prefer translations which respect source-side parse trees . One major problem with such constituent boundary based constraints is that syntactic structures of the source language do not necessarily reflect translation structures where the source and target language correspond to each other . In this paper , we investigate building classifiers that directly address the problem of translation boundary , rather than extracting constituent boundary from sourceside parsers built for a different purpose . A translation boundary is a position in the source sequence which begins or ends a translation zone1 spanning multiple source words . In a translation zone , the source phrase is translated as a unit . Reorderings which cross translation zones are not desirable . Inspired by ( Roark and Hollingshead , 2008 ) which introduces classifiers to decide if a word can begin / end a multi-word constituent , we build two discriminative classifiers to tag each word in the source sequence with a binary class label . The first classifier decides if a word can begin a multi-sourceword translation zone ; the second classifier decides if a word can end a multi-source-word translation zone . Given a partial translation covering source sequence ( i , j ) with start word c i and end word c j2 , this translation can be penalized if the first classifier decides that the start word c i can not be a beginning translation boundary or the second classifier decides that the end word c j can not be an ending translation boundary . In such a way , we can guide the decoder to boost hypotheses that respect translation boundaries and therefore the common translation structure shared by the source and target language , rather than the syntactic structure of the source language . We report the accuracy of such classifiers by comparing their outputs with \" gold \" translation boundaries obtained from reference translations on the development set . We integrate translation boundary based constraints into phrase-based decoding and display that they improve translation quality significantly in large-scale experiments . Furthermore , we confirm that they also significantly outperform constituent boundary based syntactic constraints . In this paper , we have presented a simple approach to learn translation boundaries on source sentences . The learned translation boundaries are used to constrain phrase-based decoding in a soft manner . The whole approach has several properties . \u2022 First , it is based on a simple classification task that can achieve considerably high accuracy when taking translation divergences into account using simple models and features . \u2022 Second , the classifier output can be straightforwardly used to constrain phrase-based decoder . \u2022 Finally , we have empirically shown that , to build soft constraints for phrase-based decoding , translation boundary predicted by our classifier is a better choice than constituent boundary deduced from source-side parse tree . 
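To illustrate how the classifier output described above can be used as a soft constraint during decoding, here is a small hypothetical sketch; the begin_prob and end_prob callables, the threshold and the feature weight are all assumptions standing in for the trained boundary classifiers and a tuned log-linear weight, so this shows the counting-penalty idea rather than the exact model.

def boundary_penalty(begin_prob, end_prob, i, j, threshold=0.5):
    """Penalty for a partial translation covering source span [i, j] (inclusive).
    begin_prob / end_prob give P(position can begin / end a multi-word zone)."""
    penalty = 0.0
    if begin_prob(i) < threshold:
        penalty += 1.0  # span starts at an unlikely beginning boundary
    if end_prob(j) < threshold:
        penalty += 1.0  # span ends at an unlikely ending boundary
    return penalty

def rescore(base_score, covered_spans, begin_prob, end_prob, weight=-0.5):
    """Soft constraint: add a weighted count of boundary violations to the model score."""
    violations = sum(boundary_penalty(begin_prob, end_prob, i, j) for i, j in covered_spans)
    return base_score + weight * violations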
Future work in this direction will involve trying different methods to define more informative translation boundaries , such as a boundary to begin / end a swapping . We would also like to investigate new methods to incorporate automatically learned translation boundaries more efficiently into decoding in an attempt to further improve search in both speed and accuracy .", "challenge": "Existing methods guide phrase-based decoders to prefer translations which respect source-side parse trees however syntactic structures of the source do not necessarily reflect translation structure.", "approach": "They propose to use binary classifiers trained on the word-aligned corpus to predict translation boundaries for source sentences and use them as soft constraints.", "outcome": "The proposed classifiers outperform ones with access to gold translation boundaries and a phrase-based translation model with the proposed constraints improves translation quality."} +{"id": "D09-1016", "document": "Information Extraction ( IE ) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it . Often , however , role fillers occur in clauses that are not directly linked to an event word . We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework . Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences . We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context . Information Extraction ( IE ) systems typically use extraction patterns ( e.g. , Soderland et al . ( 1995 ) , Riloff ( 1996 ) , Yangarber et al . ( 2000 ) , Califf and Mooney ( 2003 ) ) or classifiers ( e.g. , Freitag ( 1998 ) , Freitag and McCallum ( 2000 ) , Chieu et al . ( 2003 ) , Bunescu and Mooney ( 2004 ) ) to extract role fillers for events . Most IE systems consider only the immediate context surrounding a phrase when deciding whether to extract it . For tasks such as named entity recognition , immediate context is usually sufficient . But for more complex tasks , such as event extraction , a larger field of view is often needed to understand how facts tie together . Most IE systems are designed to identify role fillers that appear as arguments to event verbs or nouns , either explicitly via syntactic relations or implicitly via proximity ( e.g. , John murdered Tom or the murder of Tom by John ) . But many facts are presented in clauses that do not contain event words , requiring discourse relations or deep structural analysis to associate the facts with event roles . For example , consider the sentences below : Seven people have died . . . and 30 were injured in India after terrorists launched an attack on the Taj Hotel . . . . in Mexico City and its surrounding suburbs in a Swine Flu outbreak . . . . after a tractor-trailer collided with a bus in Arkansas . Two bridges were destroyed . . . in Baghdad last night in a resurgence of bomb attacks in the capital city . . . . and $ 50 million in damage was caused by a hurricane that hit Miami on Friday . . . . to make way for modern , safer bridges that will be constructed early next year . 
These examples illustrate a common phenomenon in text where information is not explicitly stated as filling an event role , but readers have no trouble making this inference . The role fillers above ( seven people , two bridges ) occur as arguments to verbs that reveal state information ( death , destruction ) but are not event-specific ( i.e. , death and destruction can result from a wide variety of incident types ) . IE systems often fail to extract these role fillers because these systems do not recognize the immediate context as being relevant to the specific type of event that they are looking for . We propose a new model for information extraction that incorporates both phrasal and sentential evidence in a unified framework . Our unified probabilistic model , called GLACIER , consists of two components : a model for sentential event recognition and a model for recognizing plausible role fillers . The Sentential Event Recognizer offers a probabilistic assessment of whether a sentence is discussing a domain-relevant event . The Plausible Role-Filler Recognizer is then conditioned to identify phrases as role fillers based upon the assumption that the surrounding context is discussing a relevant event . This unified probabilistic model allows the two components to jointly make decisions based upon both the local evidence surrounding each phrase and the \" peripheral vision \" afforded by the sentential event recognizer . This paper is organized as follows . Section 2 positions our research with respect to related work . Section 3 presents our unified probabilistic model for information extraction . Section 4 shows experimental results on two IE data sets , and Section 5 discusses directions for future work . We presented a unified model for IE that balances the influence of sentential context with local contextual evidence to improve the performance of event-based IE . Our experimental results showed that using sentential contexts indeed produced better results on two IE data sets . Our current model uses supervised learning , so one direction for future work is to adapt the model for weakly supervised learning . We also plan to incorporate discourse features and investigate even wider contexts to capture broader discourse effects .", "challenge": "Information extraction systems cannot extract role filters which are arguments to verbs but are not event-specific because they do not recognize the contexts as relevant.", "approach": "They propose a unified probabilistic model composed of sentential event recognition and plausible role filter recognition models which run both on phrasal and sentence levels.", "outcome": "The proposed unified model outperforms the existing systems that rely on the local phrasal context of two information extraction datasets."} +{"id": "P11-2027", "document": "This work introduces AM-FM , a semantic framework for machine translation evaluation . Based upon this framework , a new evaluation metric , which is able to operate without the need for reference translations , is implemented and evaluated . The metric is based on the concepts of adequacy and fluency , which are independently assessed by using a cross-language latent semantic indexing approach and an n-gram based language model approach , respectively . Comparative analyses with conventional evaluation metrics are conducted on two different evaluation tasks ( overall quality assessment and comparative ranking ) over a large collection of human evaluations involving five European languages . 
Finally , the main pros and cons of the proposed framework are discussed along with future research directions . Evaluation has always been one of the major issues in Machine Translation research , as both human and automatic evaluation methods exhibit very important limitations . On the one hand , although highly reliable , in addition to being expensive and time consuming , human evaluation suffers from inconsistency problems due to inter-and intraannotator agreement issues . On the other hand , while being consistent , fast and cheap , automatic evaluation has the major disadvantage of requiring reference translations . This makes automatic evaluation not reliable in the sense that good translations not matching the available references are evaluated as poor or bad translations . The main objective of this work is to propose and evaluate AM-FM , a semantic framework for assessing translation quality without the need for reference translations . The proposed framework is theoretically grounded on the classical concepts of adequacy and fluency , and it is designed to account for these two components of translation quality in an independent manner . First , a cross-language latent semantic indexing model is used for assessing the adequacy component by directly comparing the output translation with the input sentence it was generated from . Second , an n-gram based language model of the target language is used for assessing the fluency component . Both components of the metric are evaluated at the sentence level , providing the means for defining and implementing a sentence-based evaluation metric . Finally , the two components are combined into a single measure by implementing a weighted harmonic mean , for which the weighting factor can be adjusted for optimizing the metric performance . The rest of the paper is organized as follows . Section 2 , presents some background work and the specific dataset that has been used in the experimental work . Section 3 , provides details on the proposed AM-FM framework and the specific metric implementation . Section 4 presents the results of the conducted comparative evaluations . Finally , section 5 presents the main conclusions and relevant issues to be dealt with in future research . This work presented AM-FM , a semantic framework for translation quality assessment . Two comparative evaluations with standard metrics have been conducted over a large collection of humangenerated scores involving different languages . Although the obtained performance is below standard metrics , the proposed method has the main advantage of not requiring reference translations . Notice that a monolingual version of AM-FM is also possible by using monolingual latent semantic indexing ( Landauer et al . , 1998 ) along with a set of reference translations . A detailed evaluation of a monolingual implementation of AM-FM can be found in Banchs and Li ( 2011 ) . As future research , we plan to study the impact of different dataset sizes and vector space model parameters for improving the performance of the AM component of the metric . This will include the study of learning curves based on the amount of training data used , and the evaluation of different vector model construction strategies , such as removing stop-words and considering bigrams and word categories in addition to individual words . 
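The weighted harmonic mean used to combine the two components can be written down directly. The sketch below assumes both the adequacy score (AM) and the fluency score (FM) have already been normalised to the [0, 1] range, which is an assumption of this illustration; alpha is the adjustable weighting factor mentioned above.

def am_fm(am, fm, alpha=0.5):
    """Weighted harmonic mean of adequacy (am) and fluency (fm), both in [0, 1];
    alpha weights the adequacy component and 1 - alpha the fluency component."""
    if am <= 0.0 or fm <= 0.0:
        return 0.0
    return 1.0 / (alpha / am + (1.0 - alpha) / fm)

print(am_fm(0.8, 0.4, alpha=0.6))  # an adequate but disfluent output is pulled toward the lower score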
Finally , we also plan to study alternative uses of AM-FM within the context of statistical machine translation as , for example , a metric for MERT optimization , or using the AM component alone as an additional feature for decoding , rescoring and/or confidence estimation .", "challenge": "For evaluating machine translation systems, manual evaluation is costly and inconsistent and the automatic counterpart is unreliable because it requires references.", "approach": "They propose a new evaluation framework which assesses adequacy and fluency by using a cross-language latent indexing approach and n-gram based language modelling without references.", "outcome": "The proposed method underperforms two standard metrics on a large collection in different languages and shows however it does not require reference translations."} +{"id": "2021.acl-long.481", "document": "Video Question Answering is a task which requires an AI agent to answer questions grounded in video . This task entails three key challenges : ( 1 ) understand the intention of various questions , ( 2 ) capturing various elements of the input video ( e.g. , object , action , causality ) , and ( 3 ) cross-modal grounding between language and vision information . We propose Motion-Appearance Synergistic Networks ( MASN ) , which embed two crossmodal features grounded on motion and appearance information and selectively utilize them depending on the question 's intentions . MASN consists of a motion module , an appearance module , and a motion-appearance fusion module . The motion module computes the action-oriented cross-modal joint representations , while the appearance module focuses on the appearance aspect of the input video . Finally , the motion-appearance fusion module takes each output of the motion module and the appearance module as input , and performs question-guided fusion . As a result , MASN achieves new state-of-the-art performance on the TGIF-QA and MSVD-QA datasets . We also conduct qualitative analysis by visualizing the inference results of MASN . The code is available at https://github.com/ ahjeongseo / MASN-pytorch . Recently , research in natural language processing and computer vision has made significant progress in artificial intelligence ( AI ) . Thanks to this , visionlanguage tasks such as image captioning ( Xu et al . , 2015 ) , visual question answering ( VQA ) ( Antol et al . , 2015 ; Goyal et al . , 2017 ) , and visual commonsense reasoning ( VCR ) ( Zellers et al . , 2019 ) have been introduced to the research community , along with some benchmark datasets . In particular , video question answering ( video QA ) tasks ( Xu et al . , 2016 ; Jang et al . , 2017 ; Lei et al . , 2018 ; Yu et al . , 2019 ; Choi et al . , 2020 ) have been proposed with the goal of reasoning over higher-level visionlanguage interactions . In contrast to QA tasks based on static images , the questions presented in the video QA dataset vary from frame-level questions regarding the appearance of objects ( e.g. , what is the color of the hat ? ) to questions regarding action and causality ( e.g. , what does the man do after opening a door ? ) . There are three crucial challenges in video QA : ( 1 ) understand the intention of various questions , ( 2 ) capturing various elements of the input video ( e.g. , object , action , and causality ) , and ( 3 ) crossmodal grounding between language and vision information . To tackle these challenges , previous studies ( Li et al . , 2019 ; Jiang et al . , 2020 ; Huang et al . 
, 2020 ) have mainly explored this task by jointly embedding the features from the pre-trained word embedding model ( Pennington et al . , 2014 ) and the object detection models ( He et al . , 2016 ; Ren et al . , 2016 ) . However , as discussed in ( Gao et al . , 2018 ) , the use of the visual features extracted from the object detection models suffers from motion analysis since the object detection model lacks temporal modeling . To enforce the motion analysis , a few approaches ( Xu et al . , 2017 ; Gao et al . , 2018 ) have employed additional visual features ( Tran et al . , 2015 ) ( i.e. , motion features ) which were widely used in the action recognition domain , but their reasoning capability is still limited . They typically employed recurrent models ( e.g. , LSTM ) to embed a long sequence of the visual features . Due to the problem of long-term dependency in recurrent models ( Bengio et al . , 1993 ) , their proposed methods may fail to learn dependencies between distant features . In this paper , we propose Motion-Appearance Synergistic Networks ( MASN ) for video question answering which consist of three kinds of modules : the motion module , the appearance module , and the motion-appearance fusion module . Figure caption : The output from the fusion module is used to derive answers . For question features , the word-level representation F Q is integrated with the visual features in the VQ interaction submodule . The last hidden units q from the bi-LSTM are used to combine appearance and motion features . As shown in Figure 1 , the motion module and the appearance module aim to embed rich cross-modal representations . These two modules have the same architecture except that the motion module takes the motion features extracted from I3D as visual features and the appearance module utilizes the appearance features extracted from ResNet . Each of these modules first constructs the object graphs via graph convolutional networks ( GCN ) to compute the relationships among objects in each visual feature . Then , the vision-question interaction module performs cross-modal grounding between the output of the GCNs and the question features . The motion module and the appearance module each yield cross-modal representations of the motion and the appearance aspects of the input video respectively . The motion-appearance fusion module finally integrates these two features based on the question features . The main contributions of our paper are as follows . First , we propose Motion-Appearance Synergistic Networks ( MASN ) for video question answering based on three modules , the motion module , the appearance module , and the motion-appearance fusion module . Second , we validate MASN on the large-scale video question answering datasets TGIF-QA , MSVD-QA , and MSRVTT-QA . MASN achieves the new state-of-the-art performance on TGIF-QA and MSVD-QA . We perform ablation studies to validate the effectiveness of our proposed methods . Finally , we conduct a qualitative analysis of MASN by visualizing inference results . In this paper , we proposed a Motion-Appearance Synergistic Networks to fuse and create a synergy between motion and appearance features . Through the Motion and Appearance modules , MASN manages to find motion and appearance clues to solve the question , while modulating the amount of information used of each type through the Fusion module . 
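A toy sketch of the question-guided fusion idea just described: under the simplifying assumption that each module emits a single vector and that the gate is a logistic function of a learned projection of the question representation, the mixing looks as follows; the actual fusion module is more elaborate, so w_q, the sigmoid gate and the toy dimensions are illustrative assumptions.

import numpy as np

def question_guided_fusion(motion_vec, appearance_vec, question_vec, w_q, b_q=0.0):
    """Mix motion and appearance representations with a scalar gate computed from
    the question representation (sigmoid of a linear projection)."""
    gate = 1.0 / (1.0 + np.exp(-(np.dot(w_q, question_vec) + b_q)))
    return gate * motion_vec + (1.0 - gate) * appearance_vec

rng = np.random.default_rng(0)
motion, appearance, question = rng.normal(size=(3, 8))  # toy 8-dimensional vectors
w_q = rng.normal(size=8)                                # hypothetical learned projection
fused = question_guided_fusion(motion, appearance, question, w_q)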
Experimental results on three benchmark datasets show the effectiveness of our proposed MASN architecture compared to other models .", "challenge": "Existing methods for Video Question Answering lack temporal modelling, reasoning capability, or an ability to learn dependencies between distant features.", "approach": "They propose a motion module and an appearance module which embeds cross-modal representations and a motion-appearance fusion module to integrate them based on question features.", "outcome": "The proposed method achieves the new state-of-the-art performance on TGIF-QA and MSVD-QA datasets and they also present qualitative analysis with visualizations."} +{"id": "D11-1132", "document": "This paper describes a novel approach to the semantic relation detection problem . Instead of relying only on the training instances for a new relation , we leverage the knowledge learned from previously trained relation detectors . Specifically , we detect a new semantic relation by projecting the new relation 's training instances onto a lower dimension topic space constructed from existing relation detectors through a three step process . First , we construct a large relation repository of more than 7,000 relations from Wikipedia . Second , we construct a set of non-redundant relation topics defined at multiple scales from the relation repository to characterize the existing relations . Similar to the topics defined over words , each relation topic is an interpretable multinomial distribution over the existing relations . Third , we integrate the relation topics in a kernel function , and use it together with SVM to construct detectors for new relations . The experimental results on Wikipedia and ACE data have confirmed that backgroundknowledge-based topics generated from the Wikipedia relation repository can significantly improve the performance over the state-of-theart relation detection approaches . Detecting semantic relations in text is very useful in both information retrieval and question answering because it enables knowledge bases to be leveraged to score passages and retrieve candidate answers . To extract semantic relations from text , three types of approaches have been applied . Rule-based methods ( Miller et al . , 2000 ) employ a number of linguistic rules to capture relation patterns . Featurebased methods ( Kambhatla , 2004 ; Zhao and Grishman , 2005 ) transform relation instances into a large amount of linguistic features like lexical , syntactic and semantic features , and capture the similarity between these feature vectors . Recent results mainly rely on kernel-based approaches . Many of them focus on using tree kernels to learn parse tree structure related features ( Collins and Duffy , 2001 ; Culotta and Sorensen , 2004 ; Bunescu and Mooney , 2005 ) . Other researchers study how different approaches can be combined to improve the extraction performance . For example , by combining tree kernels and convolution string kernels , ( Zhang et al . , 2006 ) achieved the state of the art performance on ACE ( ACE , 2004 ) , which is a benchmark dataset for relation extraction . Although a large set of relations have been identified , adapting the knowledge extracted from these relations for new semantic relations is still a challenging task . Most of the work on domain adaptation of relation detection has focused on how to create detectors from ground up with as little training data as possible through techniques such as bootstrapping ( Etzioni et al . , 2005 ) . 
We take a different approach , focusing on how the knowledge extracted from the existing relations can be reused to help build detectors for new relations . We believe by reusing knowledge one can build a more cost effective relation detector , but there are several challenges associated with reusing knowledge . The first challenge to address in this approach is how to construct a relation repository that has suffi-cient coverage . In this paper , we introduce a method that automatically extracts the knowledge characterizing more than 7,000 relations from Wikipedia . Wikipedia is comprehensive , containing a diverse body of content with significant depth and grows rapidly . Wikipedia 's infoboxes are particularly interesting for relation extraction . They are short , manually-created , and often have a relational summary of an article : a set of attribute / value pairs describing the article 's subject . Another challenge is how to deal with overlap of relations in the repository . For example , Wikipedia authors may make up a name when a new relation is needed without checking if a similar relation has already been created . This leads to relation duplication . We refine the relation repository based on an unsupervised multiscale analysis of the correlations between existing relations . This method is parameter free , and able to produce a set of non-redundant relation topics defined at multiple scales . Similar to the topics defined over words ( Blei et al . , 2003 ) , we define relation topics as multinomial distributions over the existing relations . The relation topics extracted in our approach are interpretable , orthonormal to each other , and can be used as basis relations to re-represent the new relation instances . The third challenge is how to use the relation topics for a relation detector . We map relation instances in the new domains to the relation topic space , resulting in a set of new features characterizing the relationship between the relation instances and existing relations . By doing so , background knowledge from the existing relations can be introduced into the new relations , which overcomes the limitations of the existing approaches when the training data is not sufficient . Our work fits in to a class of relation extraction research based on \" distant supervision \" , which studies how knowledge and resources external to the target domain can be used to improve relation extraction . ( Mintz et al . , 2009 ; Jiang , 2009 ; Chan and Roth , 2010 ) . One distinction between our approach and other existing approaches is that we represent the knowledge from distant supervision using automatically constructed topics . When we test on new instances , we do not need to search against the knowledge base . In addition , our topics also model the indirect relationship between relations . Such information can not be directly found from the knowledge base . The contributions of this paper are three-fold . Firstly , we extract a large amount of training data for more than 7,000 semantic relations from Wikipedia ( Wikipedia , 2011 ) and DBpedia ( Auer et al . , 2007 ) . A key part of this step is how we handle noisy data with little human effort . Secondly , we present an unsupervised way to construct a set of relation topics at multiple scales . This step is parameter free , and results in a nonredundant , multiscale relation topic space . Thirdly , we design a new kernel for relation detection by integrating the relation topics into the relation detector construction . 
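A compact sketch of the projection-and-kernel step described above, assuming the relation topics are given as an orthonormal matrix whose rows are distributions over the existing relations and that every new instance has already been scored against those relations; the linear kernel over the projected coordinates is one simple choice and not necessarily the exact kernel of the paper.

import numpy as np

def project_to_topics(instance_scores, topic_matrix):
    """instance_scores: (n_instances, n_existing_relations) scores of new instances
    against the existing relations; topic_matrix: (n_topics, n_existing_relations)
    with orthonormal rows. Returns coordinates in the relation-topic space."""
    return instance_scores @ topic_matrix.T

def topic_kernel(scores_a, scores_b, topic_matrix):
    """Linear kernel between two instance sets after projection; usable with an SVM."""
    a = project_to_topics(scores_a, topic_matrix)
    b = project_to_topics(scores_b, topic_matrix)
    return a @ b.T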
The experimental results on Wikipedia and ACE data ( ACE , 2004 ) have confirmed that background-knowledge-based features generated from the Wikipedia relation repository can significantly improve the performance over the state-of-the-art relation detection approaches . This paper proposes a novel approach to create detectors for new relations integrating the knowledge extracted from the existing relations . The contributions of this paper are three-fold . Firstly , we pro- vide an automatic way to collect training data for more than 7,000 relations from Wikipedia and DBpedia . Secondly , we present an unsupervised way to construct a set of relation topics at multiple scales . Different from the topics defined over words , relation topics are defined over the existing relations . Thirdly , we design a new kernel for relation detection by integrating the relation topics in the representation of the relation instances . By leveraging the knowledge extracted from the Wikipedia relation repository , our approach significantly improves the performance over the state-of-the-art approaches on ACE data . This paper makes use of all DBpedia relations to create relation topics . It is possible that using a subset of them ( more related to the target relations ) might improve the performance . We will explore this in future work .", "challenge": "Semantic relation detection is useful for information retrieval and question answering however adapting the knowledge extracted for new semantic relations remains as a challenge.", "approach": "They propose to create detectors for new relations from existing relations by projecting new instances onto a lower topic space constructed from existing relation extractors.", "outcome": "The proposed background-knowledge-based features generated from the Wikipedia relation repository outperform the state-of-the-art approaches on Wikipedia and ACE data."} \ No newline at end of file
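Since the records above are stored one JSON object per line, a split in this format can be read back with a few lines of standard-library code; the load_split function and the example path are illustrative assumptions about how the file might be consumed, not part of the dataset release.

import json

def load_split(path):
    """Read one JSON object per line; each record carries the id, document,
    challenge, approach and outcome fields."""
    records = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# records = load_split("path/to/test.jsonl")  # hypothetical location of this split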