{"ID":"miller-etal-2012-using","url":"https:\/\/aclanthology.org\/C12-1109.pdf","title":"Using Distributional Similarity for Lexical Expansion in Knowledge-based Word Sense Disambiguation","abstract":"We explore the contribution of distributional information for purely knowledge-based word sense disambiguation. Specifically, we use a distributional thesaurus, computed from a large parsed corpus, for lexical expansion of context and sense information. This bridges the lexical gap that is seen as the major obstacle for word overlap-based approaches. We apply this mechanism to two traditional knowledge-based methods and show that distributional information significantly improves disambiguation results across several data sets. This improvement exceeds the state of the art for disambiguation without sense frequency information-a situation which is especially encountered with new domains or languages for which no sense-annotated corpus is available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Richard Steuer for computing and providing us access to the distributional thesaurus.This work has been supported by the Hessian research excellence program Landes-Offensive zur Entwicklung Wissenschaftlich-\u00f6konomischer Exzellenz (LOEWE) as part of the research center Digital Humanities, and also by the Volkswagen Foundation as part of the Lichtenberg Professorship Program under grant N \u014d I\/82806.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2019-unsupervised","url":"https:\/\/aclanthology.org\/N19-1114.pdf","title":"Unsupervised Recurrent Neural Network Grammars","abstract":"Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs. Since directly marginalizing over the space of latent trees is intractable, we instead apply amortized variational inference. To maximize the evidence lower bound, we develop an inference network parameterized as a neural CRF constituency parser. On language modeling, unsupervised RNNGs perform as well their supervised counterparts on benchmarks in English and Chinese. On constituency grammar induction, they are competitive with recent neural language models that induce tree structures from words through attention mechanisms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the members of the DeepMind language team for helpful feedback. YK is supported by a Google Fellowship. AR is supported by NSF Career 1845664.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iyer-etal-2021-veealign","url":"https:\/\/aclanthology.org\/2021.emnlp-main.842.pdf","title":"VeeAlign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment","abstract":"Ontology Alignment is an important research problem applied to various fields such as data integration, data transfer, data preparation, etc. 
State-of-the-art (SOTA) Ontology Alignment systems typically use naive domain-dependent approaches with handcrafted rules or domain-specific architectures, making them unscalable and inefficient. In this work, we propose VeeAlign, a Deep Learning based model that uses a novel dual-attention mechanism to compute the contextualized representation of a concept which, in turn, is used to discover alignments. By doing this, not only is our approach able to exploit both syntactic and semantic information encoded in ontologies, it is also, by design, flexible and scalable to different domains with minimal effort. We evaluate our model on four different datasets from different domains and languages, and establish its superiority through these results as well as detailed ablation studies. The code and datasets used are available at https:\/\/github.com\/Remorax\/VeeAlign.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 825299 (GoURMET).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patra-etal-2013-automatic","url":"https:\/\/aclanthology.org\/W13-4104.pdf","title":"Automatic Music Mood Classification of Hindi Songs","abstract":"The popularity of internet, downloading and purchasing music from online music shops are growing dramatically. As an intimate relationship presents between music and human emotions, we often choose to listen a song that suits our mood at that instant. Thus, the automatic methods are needed to classify music by moods even from the uploaded music files in social networks. However, several studies on Music Information Retrieval (MIR) have been carried out in recent decades. In the present task, we have built a system for classifying moods of Hindi songs using different audio related features like rhythm, timber and intensity. Our dataset is composed of 230 Hindi music clips of 30 seconds that consist of five mood clusters. We have achieved an average accuracy of 51.56% for music mood classification on the above data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper is supported by a grant from the India-Japan Cooperative Programme (DST-JST) 2009 Research project entitled \"Sentiment Analysis where AI meets Psychology\" funded by Department of Science and Technology (DST), Government of India.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kazi-etal-2014-mitll","url":"https:\/\/aclanthology.org\/2014.iwslt-evaluation.8.pdf","title":"The MITLL-AFRL IWSLT 2014 MT system","abstract":"This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run using them during the 2014 IWSLT evaluation campaign. Our MT system is much improved over last year, owing to integration of techniques such as PRO and DREM optimization, factored language models, neural network joint model rescoring, multiple phrase tables, and development set creation. We focused our efforts this year on the tasks of translating from Arabic, Russian, Chinese, and Farsi into English, as well as translating from English to French. 
ASR performance also improved, partly due to increased efforts with deep neural networks for hybrid and tandem systems. Work focused on both the English and Italian ASR tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Tina May and Wahid Abdul Qudus for their efforts in spot-checking Chinese and Farsi dataset processing, respectively. We would also like to thank Kyle Wilkinson for creating the Italian pronunciation dictionary.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wandji-tchami-grabar-2014-towards","url":"https:\/\/aclanthology.org\/W14-4814.pdf","title":"Towards Automatic Distinction between Specialized and Non-Specialized Occurrences of Verbs in Medical Corpora","abstract":"The medical field gathers people of different social statuses, such as students, pharmacists, managers, biologists, nurses and mainly medical doctors and patients, who represent the main actors. Despite their different levels of expertise, these actors need to interact and understand each other but the communication is not always easy and effective. This paper describes a method for a contrastive automatic analysis of verbs in medical corpora, based on the semantic annotation of the verbs nominal co-occurents. The corpora used are specialized in cardiology and distinguished according to their levels of expertise (high and low). The semantic annotation of these corpora is performed by using an existing medical terminology. The results indicate that the same verbs occurring in the two corpora show different specialization levels, which are indicated by the words (nouns and adjectives derived from medical terms) they occur with.","label_nlp4sg":1,"task":["Automatic Distinction between Specialized and Non - Specialized Occurrences of Verbs"],"method":["contrastive automatic analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR) and the DGA, under the Tecsan grant ANR-11-TECS-012.","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hewavitharana-etal-2014-anticipatory","url":"https:\/\/aclanthology.org\/2014.iwslt-papers.11.pdf","title":"Anticipatory translation model adaptation for bilingual conversations","abstract":"Conversational spoken language translation (CSLT) systems facilitate bilingual conversations in which the two participants speak different languages. Bilingual conversations provide additional contextual information that can be used to improve the underlying machine translation system. In this paper, we describe a novel translation model adaptation method that anticipates a participant's response in the target language, based on his counterpart's prior turn in the source language. Our proposed strategy uses the source language utterance to perform cross-language retrieval on a large corpus of bilingual conversations in order to obtain a set of potentially relevant target responses. The responses retrieved are used to bias translation choices towards anticipated responses. 
On an Iraqi-to-English CSLT task, our method achieves a significant improvement over the baseline system in terms of BLEU, TER and METEOR metrics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their helpful feedback. This work was funded in part by the DARPA BOLT program under contract number HR0011-12-C-0014.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"andresen-etal-2020-modeling","url":"https:\/\/aclanthology.org\/2020.law-1.5.pdf","title":"Modeling Ambiguity with Many Annotators and Self-Assessments of Annotator Certainty","abstract":"Most annotation efforts assume that annotators will agree on labels, if the annotation categories are well-defined and documented in annotation guidelines. However, this is not always true. For instance, content-related questions such as 'Is this sentence about topic X?' are unlikely to elicit the same answer from all annotators. Additional specifications in the guidelines are helpful to some extent, but can soon get overspecified by rules that cannot be justified by a research question. In this study, we model the semantic category 'illness' and its use in a gradual way. For this purpose, we (i) ask many annotators (30 votes per item, 960 items) for their opinion in a crowdsourcing experiment, (ii) ask annotators to indicate their certainty with respect to their annotation, and (iii) compare this across two different text types. We show that results of multiple annotations and average annotator certainty correlate, but many ambiguities can only be captured if several people contribute. The annotated data allow us to filter for sentences with high or low agreement and analyze causes of disagreement, thus getting a better understanding of people's perception of illness-as an example of a semantic category-as well as of the content of our annotated texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work on this paper was funded by the Landesforschungsf\u00f6rderung Hamburg (LFF-FV 35) in the context of the project hermA (Gaidys et al., 2017) at Universit\u00e4t Hamburg and Hamburg University of Technology. We thank Piklu Gupta and Carla S\u00f6kefeld for proofreading. All remaining errors are our own.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zens-ney-2004-improvements","url":"https:\/\/aclanthology.org\/N04-1033.pdf","title":"Improvements in Phrase-Based Statistical Machine Translation","abstract":"In statistical machine translation, the currently best performing systems are based in some way on phrases or word groups. We describe the baseline phrase-based translation system and various refinements. We describe a highly efficient monotone search algorithm with a complexity linear in the input sentence length. We present translation results for three tasks: Verbmobil, Xerox and the Canadian Hansards. For the Xerox task, it takes less than 7 seconds to translate the whole test set consisting of more than 10K words. The translation results for the Xerox and Canadian Hansards task are very promising. The system even outperforms the alignment template system. 
$\\prod_{k=1}^{K} p(\\tilde{f}_k \\mid \\tilde{e}_k)$. We use the maximum approximation for the hidden variable S. Therefore, the feature functions are dependent on S. Although the number of phrases K is implicitly given by the segmentation S, we used both S and K to make this dependency more obvious.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by the EU project TransType 2, IST-2001-32091. ","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"laws-etal-2011-active","url":"https:\/\/aclanthology.org\/D11-1143.pdf","title":"Active Learning with Amazon Mechanical Turk","abstract":"Supervised classification needs large amounts of annotated training data that is expensive to create. Two approaches that reduce the cost of annotation are active learning and crowdsourcing. However, these two approaches have not been combined successfully to date. We evaluate the utility of active learning in crowdsourcing on two tasks, named entity recognition and sentiment detection, and show that active learning outperforms random selection of annotation examples in a noisy crowdsourcing scenario.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Florian Laws is a recipient of the Google Europe Fellowship in Natural Language Processing, and this research is supported in part by his fellowship. Christian Scheible is supported by the Deutsche Forschungsgemeinschaft project Sonderforschungsbereich 732.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"biermann-etal-1993-efficient","url":"https:\/\/aclanthology.org\/H93-1034.pdf","title":"Efficient Collaborative Discourse: A Theory and Its Implementation","abstract":"An architecture for voice dialogue machines is described with emphasis on the problem solving and high level decision making mechanisms. The architecture provides facilities for generating voice interactions aimed at cooperative human-machine problem solving. It assumes that the dialogue will consist of a series of local self-consistent subdialogues each aimed at subgoals related to the overall task. The discourse may consist of a set of such subdialogues with jumps from one subdialogue to the other in a search for a successful conclusion. The architecture maintains a user model to assure that interactions properly account for the level of competence of the user, and it includes an ability for the machine to take the initiative or yield the initiative to the user. 
It uses expectation from the dialogue processor to aid in the correction of errors from the speech recognizer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by National Science Foundation grant number NSF-IRI-88-03802 and by Duke University.","year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-1996-argument","url":"https:\/\/aclanthology.org\/Y96-1026.pdf","title":"Argument Control and Mapping Theory : Evidence from the HO Construction in Taiwanese","abstract":"The Lexical-Mapping Theory (LMT) in Lexical-Functional Grammar (LFG) predicts syntactic representations by mapping from the lexical predicate argument structure of the verbs (Bresnan and Kanerva 1989). Yet the widely followed account of control still is Bresnan's (1982) functional control based on grammatical functions and structural terms. In this paper, we try to propose a theory of control in line with the LMT theory. The new control mechanism is called Argument Control. Facts involving the HO construction in Taiwanese will be given first to show the inadequacy of Functional Control. Then, based on the observation that the HO construction in Taiwanese manifests a semantic alternation determined by both the property of the matrix subject and of the embedded subject, we propose the theory of argument control. In this theory, the control relation lies between two thematic roles.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"king-1998-workflow","url":"https:\/\/aclanthology.org\/1998.eamt-1.5.pdf","title":"Workflow, computer aids and organisational issues","abstract":"Workflow, Computer Aids and Organisational Issues.\nThe burden of this article is that since translation services and agencies can vary enormously in the kind of work they do and how they do it, the introduction of electronic documents and the tools that make use of them into the translation process needs to take account of the differences. In particular, the consequent changes in work flow patterns may be very different, ranging from doing little more than offering another possible way of doing things at some point in the translation process to radically changing the way work is divided and tackled.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dorow-widdows-2003-discovering","url":"https:\/\/aclanthology.org\/E03-1020.pdf","title":"Discovering Corpus-Specific Word Senses","abstract":"This paper presents an unsupervised algorithm which automatically discovers word senses from text. The algorithm is based on a graph model representing words and relationships between them. Sense clusters are iteratively computed by clustering the local graph of similar words around an ambiguous word. Discrimination against previously extracted sense clusters enables us to discover new senses. 
We use the same data for both recognising and resolving ambiguity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jin-etal-2021-assistance","url":"https:\/\/aclanthology.org\/2021.dialdoc-1.16.pdf","title":"Can I Be of Further Assistance? Using Unstructured Knowledge Access to Improve Task-oriented Conversational Modeling","abstract":"Most prior work on task-oriented dialogue systems are restricted to limited coverage of domain APIs. However, users oftentimes have requests that are out of the scope of these APIs. This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources. Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performances. Through experiments, we achieve state-of-the-art performance for both automatic and human evaluation metrics on the DSTC9 Track 1 benchmark dataset, validating the effectiveness of our contributions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vempala-blanco-2016-beyond","url":"https:\/\/aclanthology.org\/P16-1142.pdf","title":"Beyond Plain Spatial Knowledge: Determining Where Entities Are and Are Not Located, and For How Long","abstract":"This paper complements semantic role representations with spatial knowledge beyond indicating plain locations. Namely, we extract where entities are (and are not) located, and for how long (seconds, hours, days, etc.). Crowdsourced annotations show that this additional knowledge is intuitive to humans and can be annotated by non-experts. Experimental results show that the task can be automated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mayer-nelson-2020-phonotactic","url":"https:\/\/aclanthology.org\/2020.scil-1.36.pdf","title":"Phonotactic learning with neural language models","abstract":"Computational models of phonotactics share much in common with language models, which assign probabilities to sequences of words. While state of the art language models are implemented using neural networks, phonotactic models have not followed suit. We present several neural models of phonotactics, and show that they perform favorably when compared to existing models. In addition, they provide useful insights into the role of representations on phonotactic learning and generalization. This work provides a promising starting point for future modeling of human phonotactic knowledge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Gillian Gallagher, Bruce Hayes, Gaja Jarosz, Joe Pater, and the attendees of the UMass Sound Workshop. 
We also thank three anonymous reviewers for their valuable feedback and criticism. The authors are listed in alphabetical order.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wendt-lewis-2009-pushing","url":"https:\/\/aclanthology.org\/2009.mtsummit-commercial.9.pdf","title":"Pushing the Quality of a Customized SMT System Using Shared Training Data","abstract":"\u2022 Determine the effect of data pooling among multiple parallel data providers within a domain, measured by the translation quality of an SMT system trained with that data.\n\u2022 There is noticeable benefit in sharing parallel data among multiple data owners within the same domain: An MT system trained with the combined data can deliver significantly improved translation quality, compared to a system trained with the provider's own data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"miller-vosoughi-2020-big","url":"https:\/\/aclanthology.org\/2020.wnut-1.36.pdf","title":"Big Green at WNUT 2020 Shared Task-1: Relation Extraction as Contextualized Sequence Classification","abstract":"Relation and event extraction is an important task in natural language processing. We introduce a system which uses contextualized knowledge graph completion to classify relations and events between known entities in a noisy text environment. We report results which show that our system is able to effectively extract relations and events from a dataset of wet lab protocols.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brown-manandhar-2000-precompilation","url":"https:\/\/aclanthology.org\/W00-1602.pdf","title":"Precompilation of HPSG in ALE into a CFG for Fast Parsing","abstract":"Context free grammars parse faster than TFS grammars, but have disadvantages. On our test TFS grammar, precompilation into CFG results in a speedup of 16 times for parsing without taking into account additional mechanisms for increasing parsing efficiency. A formal overview is given of precompilation and parsing. Modifications to ALE rules permit a closure over the rules from the lexicon, and analysis leading to a fast treatment of semantic structure. The closure algorithm, and retrieval of full semantic structure are described.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by a legacy from Miss Nora Brown, and Workshop attendance funded by INVU. Our thanks to the anonymous referees and to Mr. 
Stephan Oepen for their suggestions, and to Bernd Kiefer and Kentaro Torisawa for copies of their papers.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2011-structural","url":"https:\/\/aclanthology.org\/P11-1153.pdf","title":"Structural Topic Model for Latent Topical Structure Analysis","abstract":"Topic models have been successfully applied to many document analysis tasks to discover topics embedded in text. However, existing topic models generally cannot capture the latent topical structures in documents. Since languages are intrinsically cohesive and coherent, modeling and discovering latent topical transition structures within documents would be beneficial for many text analysis tasks. In this work, we propose a new topic model, Structural Topic Model, which simultaneously discovers topics and reveals the latent topical structures in text through explicitly modeling topical transitions with a latent first-order Markov chain. Experiment results show that the proposed Structural Topic Model can effectively discover topical structures in text, and the identified structures significantly improve the performance of tasks such as sentence annotation and sentence ordering.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their useful comments. This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-0713581 and CNS-0834709, and NASA grant NNX08AC35A.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maas-etal-2015-lexicon","url":"https:\/\/aclanthology.org\/N15-1038.pdf","title":"Lexicon-Free Conversational Speech Recognition with Neural Networks","abstract":"We present an approach to speech recognition that uses only a neural network to map acoustic input to characters, a character-level language model, and a beam search decoding procedure. This approach eliminates much of the complex infrastructure of modern speech recognition systems, making it possible to directly train a speech recognizer using errors generated by spoken language understanding tasks. The system naturally handles out of vocabulary words and spoken word fragments. We demonstrate our approach using the challenging Switchboard telephone conversation transcription task, achieving a word error rate competitive with existing baseline systems. To our knowledge, this is the first entirely neural-network-based system to achieve strong speech transcription results on a conversational speech task. We analyze qualitative differences between transcriptions produced by our lexicon-free approach and transcriptions produced by a standard speech recognition system. Finally, we evaluate the impact of large context neural network character language models as compared to standard n-gram models within our framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Awni Hannun for his contributions to the software used for experiments in this work. We also thank Peng Qi and Thang Luong for insightful discussions, and Kenneth Heafield for help with the KenLM toolkit. Our work with HMM-GMM systems was possible thanks to the Kaldi toolkit and its contributors. 
Some of the GPUs used in this work were donated by the NVIDIA Corporation. AM was supported as an NSF IGERT Traineeship Recipient under Award 0801700. ZX was supported by an NDSEG Graduate Fellowship.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meya-1990-tenets","url":"https:\/\/aclanthology.org\/C90-2046.pdf","title":"Tenets for an Interlingual Representation Definite NPs","abstract":"The main goal of this paper (as in Keenan and Stavi 1986) is to characterize the possible determiner denotations in order to develop a computational approach that makes explicit use of this information. To cope with the constraints that languages impose when generating determiners, a computational model has to follow the laws that map definiteness to structures and strings and viceversa.\nIn the following proposal I distantiate from K. B\u00fchlers Deixis Theory and Weinrichs (76) proposal where indefinites suggest subsequent information, while definites point out facts from the previous information. This very general position is insufficient if we want to formalize NP-definiteness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tseng-etal-2021-aspect","url":"https:\/\/aclanthology.org\/2021.rocling-1.26.pdf","title":"Aspect-Based Sentiment Analysis and Singer Name Entity Recognition using Parameter Generation Network Based Transfer Learning","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"soh-etal-2019-legal","url":"https:\/\/aclanthology.org\/W19-2208.pdf","title":"Legal Area Classification: A Comparative Study of Text Classifiers on Singapore Supreme Court Judgments","abstract":"This paper conducts a comparative study on the performance of various machine learning (\"ML\") approaches for classifying judgments into legal areas. Using a novel dataset of 6,227 Singapore Supreme Court judgments, we investigate how state-of-the-art NLP methods compare against traditional statistical models when applied to a legal corpus that comprised few but lengthy documents. All approaches tested, including topic model, word embedding, and language model-based classifiers, performed well with as little as a few hundred judgments. 
However, more work needs to be done to optimize state-of-the-art methods for the legal domain.","label_nlp4sg":1,"task":["Legal Area Classification"],"method":["Comparative Study"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful comments and the Singapore Academy of Law for permitting us to scrape and use this corpus.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"stajner-etal-2015-deeper","url":"https:\/\/aclanthology.org\/P15-2135.pdf","title":"A Deeper Exploration of the Standard PB-SMT Approach to Text Simplification and its Evaluation","abstract":"In the last few years, there has been a growing number of studies addressing the Text Simplification (TS) task as a monolingual machine translation (MT) problem which translates from 'original' to 'simple' language. Motivated by those results, we investigate the influence of quality vs quantity of the training data on the effectiveness of such a MT approach to text simplification. We conduct 40 experiments on the aligned sentences from English Wikipedia and Simple English Wikipedia, controlling for: (1) the similarity between the original and simplified sentences in the training and development datasets, and (2) the sizes of those datasets. The results suggest that in the standard PB-SMT approach to text simplification the quality of the datasets has a greater impact on the system performance. Additionally, we point out several important differences between cross-lingual MT and monolingual MT used in text simplification, and show that BLEU is not a good measure of system performance in text simplification task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper was partially funded by the project SKATER-UPF-TALN (TIN2012-38584-C06-03), Ministerio de Econom\u00eda y Competitividad, Secretar\u00eda de Estado de Investigaci\u00f3n, Desarrollo e Innovaci\u00f3n, Spain, and the project ABLE-TO-INCLUDE (CIP-ICT-PSP-2013-7\/621055). Hannah B\u00e9chara is supported by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7\/2007-2013\/ under REA grant agreement no. 31747.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-bai-2021-team","url":"https:\/\/aclanthology.org\/2021.ltedi-1.17.pdf","title":"TEAM HUB@LT-EDI-EACL2021: Hope Speech Detection Based On Pre-trained Language Model","abstract":"This article introduces the system description of TEAM HUB team participating in LT-EDI 2021: Hope Speech Detection. This shared task is the first task related to the desired voice detection. The data set in the shared task consists of three different languages (English, Tamil, and Malayalam). The task type is text classification. Based on the analysis and understanding of the task description and data set, we designed a system based on a pre-trained language model to complete this shared task. In this system, we use methods and models that combine the XLM-RoBERTa pre-trained language model and the Tf-Idf algorithm. 
In the final result ranking announced by the task organizer, our system obtained F1 scores of 0.93, 0.84, 0.59 on the English dataset, Malayalam dataset, and Tamil dataset. Our submission results are ranked 1, 2, and 3 respectively.","label_nlp4sg":1,"task":["Hope Speech Detection"],"method":["Pre - trained Language Model","pre - trained language model","XLM - RoBERTa","Tf - Idf"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luoma-etal-2021-fine","url":"https:\/\/aclanthology.org\/2021.nodalida-main.14.pdf","title":"Fine-grained Named Entity Annotation for Finnish","abstract":"We introduce a corpus with fine-grained named entity annotation for Finnish, following the OntoNotes guidelines to create a resource that is cross-lingually compatible with existing resources for other languages. We combine and extend two NER corpora recently introduced for Finnish and revise their custom annotation scheme through a combination of automatic and manual processing steps. The resulting corpus consists of nearly 500,000 tokens annotated for over 50,000 mentions categorized into 18 name and numeric entity types. We evaluate this resource and demonstrate its compatibility with the English OntoNotes annotations by training state-of-the-art mono-, bi-, and multilingual deep learning models, finding both that the corpus allows highly accurate tagging at 93% F-score and that a comparable level of performance can be achieved by a bilingual Finnish-English NER model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded in part by the Academy of Finland. We wish to thank CSC -IT Center for Science, Finland, for computational resources.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kunchukuttan-etal-2014-tuning","url":"https:\/\/aclanthology.org\/W14-1708.pdf","title":"Tuning a Grammar Correction System for Increased Precision","abstract":"In this paper, we propose two enhancements to a statistical machine translation based approach to grammar correction for correcting all error categories. First, we propose tuning the SMT systems to optimize a metric more suited to the grammar correction task (F-\u03b2 score) rather than the traditional BLEU metric used for tuning language translation tasks. Since the F-\u03b2 score favours higher precision, tuning to this score can potentially improve precision. While the results do not indicate improvement due to tuning with the new metric, we believe this could be due to the small number of grammatical errors in the tuning corpus and further investigation is required to answer the question conclusively. We also explore the combination of custom-engineered grammar correction techniques, which are targeted to specific error categories, with the SMT based method. Our simple ensemble methods yield improvements in recall but decrease the precision. 
Tuning the custom-built techniques can help in increasing the overall accuracy also.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"michon-etal-2020-integrating","url":"https:\/\/aclanthology.org\/2020.coling-main.348.pdf","title":"Integrating Domain Terminology into Neural Machine Translation","abstract":"This paper extends existing work on terminology integration into Neural Machine Translation, a common industrial practice to dynamically adapt translation to a specific domain. Our method, based on the use of placeholders complemented with morphosyntactic annotation, efficiently taps into the ability of the neural network to deal with symbolic knowledge to surpass the surface generalization shown by alternative techniques. We compare our approach to state-of-the-art systems and benchmark them through a well-defined evaluation framework, focusing on actual application of terminology and not just on the overall performance. Results indicate the suitability of our method in the use-case where terminology is used in a system trained on generic data only.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work presented in this paper was partially supported by the European Commission under contract H2020-787061 ANITA.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oh-2021-team","url":"https:\/\/aclanthology.org\/2021.cmcl-1.11.pdf","title":"Team Ohio State at CMCL 2021 Shared Task: Fine-Tuned RoBERTa for Eye-Tracking Data Prediction","abstract":"This paper describes Team Ohio State's approach to the CMCL 2021 Shared Task, the goal of which is to predict five eye-tracking features from naturalistic self-paced reading corpora. For this task, we fine-tune a pretrained neural language model (RoBERTa; Liu et al., 2019) to predict each feature based on the contextualized representations. Moreover, motivated by previous eye-tracking studies, we include word length in characters and proportion of sentence processed as two additional input features. Our best model strongly outperforms the baseline and is also competitive with other systems submitted to the shared task. An ablation study shows that the word length feature contributes to making more accurate predictions, indicating the usefulness of features that are specific to the eye-tracking paradigm.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sikdar-etal-2018-flytxt","url":"https:\/\/aclanthology.org\/S18-1144.pdf","title":"Flytxt_NTNU at SemEval-2018 Task 8: Identifying and Classifying Malware Text Using Conditional Random Fields and Na\u00efve Bayes Classifiers","abstract":"Cybersecurity risks such as malware threaten the personal safety of users, but to identify malware text is a major challenge. 
The paper proposes a supervised learning approach to identifying malware sentences given a document (subTask1 of SemEval 2018, Task 8), as well as to classifying malware tokens in the sentences (subTask2). The approach achieved good results, ranking second of twelve participants for both subtasks, with F-scores of 57% for subTask1 and 28% for subTask2.","label_nlp4sg":1,"task":["Identifying and Classifying Malware Text"],"method":["Conditional Random Fields","Na\u00efve Bayes Classifiers"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"rio-2002-compiling","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/195.pdf","title":"Compiling an Interactive Literary Translation Web Site for Education Purposes","abstract":"The project under discussion represents an attempt to exploit the potential of web resources for higher education and, more particularly, on a domain (that of literary translation) which is traditionally considered not very much in relation to technology and computer science. Translation and Interpreting students at the Universidad de M\u00e1laga are offered the possibility to take an English-Spanish Literary Translation module, which epitomises the need for debate in the field of Humanities. Sadly enough, implementation of course methodology is rendered very difficult or impossible owing to time restrictions and overcrowded classrooms. It is our contention that the setting up of a web site may solve some of these issues. We intend to provide both students and the literary translation-aware Internet audience with an integrated, scalable, multifunctional debate forum. Project contents will include a detailed course description, relevant reference materials and interaction services (mailing list, debate forum and chat rooms). This is obviously without limitation, as the Forum is open to any other contents that users may consider necessary or convenient, with a view to a more interdisciplinary approach, further research on the field of Literary Translation and future developments within the project framework.","label_nlp4sg":1,"task":["literary translation"],"method":["Web Site"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"Many thanks to the staff at the UMA's DEV and SCI departments for their advice on technical issues, to David Moreno for his painstaking revision of web site contents and especially to \u00c1lvaro van Hilten for being a selfless pillar of this project.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ohno-etal-2006-syntactically","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/106_pdf.pdf","title":"A Syntactically Annotated Corpus of Japanese Spoken Monologue","abstract":"Recently, monologue data such as lecture and commentary by professionals have been considered as valuable intellectual resources, and have been gathering attention. On the other hand, in order to use these monologue data effectively and efficiently, it is necessary for the monologue data not only just to be accumulated but also to be structured. 
This paper describes the construction of a Japanese spoken monologue corpus in which dependency structure is given to each utterance. Spontaneous monologue includes a lot of very long sentences composed of two or more clauses. In these sentences, there may exist the subject or the adverb common to multi-clauses, and it may be considered that the subject or adverb depend on multi-predicates. In order to give the dependency information in a real fashion, our research allows that a bunsetsu depends on multiple bunsetsus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank graduate students of Nagoya University for their helpful support in correcting the monologue dependency corpus. The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus.\"","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nandan-etal-2014-sap","url":"https:\/\/aclanthology.org\/S14-2090.pdf","title":"SAP-RI: A Constrained and Supervised Approach for Aspect-Based Sentiment Analysis","abstract":"We describe the submission of the SAP Research & Innovation team to the SemEval 2014 Task 4: Aspect-Based Sentiment Analysis (ABSA). Our system follows a constrained and supervised approach for aspect term extraction, categorization and sentiment classification of online reviews and the details are included in this paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research is partially funded by the Economic Development Board and the National Research Foundation of Singapore.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"karmakar-krishna-2020-leveraging","url":"https:\/\/aclanthology.org\/2020.icon-main.45.pdf","title":"Leveraging Latent Representations of Speech for Indian Language Identification","abstract":"Identification of the language spoken from speech utterances is an interesting task because of the diversity associated with different languages and human voices. Indian languages have diverse origins and identifying them from speech utterances would help several language recognition, translation and relationship mining tasks. The current approaches for tackling the problem of languages identification in the Indian context heavily use feature engineering and classical speech processing techniques. This is a bottleneck for language identification systems, as we require to exploit necessary features in speech, required for machine identification, which are learnt by a probabilistic framework, rather than handcrafted feature engineering. In this paper, we tackle the problem of language identification using latent representations learnt from speech using Variational Autoencoders (VAEs) and leverage the representations learnt to train sequence models. Our framework attains an accuracy of 89% in the identification of 8 well known Indian languages (namely Tamil, Telugu, Punjabi, Marathi, Gujarati, Hindi, Kannada and Bengali) from the CMU\/IIITH Indic Speech Database. 
The presented approach can be applied to several scenarios for speech processing by employing representation learning and leveraging them for sequence models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-heeman-2007-avoiding","url":"https:\/\/aclanthology.org\/N07-1003.pdf","title":"Avoiding and Resolving Initiative Conflicts in Dialogue","abstract":"In this paper, we report on an empirical study on initiative conflicts in human-human conversation. We examined these conflicts in two corpora of task-oriented dialogues. The results show that conversants try to avoid initiative conflicts, but when these conflicts occur, they are efficiently resolved by linguistic devices, such as volume.","label_nlp4sg":1,"task":["Avoiding and Resolving Initiative Conflicts in Dialogue"],"method":["empirical study"],"goal1":"Partnership for the goals","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":1} {"ID":"neale-etal-2018-leveraging","url":"https:\/\/aclanthology.org\/L18-1623.pdf","title":"Leveraging Lexical Resources and Constraint Grammar for Rule-Based Part-of-Speech Tagging in Welsh","abstract":"As the quantity of annotated language data and the quality of machine learning algorithms have increased over time, statistical part-of-speech (POS) taggers trained over large datasets have become as robust or better than their rule-based counterparts. However, for lesser-resourced languages such as Welsh there is simply not enough accurately annotated data to train a statistical POS tagger. Furthermore, many of the more popular rule-based taggers still require that their rules be inferred from annotated data, which while not as extensive as that required for training a statistical tagger must still be sizeable. In this paper we describe CyTag, a rule-based POS tagger for Welsh based on the VISL Constraint Grammar parser. Leveraging lexical information from Eurfa (an extensive open-source dictionary for Welsh), we extract lists of possible POS tags for each word token in a running text and then apply various constraintsbased on various features of surrounding word tokens-to prune the number of possible tags until the most appropriate tag for a given token can be selected. 
We explain how this approach is particularly useful in dealing with some of the specific intricacies of Welsh-such as morphological changes and word mutations-and present an evaluation of the performance of the tagger using a manually checked test corpus of 611 Welsh sentences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded by the UK Economic and Social Research Council (ESRC) and Arts and Humanities Research Council (AHRC) as part of the Corpws Cenedlaethol Cymraeg Cyfoes (The National Corpus of Contemporary Welsh): A community driven approach to linguistic corpus construction project (Grant Number ES\/M011348\/1).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2019-meta","url":"https:\/\/aclanthology.org\/D19-1431.pdf","title":"Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs","abstract":"Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples. In this work, we propose a Meta Relational Learning (MetaR) framework to do the common but challenging few-shot link prediction in KGs, namely predicting new triples about a relation by only observing a few associative triples. We solve few-shot link prediction by focusing on transferring relation-specific meta information to make model learn the most important knowledge and learn faster, corresponding to relation meta and gradient meta respectively in MetaR. Empirically, our model achieves state-of-the-art results on few-shot link prediction KG benchmarks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204\/61473260, national key research program YS2018YFB140004, and Alibaba CangJingGe(Knowledge Engine) Research Plan.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koper-etal-2015-multilingual","url":"https:\/\/aclanthology.org\/W15-0105.pdf","title":"Multilingual Reliability and \u201cSemantic\u201d Structure of Continuous Word Spaces","abstract":"While continuous word vector representations enjoy increasing popularity, it is still poorly understood (i) how reliable they are for other languages than English, and (ii) to what extent they encode deep semantic relatedness such as paradigmatic relations. This study presents experiments with continuous word vectors for English and German, a morphologically rich language. For evaluation, we use both published and newly created datasets of morpho-syntactic and semantic relations. 
Our results show that (i) morphological complexity causes a drop in accuracy, and (ii) continuous representations lack the ability to solve analogies of paradigmatic relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was supported by the DFG Collaborative Research Centre SFB 732 (Maximilian K\u00f6per) and the DFG Heisenberg Fellowship SCHU-2580 (Sabine Schulte im Walde).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rushall-ilgen-1996-context","url":"https:\/\/aclanthology.org\/X96-1032.pdf","title":"A Context Vector-Based Self Organizing Map for Information Visualization","abstract":"HNC Software, Inc. has developed a system called DOCUVERSE for visualizing the information content of large textual corpora. The system is built around two separate neural network methodologies: context vectors and self organizing maps. Context vectors (CVs) are high dimensional information representations that encode the semantic content of the textual entities they represent. Self organizing maps (SOMs) are capable of transforming an input, high dimensional signal space into a much lower (usually two or three) dimensional output space useful for visualization. Related information themes contained in the corpus, depicted graphically, are presented in spatial proximity to one another. Neither process requires human intervention, nor an external knowledge base. Together, these neural network techniques can be utilized to automatically identify the relevant information themes present in a corpus, and present those themes to the user in an intuitive visual form.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"del-tredici-fernandez-2020-words","url":"https:\/\/aclanthology.org\/2020.coling-main.477.pdf","title":"Words are the Window to the Soul: Language-based User Representations for Fake News Detection","abstract":"Cognitive and social traits of individuals are reflected in language use. Moreover, individuals who are prone to spread fake news online often share common traits. Building on these ideas, we introduce a model that creates representations of individuals on social media based only on the language they produce, and use them to detect fake news. We show that language-based user representations are beneficial for this task. We also present an extended analysis of the language of fake news spreaders, showing that its main features are mostly domain independent and consistent across two English datasets. Finally, we exploit the relation between language use and connections in the social graph to assess the presence of the Echo Chamber effect in our data.","label_nlp4sg":1,"task":["Fake News Detection"],"method":["Language - based User Representations"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research has received funding from the Netherlands Organisation for Scientific Research (NWO) under VIDI grant nr. 276-89-008, Asymmetry in Conversation. We thank the anonymous reviewers for their comments as well as the area chairs and PC chairs of COLING 2020. 
The work presented in this paper was entirely conducted when the first author was affiliated with the University of Amsterdam, prior to working at Amazon.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"tambouratzis-etal-2000-automatic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/301.pdf","title":"Automatic Style Categorisation of Corpora in the Greek Language","abstract":"In this article, a system is proposed for the automatic style categorisation of text corpora in the Greek language. This categorisation is based to a large extent on the type of language used in the text, for example whether the language used is representative of formal Greek or not. To arrive at this categorisation, the highly inflectional nature of the Greek language is exploited. For each text, a vector of both structural and morphological characteristics is assembled. Categorisation is achieved by comparing this vector to given archetypes using a statistical-based method. Experimental results reported in this article indicate an accuracy exceeding 98% in the categorisation of a corpus of texts spanning different registers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to acknowledge the assistance of the Secretariat of the Hellenic Parliament in obtaining the session transcripts studied in this piece of research.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bentivogli-pianta-2003-beyond","url":"https:\/\/aclanthology.org\/E03-1018.pdf","title":"Beyond Lexical Units: Enriching WordNets with Phrasets","abstract":"In this paper we present a proposal to extend WordNet-like lexical databases by adding phrasets, i.e. sets of free combinations of words which are recurrently used to express a concept (let's call them recurrent free phrases). Phrasets are a useful source of information for different NLP tasks, and particularly in a multilingual environment to manage lexical gaps. Two experiments are presented to check the possibility of acquiring recurrent free phrases from dictionaries and corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jo-2017-corpus","url":"https:\/\/aclanthology.org\/Y17-1034.pdf","title":"A corpus-based study on synesthesia in Korean ordinary language","abstract":"Synesthesia means an involuntary neurological phenomenon where \"sensory events in one modality take on qualities usually considered appropriate to another\" (Marks, 1982, p. 15). More generally, it indicates an experiential mapping of one sense domain with another, such as \"sweet sound\". The study reported in this paper is to test Ullmann's (1963) theoretical framework of \"hierarchical distribution\" through the synesthetic data coming out of the Korean National Corpus (KNC), focusing on modern daily Korean.
The research questions here are (a) what are the routes for Korean synesthetic transfers like?, (b) what are the predominant source and target domain for the transfers?, and (c) what are the universal and\/or culture-specific aspects in the association? Based on Strik Lievers et al.'s (2013) methodology, the study extracts synesthetic data from KNC. As a result, the data analysis shows that (a) Korean synesthesia conforms to Ullmann's (1963) general scheme in the metaphoric mappings, (b) the predominant source domain is touch while the predominant target is hearing, which matches with Ullmann's (1963) study as well, and (c) there could be a delicate cultural dependency, which means \"taste\" occupies a significant position together with \"touch\" in Korean synesthetic metaphors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pereira-etal-2014-collocation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/519_Paper.pdf","title":"Collocation or Free Combination? --- Applying Machine Translation Techniques to identify collocations in Japanese","abstract":"This work presents an initial investigation on how to distinguish collocations from free combinations. The assumption is that, while free combinations can be literally translated, the overall meaning of collocations is different from the sum of the translation of its parts. Based on that, we verify whether a machine translation system can help us perform such distinction. Results show that it improves the precision compared with standard methods of collocation identification through statistical association measures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hashimoto-etal-2020-building","url":"https:\/\/aclanthology.org\/2020.amta-user.20.pdf","title":"Building Salesforce Neural Machine Translation System","abstract":"\u2022 Why invest in machine translation \nA three-year collaboration between R&D Localization and Salesforce Research teams","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fonseca-etal-2016-summ","url":"https:\/\/aclanthology.org\/L16-1324.pdf","title":"Summ-it++: an Enriched Version of the Summ-it Corpus","abstract":"This paper presents Summ-it++, an enriched version of the Summ-it corpus. In this new version, the corpus has received new semantic layers, named entity categories and relations between named entities, adding to the previous coreference annotation.
In addition, we change the original Summ-it format to SemEval.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the financial support of CNPq (Conselho Nacional de Desenvolvimento Cient\u00edfico e Tecnol\u00f3gico), CAPES (Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior) and FAPERGS (Funda\u00e7\u00e3o de Amparo \u00e0 Pesquisa do Rio Grande do Sul).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2019-abstractive","url":"https:\/\/aclanthology.org\/N19-1260.pdf","title":"Abstractive Summarization of Reddit Posts with Multi-level Memory Networks","abstract":"We address the problem of abstractive summarization in two directions: proposing a novel dataset and a new model. First, we collect Reddit TIFU dataset, consisting of 120K posts from the online discussion forum Reddit. We use such informal crowd-generated posts as text source, in contrast with existing datasets that mostly use formal documents as source such as news articles. Thus, our dataset could less suffer from some biases that key sentences usually locate at the beginning of the text and favorable summary candidates are already inside the text in similar forms. Second, we propose a novel abstractive summarization model named multilevel memory networks (MMN), equipped with multi-level memory to store the information of text from different levels of abstraction. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the Reddit TIFU dataset is highly abstractive and the MMN outperforms the state-ofthe-art summarization models. The code and dataset are available at http:\/\/vision.snu.ac.kr\/projects\/reddit-tifu.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Chris Dongjoo Kim, Yunseok Jang and the anonymous reviewers for their helpful comments. This work was supported by Kakao and Kakao Brain corporations and IITP grant funded by the Korea government (MSIT) (No. 2017-0-01772, Development of QA systems for Video Story Understanding to pass the Video Turing Test). Gunhee Kim is the corresponding author.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-zhai-2007-instance","url":"https:\/\/aclanthology.org\/P07-1034.pdf","title":"Instance Weighting for Domain Adaptation in NLP","abstract":"Domain adaptation is an important problem in natural language processing (NLP) due to the lack of labeled data in novel domains. In this paper, we study the domain adaptation problem from the instance weighting perspective. We formally analyze and characterize the domain adaptation problem from a distributional view, and show that there are two distinct needs for adaptation, corresponding to the different distributions of instances and classification functions in the source and the target domains. We then propose a general instance weighting framework for domain adaptation.
Our empirical results on three NLP tasks show that incorporating and exploiting more information from the target domain through instance weighting is effective.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was in part supported by the National Science Foundation under award numbers 0425852 and 0428472. We thank the anonymous reviewers for their valuable comments.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yeh-2000-accurate","url":"https:\/\/aclanthology.org\/C00-2137.pdf","title":"More accurate tests for the statistical significance of result differences","abstract":"Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-deemter-gatt-2007-content","url":"https:\/\/aclanthology.org\/2007.mtsummit-ucnlg.21.pdf","title":"Content determination in GRE: evaluating the evaluator","abstract":"In this paper, we discuss the evaluation measures proposed in a number of recent papers associated with the TUNA project, and which have become an important component of the First NLG Shared Task and Evaluation Campaign (STEC) on attribute selection for referring expressions generation. Focusing on reference to individual objects, we discuss what such evaluation measures should be expected to achieve, and what alternative measures merit consideration.","label_nlp4sg":1,"task":["Content determination"],"method":["evaluation measures"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moreno-perez-2000-reusing","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/74.pdf","title":"Reusing the Mikrokosmos Ontology for Concept-based Multilingual Terminology Databases","abstract":"This paper reports work carried out within a multilingual terminology project (OncoTerm) in which the Mikrokosmos (\u00b5K) ontology (Mahesh, 1996; Viegas et al., 1999) has been used as a language independent conceptual structure to achieve a truly concept-based terminology database (termbase, for short). The original ontology, containing nearly 4,700 concepts and available in Lisp-like format (January 1997 version), was first converted into a set of tables in a relational database. A specific software tool was developed in order to edit and browse this resource. This tool has now been integrated within a termbase editor and released under the name of OntoTerm\u2122.
In this paper we focus on the suitability of the \u00b5K ontology for the representation of domain-specific knowledge and its associated lexical items.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been partly carried out within the framework of the project OncoTerm: System of oncological information and resources, funded by the Spanish Ministry of Education (DGICYT) under code number PB98-1342.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marvel-koenig-2015-event","url":"https:\/\/aclanthology.org\/W15-0913.pdf","title":"Event Categorization beyond Verb Senses","abstract":"Verb senses are often assumed to distinguish among different conceptual event categories. However, senses misrepresent the number of event categories expressed both within and across languages and event categories may be \"named\" by more than a word, i.e. a multi-word expression. Determining the nature and number of event categories in an event description requires an understanding of the parameters relevant for categorization. We propose a set of parameters for use in creating a Gold Standard of event categories and apply them to a corpus sample of 2000 sentences across 10 verbs. In doing so, we find an asymmetry between subjects and direct objects in their contributions to distinguishing event categories. We then explore methods of automating event categorization to approximate our Gold Standard through the use of hierarchical clustering and Latent Semantic Analysis (Deerwester et al., 1990).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dumitrescu-avram-2020-introducing","url":"https:\/\/aclanthology.org\/2020.lrec-1.546.pdf","title":"Introducing RONEC - the Romanian Named Entity Corpus","abstract":"We present RONEC-the Named Entity Corpus for the Romanian language. The corpus contains over 26000 entities in 5000 annotated sentences, belonging to 16 distinct classes. The sentences have been extracted from a copyright free newspaper, covering several styles. This corpus represents the first initiative in the Romanian language space specifically targeted for named entity recognition. It is available as BRAT and CoNLL-U Plus (in Multi-Word Expression and IOB formats) text downloads, and it is free to use and extend at github.com\/dumitrescustefan\/ronec.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"filannino-etal-2013-mantime","url":"https:\/\/aclanthology.org\/S13-2009.pdf","title":"ManTIME: Temporal expression identification and normalization in the TempEval-3 challenge","abstract":"This paper describes a temporal expression identification and normalization system, ManTIME, developed for the TempEval-3 challenge. The identification phase combines the use of conditional random fields along with a post-processing identification pipeline, whereas the normalization phase is carried out using NorMA, an open-source rule-based temporal normalizer.
We investigate the performance variation with respect to different feature types. Specifically, we show that the use of WordNet-based features in the identification task negatively affects the overall performance, and that there is no statistically significant difference in using gazetteers, shallow parsing and propositional noun phrase labels on top of the morphological features. On the test data, the best run achieved 0.95 (P), 0.85 (R) and 0.90 (F1) in the identification phase. Normalization accuracies are 0.84 (type attribute) and 0.77 (value attribute). Surprisingly, the use of the silver data (alone or in addition to the gold annotated ones) does not improve the performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the organizers of the TempEval-3 challenge. The first author would like also to acknowledge Marilena Di Bari, Joseph Mellor and Daniel Jamieson for their support and the UK Engineering and Physical Science Research Council for its support in the form of a doctoral training grant.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"el-kishky-etal-2020-ccaligned","url":"https:\/\/aclanthology.org\/2020.emnlp-main.480.pdf","title":"CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs","abstract":"Cross-lingual document alignment aims to identify pairs of documents in two distinct languages that are of comparable content or translations of each other. In this paper, we exploit the signals embedded in URLs to label web documents at scale with an average precision of 94.5% across different language pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify web document pairs that are translations of each other. We release a new web dataset consisting of over 392 million URL pairs from Common Crawl covering documents in 8144 language pairs of which 137 pairs include English. In addition to curating this massive dataset, we introduce baseline methods that leverage crosslingual representations to identify aligned documents based on their textual content. Finally, we demonstrate the value of this parallel documents dataset through a downstream task of mining parallel sentences and measuring the quality of machine translations from models trained on this mined data. Our objective in releasing this dataset is to foster new research in cross-lingual NLP across a variety of low, medium, and high-resource languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"atwell-etal-1994-amalgam","url":"https:\/\/aclanthology.org\/W94-0103.pdf","title":"AMALGAM: Automatic Mapping Among Lexico-Grammatical Annotation Models","abstract":"The title of this paper playfully contrasts two rather different approaches to language analysis. The \"Noisy Channel\"'s are the promoters of statistically based approaches to language learning. Many of these studies are based on Shannon's Noisy Channel model. The \"Braying Donkey\"'s are those oriented towards theoretically motivated language models.
They are interested in any type of language expressions (such as the famous \"Donkey Sentences\"), regardless of their frequency in real language, because the focus is the study of human communication. In the past few years, we supported a more balanced approach. While our major concern is applicability to real NLP systems, we think that, after all, quantitative methods in Computational Linguistics should provide not only practical tools for language processing, but also some linguistic insight. Since, for the sake of space, in this paper we cannot give any complete account of our research, we will present examples of \"linguistically appealing\", automatically acquired, lexical data (selectional restrictions of words) obtained through an integrated use of knowledge-based and statistical techniques. We discuss the pros and cons of adding symbolic knowledge to the corpus linguistic recipe.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goutte-etal-2014-nrc","url":"https:\/\/aclanthology.org\/W14-5316.pdf","title":"The NRC System for Discriminating Similar Languages","abstract":"We describe the system built by the National Research Council Canada for the \"Discriminating between similar languages\" (DSL) shared task. Our system uses various statistical classifiers and makes predictions based on a two-stage process: we first predict the language group, then discriminate between languages or variants within the group. Language groups are predicted using a generative classifier with 99.99% accuracy on the five target groups. Within each group (except English), we use a voting combination of discriminative classifiers trained on a variety of feature spaces, achieving an average accuracy of 95.71%, with per-group accuracy between 90.95% and 100% depending on the group. This approach turns out to reach the best performance among all systems submitted to the open and closed tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"akhlaghi-etal-2020-constructing","url":"https:\/\/aclanthology.org\/2020.lrec-1.40.pdf","title":"Constructing Multimodal Language Learner Texts Using LARA: Experiences with Nine Languages","abstract":"LARA (Learning and Reading Assistant) is an open source platform whose purpose is to support easy conversion of plain texts into multimodal online versions suitable for use by language learners. This involves semi-automatically tagging the text, adding other annotations and recording audio. The platform is suitable for creating texts in multiple languages via crowdsourcing techniques that can be used for teaching a language via reading and listening. We present results of initial experiments by various collaborators where we measure the time required to produce substantial LARA resources, up to the length of short novels, in Dutch, English, Farsi, French, German, Icelandic, Irish, Swedish and Turkish. The first results are encouraging. Although there are some startup problems, the conversion task seems manageable for the languages tested so far.
The resulting enriched texts are posted online and are freely available in both source and compiled form.","label_nlp4sg":1,"task":["Constructing Multimodal Language Learner Texts"],"method":["open source platform"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lukovnikov-etal-2021-detecting-compositionally","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.54.pdf","title":"Detecting Compositionally Out-of-Distribution Examples in Semantic Parsing","abstract":"While neural networks are ubiquitous in state-of-the-art semantic parsers, it has been shown that most standard models suffer from dramatic performance losses when faced with compositionally out-of-distribution (OOD) data. Recently several methods have been proposed to improve compositional generalization in semantic parsing. In this work we instead focus on the problem of detecting compositionally OOD examples with neural semantic parsers, which, to the best of our knowledge, has not been investigated before. We investigate several strong yet simple methods for OOD detection based on predictive uncertainty. The experimental results demonstrate that these techniques perform well on the standard SCAN and CFQ datasets. Moreover, we show that OOD detection can be further improved by using a heterogeneous ensemble.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -EXC 2092 CASA -390781972.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stajner-etal-2017-effects","url":"https:\/\/aclanthology.org\/W17-5030.pdf","title":"Effects of Lexical Properties on Viewing Time per Word in Autistic and Neurotypical Readers","abstract":"Eye tracking studies from the past few decades have shaped the way we think of word complexity and cognitive load: words that are long, rare and ambiguous are more difficult to read. However, online processing techniques have been scarcely applied to investigating the reading difficulties of people with autism and what vocabulary is challenging for them. We present parallel gaze data obtained from adult readers with autism and a control group of neurotypical readers and show that the former required higher cognitive effort to comprehend the texts as evidenced by three gaze-based measures. 
We divide all words into four classes based on their viewing times for both groups and investigate the relationship between longer viewing times and word length, word frequency, and four cognitively-based measures (word concreteness, familiarity, age of acquisition and imagability).","label_nlp4sg":1,"task":["investigating the reading difficulties"],"method":["parallel gaze data","Eye tracking studies"],"goal1":"Quality Education","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"This work has been partially supported by the SFB 884 on the Political Economy of Reforms at the University of Mannheim (project C4), funded by the German Research Foundation (DFG) and the","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jansen-2020-visually","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.395.pdf","title":"Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions","abstract":"The recently proposed ALFRED challenge task aims for a virtual robotic agent to complete complex multi-step everyday tasks in a virtual home environment from high-level natural language directives, such as \"put a hot piece of bread on a plate\". Currently, the best-performing models are able to complete less than 5% of these tasks successfully. In this work we focus on modeling the translation problem of converting natural language directives into detailed multi-step sequences of actions that accomplish those goals in the virtual environment. We empirically demonstrate that it is possible to generate gold multi-step plans from language directives alone without any visual input in 26% of unseen cases. When a small amount of visual information is incorporated, namely the starting location in the virtual environment, our best-performing GPT-2 model successfully generates gold command sequences in 58% of cases. Our results suggest that contextualized language models may provide strong visual semantic planning modules for grounded virtual agents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"temnikova-etal-2019-human","url":"https:\/\/aclanthology.org\/W19-8713.pdf","title":"Human-Informed Speakers and Interpreters Analysis in the WAW Corpus and an Automatic Method for Calculating Interpreters' D\\'ecalage","abstract":"This article presents a multi-faceted analysis of a subset of interpreted conference speeches from the WAW corpus for the English-Arabic language pair. We analyze several speakers and interpreters variables via manual annotation and automatic methods. We propose a new automatic method for calculating interpreters' d\u00e9calage (ear-voice span) based on Automatic Speech Recognition (ASR) and automatic alignment of named entities and content words between speaker and interpreter. The method is evaluated by two human annotators who have expertise in interpreting and Interpreting Studies and shows highly satisfactory results, accompanied with a high inter-annotator agreement. We provide insights about the relations of speakers' variables, interpreters' variables and d\u00e9calage and discuss them from Interpreting Studies and interpreting practice point of view. 
We had interesting findings about interpreters' behavior which need to be extended to a large number of conference sessions in our future research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the HIT-IT 2019 reviewers for their comments and Katsiaryna Panasuyk (A3) for her feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"du-etal-2019-extracting","url":"https:\/\/aclanthology.org\/P19-1087.pdf","title":"Extracting Symptoms and their Status from Clinical Conversations","abstract":"This paper describes novel models tailored for a new application, that of extracting the symptoms mentioned in clinical conversations along with their status. Lack of any publicly available corpus in this privacy-sensitive domain led us to develop our own corpus, consisting of about 3K conversations annotated by professional medical scribes. We propose two novel deep learning approaches to infer the symptom names and their status: (1) a new hierarchical span-attribute tagging (SA-T) model, trained using curriculum learning, and (2) a variant of sequence-to-sequence model which decodes the symptoms and their status from a few speaker turns within a sliding window over the conversation. This task stems from a realistic application of assisting medical providers in capturing symptoms mentioned by patients from their clinical conversations. To reflect this application, we define multiple metrics. From inter-rater agreement, we find that the task is inherently difficult. We conduct comprehensive evaluations on several contrasting conditions and observe that the performance of the models ranges from an F-score of 0.5 to 0.8 depending on the condition. Our analysis not only reveals the inherent challenges of the task, but also provides useful directions to improve the models.","label_nlp4sg":1,"task":["Extracting Symptoms"],"method":["corpus","deep learning","hierarchical span - attribute tagging","SA - T","curriculum learning","sequence - to - sequence model"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work would not have been possible without the help of a number of colleagues, including Gang Li, Mingqiu Wang, Laurent El Shafey, Hagen Soltau, Patrick Nguyen, Nina Gonzales, Diana Jaunzeikare, Philip Chung, Ashley Robson Domin, Lauren Keyes, Alvin Rajkomar, Justin Stuart Paul, Katherine Chou, Chris Co, Claire Cui, and Kyle Scholz.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"biran-etal-2016-mining","url":"https:\/\/aclanthology.org\/P16-1180.pdf","title":"Mining Paraphrasal Typed Templates from a Plain Text Corpus","abstract":"Finding paraphrases in text is an important task with implications for generation, summarization and question answering, among other applications. Of particular interest to those applications is the specific formulation of the task where the paraphrases are templated, which provides an easy way to lexicalize one message in multiple ways by simply plugging in the relevant entities. Previous work has focused on mining paraphrases from parallel and comparable corpora, or mining very short sub-sentence synonyms and paraphrases.
In this paper we present an approach which combines distributional and KB-driven methods to allow robust mining of sentence-level paraphrasal templates, utilizing a rich type system for the slots, from a plain text corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"morio-etal-2020-towards","url":"https:\/\/aclanthology.org\/2020.acl-main.298.pdf","title":"Towards Better Non-Tree Argument Mining: Proposition-Level Biaffine Parsing with Task-Specific Parameterization","abstract":"State-of-the-art argument mining studies have advanced the techniques for predicting argument structures. However, the technology for capturing non-tree-structured arguments is still in its infancy. In this paper, we focus on non-tree argument mining with a neural model. We jointly predict proposition types and edges between propositions. Our proposed model incorporates (i) task-specific parameterization (TSP) that effectively encodes a sequence of propositions and (ii) a proposition-level biaffine attention (PLBA) that can predict a non-tree argument consisting of edges. Experimental results show that both TSP and PLBA boost edge prediction performance compared to baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We appreciate Prof. Dr. Naoaki Okazaki at Tokyo Institute of Technology for his helpful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"al-maskari-sanderson-2006-effect","url":"https:\/\/aclanthology.org\/W06-1902.pdf","title":"The Effect of Machine Translation on the Performance of Arabic-English QA System","abstract":"The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system was affected by the performance of Machine Translation (MT) based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system. It was able to answer 42.6% of the questions correctly in a monolingual run. These questions were then translated manually from English into Arabic and back into English using an MT system, and then reapplied to the QA system. The system was able to answer 10.2% of the translated questions. An analysis of what sort of translation error affected which questions was conducted, concluding that factoid type questions are less prone to translation error than others.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank EU FP6 project BRICKS (IST-2002-2.3.1.12) and Ministry of Manpower, Oman, for partly funding this study.
Thanks are also due to Mark Greenwood for helping us with access to his AnswerFinder system.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cheng-kartsaklis-2015-syntax","url":"https:\/\/aclanthology.org\/D15-1177.pdf","title":"Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning","abstract":"Deep compositional models of meaning acting on distributional representations of words in order to produce vectors of larger text constituents are evolving to a popular area of NLP research. We detail a compositional distributional framework based on a rich form of word embeddings that aims at facilitating the interactions between words in the context of a sentence. Embeddings and composition layers are jointly learned against a generic objective that enhances the vectors with syntactic information from the surrounding context. Furthermore, each word is associated with a number of senses, the most plausible of which is selected dynamically during the composition process. We evaluate the produced vectors qualitatively and quantitatively with positive results. At the sentence level, the effectiveness of the framework is demonstrated on the MSRPar task, for which we report results within the state-of-the-art range.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the three anonymous reviewers for their useful comments, as well as Nal Kalchbrenner and Ed Grefenstette for early discussions and suggestions on the paper, and Simon \u0160uster for comments on the final draft. Dimitri Kartsaklis gratefully acknowledges financial support by AFOSR.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"park-2006-modeling","url":"https:\/\/aclanthology.org\/P06-3005.pdf","title":"Modeling Human Sentence Processing Data with a Statistical Parts-of-Speech Tagger","abstract":"It has previously been assumed in the psycholinguistic literature that finite-state models of language are crucially limited in their explanatory power by the locality of the probability distribution and the narrow scope of information used by the model. We show that a simple computational model (a bigram part-of-speech tagger based on the design used by Corley and Crocker (2000)) makes correct predictions on processing difficulty observed in a wide range of empirical sentence processing data. We use two modes of evaluation: one that relies on comparison with a control sentence, paralleling practice in human studies; another that measures probability drop in the disambiguating region of the sentence. Both are surprisingly good indicators of the processing difficulty of garden-path sentences. The sentences tested are drawn from published sources and systematically explore five different types of ambiguity: previous studies have been narrower in scope and smaller in scale. We do not deny the limitations of finite-state models, but argue that our results show that their usefulness has been underestimated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project was supported by the Cognitive Science Summer 2004 Research Award at the Ohio State University.
We acknowledge support from NSF grant IIS 0347799.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ide-romary-2001-common","url":"https:\/\/aclanthology.org\/P01-1040.pdf","title":"A Common Framework for Syntactic Annotation","abstract":"It is widely recognized that the proliferation of annotation schemes runs counter to the need to re-use language resources, and that standards for linguistic annotation are becoming increasingly mandatory. To answer this need, we have developed a representation framework comprised of an abstract model for a variety of different annotation types (e.g., morpho-syntactic tagging, syntactic annotation, co-reference annotation, etc.), which can be instantiated in different ways depending on the annotator's approach and goals. In this paper we provide an overview of our representation framework and demonstrate its applicability to syntactic annotation. We show how the framework can contribute to comparative evaluation and merging of parser output and diverse syntactic annotation schemes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gibbon-1987-finite-string","url":"https:\/\/aclanthology.org\/J87-3011.pdf","title":"The Finite String Newsletter: SITE REPORT: LINGUISTICS IN GERMANY","abstract":"A recent dramatic increase in activities in computational linguistics in the German Federal Republic prompts this note. In particular, considerable attention has been focused on the major event in this field during the last few years, the three-week SiC Summer School (\"Sprache im Computerzeitalter\" --\"Language in the Computer Age\") organized in Munich in September 1986 by the German Linguistics Society (Deutsche Gesellschaft fuer Sprachwissenschaft, DGfS), and attended by 260 linguists, including students, faculty members, and representatives of industrial research and development departments.\nA mark of the importance attributed to the field, and of the impact of this Summer School, is the inauguration of a \"Sektion Computerlinguistik\" in the DGfS during its Annual Meeting in March 1987, with the aim of providing an official German partner in computational linguistics for internationally oriented research, under the auspices of the official representative body for linguistics in Germany. This initiative was supported by 37 eminent theoretical and computational linguists on the staffs of German universities and industrial R&D departments.
The Society considered the time to be ripe for such a step, which would provide official representation in a linguistic context for researchers interested in computational linguistic questions but working in other fields, such as applied linguistics (by the \"Gesellschaft fuer Angewandte Linguistik\"), phonetic speech signal processing, or language data processing (in the \"Gesellschaft fuer linguistische Datenverarbeitung\").","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2017-instance","url":"https:\/\/aclanthology.org\/D17-1155.pdf","title":"Instance Weighting for Neural Machine Translation Domain Adaptation","abstract":"Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to be applied to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting technologies, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT English-German\/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Dr. Andrew Finch, Dr. Atsushi Fujita and three anonymous reviewers for their insightful comments and suggestions. This work is partially supported by the program \"Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology\" of MIC, Japan.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2021-autoaspect","url":"https:\/\/aclanthology.org\/2021.law-1.4.pdf","title":"AutoAspect: Automatic Annotation of Tense and Aspect for Uniform Meaning Representations","abstract":"We present AutoAspect, a novel, rule-based annotation tool for labeling tense and aspect. The pilot version annotates English data. The aspect labels are designed specifically for Uniform Meaning Representations (UMR), an annotation schema that aims to encode crosslingual semantic information. The annotation tool combines syntactic and semantic cues to assign aspects on a sentence-by-sentence basis, following a sequence of rules that each output a UMR aspect. Identified events proceed through the sequence until they are assigned an aspect. We achieve a recall of 76.17% for identifying UMR events and an accuracy of 62.57% on all identified events, with high precision values for 2 of the aspect labels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of NSF 1764048 RI: Medium: Collaborative Research: Developing a Uniform Meaning Representation for Natural Language Processing. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or the U.S. government.
We thank James Gung and Ghazaleh Kazeminejad for their valuable assistance in teaching us how to use the SemParse tool. We thank Skatje Myers for assisting us with the ClearTAC tool.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guo-etal-2021-pre","url":"https:\/\/aclanthology.org\/2021.smm4h-1.8.pdf","title":"Pre-trained Transformer-based Classification and Span Detection Models for Social Media Health Applications","abstract":"This paper describes our approach for six classification tasks (Tasks 1a, 3a, 3b, 4 and 5) and one span detection task (Task 1b) from the Social Media Mining for Health (SMM4H) 2021 shared tasks. We developed two separate systems for classification and span detection, both based on pre-trained Transformer-based models. In addition, we applied oversampling and classifier ensembling in the classification tasks. The results of our submissions are over the median scores in all tasks except for Task 1a. Furthermore, our model achieved first place in Task 4 and obtained a 7% higher F1-score than the median in Task 1b.","label_nlp4sg":1,"task":["Classification","Span Detection","Health Applications"],"method":["pre - trained Transformer - based models","ensembling"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yauney-mimno-2021-comparing","url":"https:\/\/aclanthology.org\/2021.emnlp-main.449.pdf","title":"Comparing Text Representations: A Theory-Driven Approach","abstract":"Much of the progress in contemporary NLP has come from learning representations, such as masked language model (MLM) contextual embeddings, that turn challenging problems into simple classification tasks. But how do we quantify and explain this effect? We adapt general tools from computational learning theory to fit the specific characteristics of text datasets and present a method to evaluate the compatibility between representations and tasks. Even though many tasks can be easily solved with simple bag-of-words (BOW) representations, BOW does poorly on hard natural language inference tasks. For one such task we find that BOW cannot distinguish between real and randomized labelings, while pre-trained MLM representations show 72x greater distinction between real and random labelings than BOW. This method provides a calibrated, quantitative measure of the difficulty of a classification-based NLP task, enabling comparisons between representations without requiring empirical evaluations that may be sensitive to initializations and hyperparameters. The method provides a fresh perspective on the patterns in a dataset and the alignment of those patterns with specific labels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Maria Antoniak, Katherine Lee, Rosamond Thalken, Melanie Walsh, and Matthew Wilkens for thoughtful feedback. We also thank Benjamin Hoffman for fruitful discussions of analytic approaches.
This work was supported by NSF #1652536.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2010-chunk","url":"https:\/\/aclanthology.org\/2010.eamt-1.27.pdf","title":"Chunk-Based EBMT","abstract":"Corpus driven machine translation approaches such as Phrase-Based Statistical Machine Translation and Example-Based Machine Translation have been successful by using word alignment to find translation fragments for matched source parts in a bilingual training corpus. However, they still cannot properly deal with systematic translation for insertion or deletion words between two distant languages. In this work, we used syntactic chunks as translation units to alleviate this problem, improve alignments and show improvement in BLEU for Korean to English and Chinese to English translation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chakraborty-etal-2011-shared","url":"https:\/\/aclanthology.org\/W11-1307.pdf","title":"Shared Task System Description: Measuring the Compositionality of Bigrams using Statistical Methodologies","abstract":"The measurement of relative compositionality of bigrams is crucial to identify Multi-word Expressions (MWEs) in Natural Language Processing (NLP) tasks. The article presents the experiments carried out as part of the participation in the shared task 'Distributional Semantics and Compositionality (DiSCo)' organized as part of the DiSCo workshop in ACL-HLT 2011. The experiments deal with various collocation based statistical approaches to compute the relative compositionality of three types of bigram phrases (Adjective-Noun, Verbsubject and Verb-object combinations). The experimental results in terms of both fine-grained and coarse-grained compositionality scores have been evaluated with the human annotated gold standard data. Reasonable results have been obtained in terms of average point difference and coarse precision.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work has been carried out with support from \"Indian Language to Indian Language Machine Translation (ILMT) System Phrase II\", funded by DIT, Govt. of India.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gehrmann-etal-2019-improving","url":"https:\/\/aclanthology.org\/N19-1168.pdf","title":"Improving Human Text Comprehension through Semi-Markov CRF-based Neural Section Title Generation","abstract":"Titles of short sections within long documents support readers by guiding their focus towards relevant passages and by providing anchorpoints that help to understand the progression of the document. The positive effects of section titles are even more pronounced when measured on readers with less developed reading abilities, for example in communities with limited labeled text resources. We, therefore, aim to develop techniques to generate section titles in low-resource environments. 
In particular, we present an extractive pipeline for section title generation by first selecting the most salient sentence and then applying deletion-based compression. Our compression approach is based on a Semi-Markov Conditional Random Field that leverages unsupervised word-representations such as ELMo or BERT, eliminating the need for a complex encoder-decoder architecture. The results show that this approach leads to competitive performance with sequence-to-sequence models with high resources, while strongly outperforming them with low resources. In a human-subjects study across subjects with varying reading abilities, we find that our section titles improve the speed of completing comprehension tasks while retaining similar accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful for the helpful feedback from the three anonymous reviewers. We additionally thank Anthony Colas and Sean MacAvaney for the multiple rounds of feedback on the ideas presented in this paper.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nguyen-2019-question","url":"https:\/\/aclanthology.org\/P19-2008.pdf","title":"Question Answering in the Biomedical Domain","abstract":"Question answering techniques have mainly been investigated in open domains. However, there are particular challenges in extending these open-domain techniques into the biomedical domain. Question answering focusing on patients is less studied. We find that there are some challenges in patient question answering such as limited annotated data, lexical gap and quality of answer spans. We aim to address some of these gaps by extending and developing upon the literature to design a question answering system that can decide on the most appropriate answers for patients attempting to self-diagnose while including the ability to abstain from answering when confidence is low.","label_nlp4sg":1,"task":["Question Answering"],"method":["question answering system"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"I thank my supervisors, Dr Sarvnaz Karimi and Dr Zhenchang Xing, for providing invaluable insight into the writing of this proposal. This research is supported by the Australian Research Training Program and the CSIRO Postgraduate Scholarship.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pradet-etal-2014-wonef","url":"https:\/\/aclanthology.org\/W14-0105.pdf","title":"WoNeF, an improved, expanded and evaluated automatic French translation of WordNet","abstract":"Automatic translations of WordNet have been tried for many different target languages. JAWS is such a translation for French nouns using bilingual dictionaries and a syntactic language model. We improve its precision and coverage, complete it with translations of other parts of speech and enhance its evaluation method. The result is named WoNeF.
We produce three final translations balanced between precision (up to 93%) and coverage (up to 109 447 (literal, synset) pairs).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zaghouani-etal-2010-revised","url":"https:\/\/aclanthology.org\/W10-1836.pdf","title":"The Revised Arabic PropBank","abstract":"The revised Arabic PropBank (APB) reflects a number of changes to the data and the process of PropBanking. Several changes stem from Treebank revisions. An automatic process was put in place to map existing annotation to the new trees. We have revised the original 493 Frame Files from the Pilot APB and added 1462 new files for a total of 1955 Frame Files with 2446 framesets. In addition to a heightened attention to sense distinctions this cycle includes a greater attempt to address complicated predicates such as light verb constructions and multi-word expressions. New tools facilitate the data tagging and also simplify frame creation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge a grant from the Defense Advanced Research Projects Agency (DARPA\/IPTO) under the GALE program, DARPA\/CMO Contract No. HR0011-06-C-0022, subcontract from BBN, Inc. We also thank Abdel-Aati Hawwary and Maha Saliba Foster and our annotators for their invaluable contributions to this project.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"widdows-cederberg-2003-monolingual","url":"https:\/\/aclanthology.org\/N03-4016.pdf","title":"Monolingual and Bilingual Concept Visualization from Corpora","abstract":"As well as identifying relevant information, a successful information management system must be able to present its findings in terms which are familiar to the user, which is especially challenging when the incoming information is in a foreign language (Levow et al., 2001). We demonstrate techniques which attempt to address this challenge by placing terms in an abstract 'information space' based on their occurrences in text corpora, and then allowing a user to visualize local regions of this information space. Words are plotted in a 2-dimensional picture so that related words are close together and whole classes of similar words occur in recognizable clusters which sometimes clearly signify a particular meaning. As well as giving a clear view of which concepts are related in a particular document collection, this technique also helps a user to interpret unknown words.\nThe main technique we will demonstrate is planar projection of word-vectors from a vector space built using Latent Semantic Analysis (LSA) (Landauer and Dumais, 1997; Sch\u00fctze, 1998), a method which can be applied multilingually if translated corpora are available for training. Following the method of Sch\u00fctze (1998), we assign each word 1000 coordinates based on the number of times that word occurs in a 15 word window with one of 1000 'content-bearing words', chosen by frequency, and the number of coordinates is reduced to 100 'latent dimensions' using LSA. This is still far too many words and too many dimensions to be visualized at once.
To produce a meaningful diagram of results related to a particular word or query, we perform two extra steps. Firstly, we restrict attention to a given number of closely related words (determined by cosine similarity of word vectors), selecting a local group of up to 100 words and their word vectors for deeper analysis. A second round of Latent Semantic Analysis is then performed on this restricted set, giving the most significant directions to describe this local information. The 2 most significant axes determine the plane which best represents the data. (This process can be regarded as a higher-dimensional analogue of finding the line of best fit for a normal 2-dimensional graph.) The resulting diagrams give a summary of the areas of meaning in which a word is actually used in a particular document collection. This is particularly effective for visualizing words in more than one language. This can be achieved by building a single latent semantic vector space incorporating words from two languages using a parallel corpus (Littman et al., 1998; Widdows et al., 2002b). We will demonstrate a system which does this for English and German terms in the medical domain. The system is trained on a corpus of 10,000 abstracts from German medical documents available with their English translations. In the demonstration, users submit a query statement consisting of any combination of words in English or German, and are then able to visualize the words most closely related to this query in a 2-dimensional plot of the latent semantic space.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Research Collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University, and by EC\/NSF grant IST-1999-11438 for the MUCHMORE project.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"polat-saraclar-2020-unsupervised","url":"https:\/\/aclanthology.org\/2020.signlang-1.31.pdf","title":"Unsupervised Term Discovery for Continuous Sign Language","abstract":"Most sign language recognition (SLR) systems rely on supervision for training, and available annotated sign language resources are scarce due to the difficulty of manual labeling. Unsupervised discovery of lexical units would facilitate the annotation process and thus lead to better SLR systems. Inspired by unsupervised spoken term discovery in the speech processing field, we investigate whether a similar approach can be applied in sign language to discover repeating lexical units. We adapt an algorithm that is designed for spoken term discovery by using hand shape and pose features instead of speech features. The experiments are run on a large-scale continuous sign corpus and the performance is evaluated using gloss-level annotations.
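A minimal sketch of the two-step local visualization described in the concept-visualization abstract above: restrict to the cosine-nearest neighbours of a query word, run a second SVD on that local set, and keep the two most significant axes. The random `vectors` map is a stand-in for real LSA vectors:

```python
import numpy as np

# Toy stand-in for a {word: 100-dim LSA vector} space.
rng = np.random.default_rng(0)
vectors = {f"word{i}": rng.normal(size=100) for i in range(500)}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def local_plane(query, vectors, k=100):
    q = vectors[query]
    neighbours = sorted(vectors, key=lambda w: -cosine(vectors[w], q))[:k]
    local = np.array([vectors[w] for w in neighbours])
    local = local - local.mean(axis=0)        # centre before the local SVD
    _, _, vt = np.linalg.svd(local, full_matrices=False)
    coords = local @ vt[:2].T                 # the 2 most significant axes
    return neighbours, coords

words, xy = local_plane("word0", vectors)
for w, (x, y) in zip(words[:5], xy[:5]):
    print(f"{w}: ({x:+.2f}, {y:+.2f})")
```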
This work introduces a new task for sign language processing that has not been addressed before.","label_nlp4sg":1,"task":["Continuous Sign Language"],"method":["Unsupervised Term Discovery"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ju-etal-2022-logic","url":"https:\/\/aclanthology.org\/2022.acl-long.407.pdf","title":"Logic Traps in Evaluating Attribution Scores","abstract":"Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict. This goal is usually approached with attribution methods, which assess the influence of features on model predictions. As an explanation method, the evaluation criterion for attribution methods is how accurately they reflect the actual reasoning process of the model (faithfulness). Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments. However, some crucial logic traps in these evaluation methods are ignored in most works, causing inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps in these methods. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to increase attention on the inaccurate evaluation of attribution scores. Moreover, with this paper, we suggest that the community stop focusing on improving performance under unreliable evaluation systems and start working to reduce the impact of the logic traps we describe.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Key Research and Development Program of China (No.2020AAA0106400), the National Natural Science Foundation of China (No.61922085, No.61976211, No.61906196). This work is also supported by the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006), the independent research project of National Laboratory of Pattern Recognition and in part by the Youth Innovation Promotion Association CAS.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gordon-1997-tm","url":"https:\/\/aclanthology.org\/1997.tc-1.10.pdf","title":"The TM Revolution - What does it really mean?","abstract":"Time is the dimension we can least control. In the translation industry time is of crucial importance to the end users and therefore to all the links in the translation industry chain. The old adage that 'Time is Money' is still true-but it is not the whole truth. In the software industry, for example, the timing of the release of a product in a local market may ensure its complete success or its failure. Any loss of impetus is exploited by competitors. The sales lost are not a one-off loss but have serious repercussions for lost repeat sales of additional licences, upgrades and add-ons, so lost market share in the early stages may result in the failure of a software package to become established on any significant scale. Lost sales are not just delayed-they are truly lost.
Hence the demand for simultaneous release of local language versions of software, documentation and on-line help. The software industry is notorious for product launch delays, so when software is finally completed in the original language and ready for translation, the pressure on the translation function is already extreme. Consequently, it is no surprise to discover the software localisation industry has been one of the most enthusiastic in embracing Translation Memory technology as a means of ensuring consistency and speeding the process. The latest release of any software is almost invariably a development of the previous version, so much of the material for translation may be unchanged-or appear to be. The traditional cut and paste techniques, whether electronic or literally cut and paste, were always time-consuming, uncertain and dangerous because there was no guarantee that the translator would discover all the changes. Translation Memory systems not only recognise every change, however small, but speed up the process by dropping in unchanged text automatically (or semi-automatically if preferred) and also alert the translator to similar text for editing. There are gains in speed and consistency-and the translator is secure in the knowledge that nothing has been missed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"damljanovic-etal-2012-applying","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/628_Paper.pdf","title":"Applying Random Indexing to Structured Data to Find Contextually Similar Words","abstract":"Language resources extracted from structured data (e.g. Linked Open Data) have already been used in various scenarios to improve conventional Natural Language Processing techniques. The meanings of words and the relations between them are made more explicit in RDF graphs, in comparison to human-readable text, and hence have a great potential to improve legacy applications. In this paper, we describe an approach that can be used to extend or clarify the semantic meaning of a word by constructing a list of contextually related terms. Our approach is based on exploiting the structure inherent in an RDF graph and then applying the methods from statistical semantics, and in particular, Random Indexing, in order to discover contextually related terms. We evaluate our approach in the domain of life science using the dataset generated with the help of domain experts from a large pharmaceutical company (AstraZeneca). They were involved in two phases: firstly, to generate a set of keywords of interest to them, and secondly to judge the set of generated contextually similar words for each keyword of interest. We compare our proposed approach, exploiting the semantic graph, with the same method applied on the human readable text extracted from the graph.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the creators of the SemanticVectors library which is used in the experiments reported in this paper. This research was supported by the EU-funded LarKC (FP7-215535) project.
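A minimal sketch of the Translation Memory lookup behaviour described in the TM abstract above: exact matches are dropped in automatically and near matches are flagged for editing. The memory contents and the 0.8 fuzzy threshold are illustrative choices, not any particular product's behaviour:

```python
from difflib import SequenceMatcher

# Toy translation memory: source segment -> stored translation.
memory = {
    "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
    "The file could not be opened.": "Le fichier n'a pas pu être ouvert.",
}

def tm_lookup(segment, memory, fuzzy_threshold=0.8):
    if segment in memory:                      # unchanged text: drop in automatically
        return "exact", memory[segment]
    best, score = None, 0.0
    for source, target in memory.items():      # otherwise look for similar text
        s = SequenceMatcher(None, segment, source).ratio()
        if s > score:
            best, score = (source, target), s
    if score >= fuzzy_threshold:               # alert the translator for editing
        return f"fuzzy ({score:.0%})", best
    return "no match", None

print(tm_lookup("Click the Save button.", memory))
print(tm_lookup("Click the Cancel button.", memory))
```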
Mihai Lupu was partially supported by the PROMISE NoE (FP7-258191).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bagheri-garakani-etal-2022-improving","url":"https:\/\/aclanthology.org\/2022.ecnlp-1.6.pdf","title":"Improving Relevance Quality in Product Search using High-Precision Query-Product Semantic Similarity","abstract":"Ensuring relevance quality in product search is a critical task as it impacts the customer's ability to find intended products in the short term as well as the general perception and trust of the e-commerce system in the long term. In this work we leverage a high-precision cross-encoder BERT model for semantic similarity between customer query and products and survey its effectiveness for three ranking applications where offline-generated scores could be used: (1) as an offline metric for estimating relevance quality impact, (2) as a re-ranking feature covering head\/torso queries, and (3) as a training objective for optimization. We present results on the effectiveness of this strategy for the large e-commerce setting, which has general applicability for choice of other high-precision models and tasks in ranking.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"federico-2016-mt","url":"https:\/\/aclanthology.org\/2016.amta-users.3.pdf","title":"MT Adaptation from TMs in ModernMT","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"king-cook-2017-supervised","url":"https:\/\/aclanthology.org\/W17-1906.pdf","title":"Supervised and unsupervised approaches to measuring usage similarity","abstract":"Usage similarity (USim) is an approach to determining word meaning in context that does not rely on a sense inventory. Instead, pairs of usages of a target lemma are rated on a scale. In this paper we propose unsupervised approaches to USim based on embeddings for words, contexts, and sentences, and achieve state-of-the-art results over two USim datasets. We further consider supervised approaches to USim, and find that although they outperform unsupervised approaches, they are unable to generalize to lemmas that are unseen in the training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was financially supported by the Natural Sciences and Engineering Research Council of Canada, the New Brunswick Innovation Foundation, ACENET, and the University of New Brunswick.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"prost-2006-numbat","url":"https:\/\/aclanthology.org\/W06-0403.pdf","title":"Numbat: Abolishing Privileges when Licensing New Constituents in Constraint-Oriented Parsing","abstract":"The constraint-oriented approaches to language processing step back from the generative theory and make it possible, in theory, to deal with all types of linguistic relationships (e.g.
dependency, linear precedence or immediate dominance) with the same importance when parsing an input utterance. Yet in practice, all implemented constraint-oriented parsing strategies still need to discriminate between \"important\" and \"not-so-important\" types of relations during the parsing process. In this paper we introduce a new constraint-oriented parsing strategy based on Property Grammars, which overcomes this drawback and grants the same importance to all types of relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rudnicky-etal-1989-evaluating","url":"https:\/\/aclanthology.org\/H89-2021.pdf","title":"Evaluating spoken language interaction","abstract":"To study the spoken language interface in the context of a complex problem-solving task, a group of users were asked to perform a spreadsheet task, alternating voice and keyboard input. A total of 40 tasks were performed by each participant, the first thirty in a group (over several days), the remaining ones a month later. The voice spreadsheet program used in this study was extensively instrumented to provide detailed information about the components of the interaction. These data, as well as analysis of the participants' utterances and recognizer output, provide a fairly detailed picture of spoken language interaction. Although task completion by voice took longer than by keyboard, analysis shows that users would be able to perform the spreadsheet task faster by voice, if two key criteria could be met: recognition occurs in real-time, and the error rate is sufficiently low. This initial experience with a spoken language system also allows us to identify several metrics, beyond those traditionally associated with speech recognition, that can be used to characterize system performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"A number of people have contributed to the work described in this paper. We would like to thank Robert Brennan who did the initial implementation of the voice spreadsheet program and Takeema Hoy who produced the bulk of the transcriptions used in our performance analyses.The research described in this paper was sponsored by the Defense Advanced Research Projects Agency (DOD), Arpa Order No. 5167, monitored by SPAWAR under contract N00039-85-C-0163. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"park-2014-phonological","url":"https:\/\/aclanthology.org\/Y14-1009.pdf","title":"Phonological Suppression of Anaphoric Wh-expressions in English and Korean","abstract":"This paper follows the lead of Chung (2013), examining the phonological suppression of the wh-expression in English and Korean. We argue that the wh-expression itself cannot undergo ellipsis\/deletion\/dropping, as it carries information focus. However, it can do so when, in anaphoricity with the preceding token of the wh-expression, it changes into an E-type or sloppy-identity pronoun.
This vehicle change from the wh-expression to a pronoun accompanies the loss of the wh-feature inherent in the wh-expression. In a certain structural context such as a quiz question, the interrogative [+wh] complementizer does not require the presence of a wh-expression, and the expression can thus be optionally dropped.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"clements-ohara-1997-globalink","url":"https:\/\/aclanthology.org\/1997.mtsummit-systems.2.pdf","title":"Globalink Power Translator 6 -- Barcelona Technology","abstract":"Globalink Power Translator 6 is the latest commercial MT system based on the company's Barcelona\u2122 technology. Barcelona\u2122 uses a rule-based, transfer system, with a proprietary rule editor that is accessible to users of the Power Translator Pro series. The software translates business documents, e-mail and Web pages. The program is targeted at end users in small to medium-sized businesses who seek a fast and cost-effective translation tool. The five language pairs currently available are: English to and from Spanish, French, German, Italian and Portuguese.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"korhonen-2002-assigning","url":"https:\/\/aclanthology.org\/W02-1108.pdf","title":"Assigning Verbs to Semantic Classes via WordNet","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johnson-1998-finite-state","url":"https:\/\/aclanthology.org\/P98-1101.pdf","title":"Finite-state Approximation of Constraint-based Grammars using Left-corner Grammar Transforms","abstract":"This paper describes how to construct a finite-state machine (FSM) approximating a 'unification-based' grammar using a left-corner grammar transform.
The approximation is presented as a series of grammar transforms, and is exact for left-linear and right-linear CFGs, and for trees up to a user-specified depth of center-embedding.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sato-1993-example","url":"https:\/\/aclanthology.org\/1993.tmi-1.5.pdf","title":"Example-Based Translation of Technical Terms","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"doddington-1989-initial","url":"https:\/\/aclanthology.org\/H89-1046.pdf","title":"Initial Draft Guidelines for the Development of the Next-Generation Spoken Language Systems Speech Research Database","abstract":"To best serve the strategic needs of the DARPA SLS research program by creating the next-generation speech database(s).","label_nlp4sg":1,"task":["Development of the Next - Generation Spoken Language Systems Speech Research Database"],"method":["Guidelines"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sakai-nagao-1965-sentence","url":"https:\/\/aclanthology.org\/C65-1022.pdf","title":"Sentence Generation by Semantic Concordance","abstract":"Generation of an English sentence is realized in the following three steps. First, the generation of a kernel sentence by phrase structure rules; second, the application of transformational rules to the kernel sentence; and finally the completion of a sentence by the morphophonemic modifications. At the first stage of generating the kernel sentence, the semantics of words are fully utilized. The method is such that a pair of words in the generation process (subject noun and predicate verb, verb and object or complement, adjective and modified noun etc.) is selected in accordance with the semantic categories which are attached to each word in the word dictionary. The semantic categories are determined by considering both the meaning of words themselves and also the functioning of words in sentences. At the stage of transformational rules, a sentence is considered not as a simple string but as one having an internal tree structure, and the transformational rules are applied to this tree structure. For these two stages the generation process is formalized strictly and is realized in a computer program. We have presented in relation to the transformational rules a method of sentence generation not from the axiom (from the top of the tree) but from any point, from which the whole tree is constructed.
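A minimal sketch of the left-corner grammar transform mentioned in the finite-state approximation abstract above, in the classic Rosenkrantz and Lewis schema; the toy grammar and the hyphenated naming of the new nonterminals are illustrative choices:

```python
# Left-corner transform of a toy CFG. For a grammar with rules C -> X beta:
#   A      -> a A-a        (shift a terminal left corner)
#   A-X    -> beta A-C     (extend a recognized left corner X using rule C -> X beta)
#   A-A    -> epsilon      (the left corner has grown into the goal category)

rules = [("S", ("NP", "VP")), ("NP", ("det", "n")), ("VP", ("v", "NP"))]
nonterminals = {lhs for lhs, _ in rules}
terminals = {x for _, rhs in rules for x in rhs if x not in nonterminals}

lc_rules = []
for A in nonterminals:
    for a in terminals:
        lc_rules.append((A, (a, f"{A}-{a}")))
    for C, (X, *beta) in rules:
        lc_rules.append((f"{A}-{X}", (*beta, f"{A}-{C}")))
    lc_rules.append((f"{A}-{A}", ()))

for lhs, rhs in sorted(lc_rules):
    print(lhs, "->", " ".join(rhs) or "epsilon")
```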
We have also proposed that the morphophonemic rules can be presented as a kind of operator operating on words in the neighbourhood of a generated string.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1965,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maegaard-etal-2008-medar","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/917_paper.pdf","title":"MEDAR: Collaboration between European and Mediterranean Arabic Partners to Support the Development of Language Technology for Arabic","abstract":"After the successful completion of the NEMLAR project 2003-2005, a new opportunity for a project was opened by the European Commission, and a group of largely the same partners is now executing the MEDAR project. MEDAR will be updating the surveys and BLARK for Arabic already made, and will then focus on machine translation (and other tools for translation) and information retrieval with a focus on language resources, tools and evaluation for these applications. A very important part of the MEDAR project is to reinforce and extend the NEMLAR network and to create a cooperation roadmap for Human Language Technologies for Arabic. It is expected that the cooperation roadmap will attract wide attention from other parties and that it can help create a larger platform for collaborative projects. Finally, the project will focus on dissemination of knowledge about existing resources and tools, as well as actors and activities; this will happen through newsletter, website and an international conference which will follow up on the Cairo conference of 2004. Dissemination to user communities will also be important, e.g. through participation in translators' conferences. The goal of these activities is to create a stronger and lasting collaboration between EU countries and Arabic speaking countries.","label_nlp4sg":1,"task":["Collaboration"],"method":[],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"We want to thank the European Commission for the support to this important activity. This paper builds on work done in NEMLAR, as well as the preparation and the first part of MEDAR. MEDAR has 15 partners, and we want to acknowledge the contribution of all of them: ","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"eppler-2013-dependency","url":"https:\/\/aclanthology.org\/W13-3710.pdf","title":"Dependency Distance and Bilingual Language Use: Evidence from German\/English and Chinese\/English Data","abstract":"Closely related words tend to be close together in monolingual language use. This paper suggests that this is different in bilingual language use. The Distance Hypothesis (DH) proposes that long dependency distances between syntactically related units facilitate bilingual code-switching. We test the DH on a 9,023-word German\/English and a 19,766-word Chinese\/English corpus. Both corpora support the DH in that they present longer mixed dependencies than monolingual ones. Selected major dependency types (subject, object, adjunct) also have longer dependency distances when the head word and its dependent are from different languages.
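A minimal sketch of the dependency-distance measure used in the abstract above: the linear distance between a head and its dependent, split by whether the two words come from the same language. The token format `(position, lang, head_position)` is an illustrative stand-in for a real annotated corpus:

```python
# Toy mixed-language sentence: position, language tag, head position (0 = root).
tokens = [
    (1, "de", 2), (2, "de", 0), (3, "en", 2), (4, "en", 3),
]

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

lang_at = {pos: lang for pos, lang, _ in tokens}
mono, mixed = [], []
for pos, lang, head in tokens:
    if head == 0:
        continue                               # skip the root attachment
    dist = abs(pos - head)                     # linear dependency distance
    (mixed if lang_at[head] != lang else mono).append(dist)

print("monolingual mean distance:", mean(mono))
print("mixed mean distance:", mean(mixed))
```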
We discuss how processing motivations behind the DH make it a potentially viable motivator for bilingual language use.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vasquez-etal-2018-toward","url":"https:\/\/aclanthology.org\/W18-6018.pdf","title":"Toward Universal Dependencies for Shipibo-Konibo","abstract":"We present an initial version of the Universal Dependencies (UD) treebank for Shipibo-Konibo, the first South American, Amazonian, Panoan and Peruvian language with a resource built under UD. We describe the linguistic aspects of how the tagset was defined and the treebank was annotated; in addition, we present our specific treatment of linguistic units called clitics. Although the treebank is still under development, it allowed us to perform a typological comparison against Spanish, the predominant language in Peru, and dependency syntax parsing experiments in both monolingual and cross-lingual approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the \"Consejo Nacional de Ciencia, Tecnolog\u00eda e Innovaci\u00f3n Tecnol\u00f3gica\" (CONCYTEC, Peru) under the contract 225-2015-FONDECYT. Furthermore, we appreciate the detailed feedback of the anonymous reviewers.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ploux-ji-2003-model","url":"https:\/\/aclanthology.org\/J03-2001.pdf","title":"A Model for Matching Semantic Maps between Languages (French\/English, English\/French)","abstract":"This article describes a spatial model for matching semantic values between two languages, French and English. Based on semantic similarity links, the model constructs a map that represents a word in the source language. Then the algorithm projects the map values onto a space in the target language. The new space abides by the semantic similarity links specific to the second language. Then the two maps are projected onto the same plane in order to detect overlapping values. For instructional purposes, the different steps are presented here using a few examples. The entire set of results is available at the following address: http:\/\/dico.isc.cnrs.fr.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge support of the Agence Universitaire de la Francophonie and the FRANCIL network.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fu-etal-2020-rethinkcws","url":"https:\/\/aclanthology.org\/2020.emnlp-main.457.pdf","title":"RethinkCWS: Is Chinese Word Segmentation a Solved Task?","abstract":"The performance of Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models. In this paper, we take stock of what we have achieved and rethink what's left in the CWS task.
Methodologically, we propose a fine-grained evaluation for existing CWS systems, which not only allows us to diagnose the strengths and weaknesses of existing models (under the in-dataset setting), but enables us to quantify the discrepancy between different criteria and alleviate the negative transfer problem when doing multi-criteria learning. Strategically, despite not aiming to propose a novel model in this paper, our comprehensive experiments on eight models and seven datasets, as well as thorough analysis, point to some promising directions for future research. We make all code publicly available and release an interface that can quickly evaluate and diagnose users' models: https:\/\/github.com\/neulab\/InterpretEval.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Zhenghua Li and Meishan Zhang for their helpful comments and careful proofreading of our manuscript. This work was partially funded by China National Key R&D Program (No. 2017YFB1002104\uff0c2018YFC0831105, 2018YFB1005104), National Natural Science Foundation of China (No. 61976056, 61751201, 61532011), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), Science and Technology Commission of Shanghai Municipality Grant (No.18DZ1201000, 17JC1420200).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wei-etal-2021-shot","url":"https:\/\/aclanthology.org\/2021.naacl-main.434.pdf","title":"Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning","abstract":"Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category. This paper explores data augmentation-a technique particularly suitable for training with limited data-for this few-shot, highly multiclass text classification setting. On four diverse text classification tasks, we find that common data augmentation techniques can improve the performance of triplet networks by up to 3.0% on average. To further boost performance, we present a simple training strategy called curriculum data augmentation, which leverages curriculum learning by first training on only original examples and then introducing augmented data as training progresses. We explore a two-stage and a gradual schedule, and find that, compared with standard single-stage training, curriculum data augmentation trains faster, improves performance, and remains robust to high amounts of noising from augmentation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Ruibo Liu, Weicheng Ma, Jerry Wei, and Chunxiao Zhou for feedback on the manuscript. We also thank Kai Zou for organizational support.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"flekova-etal-2016-exploring","url":"https:\/\/aclanthology.org\/P16-2051.pdf","title":"Exploring Stylistic Variation with Age and Income on Twitter","abstract":"Writing style allows NLP tools to adjust to the traits of an author. In this paper, we explore the relation between stylistic and syntactic features and authors' age and income.
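A minimal sketch of a fine-grained evaluation in the spirit of the CWS-evaluation abstract above: rather than one corpus-level score, accuracy is broken down over attribute buckets (here, gold word length). The data and the bucketing attribute are toy stand-ins:

```python
from collections import defaultdict

# Toy (gold word, predicted word) pairs from a segmenter.
predictions = [("研究", "研究"), ("生命", "生"), ("起源", "起源"), ("的", "的")]

# length -> [number correct, total] buckets.
buckets = defaultdict(lambda: [0, 0])
for gold, pred in predictions:
    bucket = buckets[len(gold)]
    bucket[0] += gold == pred
    bucket[1] += 1

for length, (correct, total) in sorted(buckets.items()):
    print(f"gold length {length}: acc = {correct / total:.2f} ({total} words)")
```

Per-bucket scores like these make it visible where a model fails (e.g., on longer words) even when the aggregate score looks saturated.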
We confirm our hypothesis that for numerous feature types writing style is predictive of income even beyond age. We analyze the predictive power of writing style features in a regression task on two data sets of around 5,000 Twitter users each. Additionally, we use our validated features to study daily variations in writing style of users from distinct income groups. Temporal stylistic patterns not only provide novel psychological insight into user behavior, but are useful for future research and applications in social media.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support from Templeton Religion Trust, grant TRT-0048. We also wish to thank Prof. Iryna Gurevych for supporting the collaboration.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hieu-etal-2020-reintel","url":"https:\/\/aclanthology.org\/2020.vlsp-1.1.pdf","title":"ReINTEL Challenge 2020: Vietnamese Fake News Detection using Ensemble Model with PhoBERT embeddings","abstract":"Along with the increasing traffic of social networks in Vietnam in recent years, the amount of unreliable news has also grown rapidly. As we make decisions based on the information we come across daily, fake news, depending on the severity of the matter, can lead to disastrous consequences. This paper presents our approach for the Fake News Detection on Social Network Sites (SNSs), using an ensemble method with linguistic features extracted using PhoBERT (Nguyen and Nguyen, 2020). Our method achieves an AUC score of 0.9521 and took 1st place on the private test at the 7th International Workshop on Vietnamese Language and Speech Processing (VLSP). For reproducing the result, the code can be found at https:\/\/gitlab.com\/thuan.","label_nlp4sg":1,"task":["Fake News Detection"],"method":["PhoBERT embeddings","ensemble"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"chen-liu-1992-word","url":"https:\/\/aclanthology.org\/C92-1019.pdf","title":"Word Identification for Mandarin Chinese Sentences","abstract":"Chinese sentences are composed of strings of characters without blanks to mark words. However, the basic unit for sentence parsing and understanding is the word. Therefore the first step of processing Chinese sentences is to identify the words. The difficulties of identifying words include (1) the identification of complex words, such as Determinative-Measure, reduplications, derived words etc., (2) the identification of proper names, (3) resolving the ambiguous segmentations. In this paper, we propose possible solutions for the above difficulties. We adopt a matching algorithm with 6 different heuristic rules to resolve the ambiguities and achieve a success rate of 99.77%.
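A minimal sketch of the (forward) maximum-matching heuristic referred to in the word-identification abstract above: repeatedly take the longest dictionary word that prefixes the remaining characters, falling back to a single character. The dictionary is a toy stand-in:

```python
# Forward maximum matching for Chinese word segmentation.
dictionary = {"研究", "研究生", "生命", "命", "的", "起源"}
max_len = max(len(w) for w in dictionary)

def forward_maximum_match(sentence):
    i, words = 0, []
    while i < len(sentence):
        # Try the longest candidate first, shrinking toward one character.
        for j in range(min(len(sentence), i + max_len), i, -1):
            if sentence[i:j] in dictionary or j == i + 1:
                words.append(sentence[i:j])   # single char is the fallback
                i = j
                break
    return words

print(forward_maximum_match("研究生命的起源"))
# -> ['研究生', '命', '的', '起源'], a classic case where greedy matching
#    picks the wrong reading and heuristic disambiguation rules are needed.
```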
The statistical data support the conclusion that the maximal matching algorithm is the most effective heuristic.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"osborne-etal-2016-encoding","url":"https:\/\/aclanthology.org\/Q16-1030.pdf","title":"Encoding Prior Knowledge with Eigenword Embeddings","abstract":"Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views. It has been previously used to derive word embeddings, where one view indicates a word, and the other view indicates its context. We describe a way to incorporate prior knowledge into CCA, give a theoretical justification for it, and test it by deriving word embeddings and evaluating them on a myriad of datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Paramveer Dhillon for his help with running the SWELL package. The authors would also like to thank Manaal Faruqui and Sujay Kumar Jauhar for their help and technical assistance with the retrofitting package and the word embedding evaluation suite. Thanks also to Ankur Parikh for early discussions on this project. This work was completed while the first author was an intern at the University of Edinburgh, as part of the Equate Scotland program. This research was supported by an EPSRC grant (EP\/L02411X\/1) and an EU H2020 grant (688139\/H2020-ICT-2015; SUMMA).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arppe-etal-2017-converting","url":"https:\/\/aclanthology.org\/W17-0108.pdf","title":"Converting a comprehensive lexical database into a computational model: The case of East Cree verb inflection","abstract":"In this paper we present a case study of how comprehensive, well-structured, and consistent lexical databases, one indicating the exact inflectional subtype of each word and another exhaustively listing the full paradigm for each inflectional subtype, can be quickly and reliably converted into a computational model of the finite-state transducer (FST) kind. As our example language, we will use (Northern) East Cree (Algonquian, ISO 639-3: crl), a morphologically complex Indigenous language. We will focus on modeling (Northern) East Cree verbs, as their paradigms represent the most richly inflected forms in this language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by funding from the Social Sciences and Humanities Research Council of Canada Partnership Development (890-2013-0047) and Insight (435-2014-1199) grants, a Carleton University FAAS research award, and Kule Institute for Advanced Study, University of Alberta, Research Cluster Grant.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hong-etal-2011-using","url":"https:\/\/aclanthology.org\/P11-1113.pdf","title":"Using Cross-Entity Inference to Improve Event Extraction","abstract":"Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data.
The state of the art on this task uses transductive inference (e.g., cross-event inference). In this paper, we propose a new method of event extraction that makes good use of cross-entity inference. In contrast to previous inference methods, we regard entity-type consistency as a key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get an 8.6% gain in trigger (event) identification, and more than an 11.8% gain for argument (role) classification in ACE event extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Ruifang He, and we acknowledge the support of the National Natural Science Foundation of China under Grant Nos. 61003152, 60970057, 90920004.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hindle-1993-prediction","url":"https:\/\/aclanthology.org\/H93-1048.pdf","title":"Prediction of Lexicalized Tree Fragments in Text","abstract":"There is a mismatch between the distribution of information in text, and a variety of grammatical formalisms for describing it, including ngrams, context-free grammars, and dependency grammars. Rather than adding probabilities to existing grammars, it is proposed to collect the distributions of flexibly sized partial trees. These can be used to enhance an ngram model, and in analogical parsing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cheung-2015-concept","url":"https:\/\/aclanthology.org\/W15-4003.pdf","title":"Concept Extensions as the Basis for Vector-Space Semantics: Combining Distributional and Ontological Information about Entities","abstract":"We propose to base the development of vector-space models of semantics on concept extensions, which define concepts as sets of entities. We investigate two sources of knowledge about entities that could be relevant: distributional information provided by word or phrase embeddings, and ontological information derived from a knowledge base. We develop a feedforward neural network architecture to learn entity representations that are used to predict their concept memberships, and show that the two sources of information are actually complementary. In an entity ranking experiment, the combination approach that uses both types of information outperforms models that only rely on one of the two. We also perform an analysis of the output using fuzzy logic techniques to demonstrate the potential of learning concept extensions for supporting inference involving classical semantic operations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Patricia Araujo Thaine, Aida Nematzadeh, Nissan Pow, and the anonymous reviewers for useful discussions and feedback.
This work is funded by the Natural Sciences and Engineering Research Council of Canada.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gao-etal-2021-coil","url":"https:\/\/aclanthology.org\/2021.naacl-main.241.pdf","title":"COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List","abstract":"Classical information retrieval systems such as BM25 rely on exact lexical match and carry out search efficiently with an inverted list index. Recent neural IR models shift towards soft semantic matching of all query and document terms, but they lose the computational efficiency of exact-match systems. This paper presents COIL, a contextualized exact match retrieval architecture that brings in semantic lexical matching. COIL scoring is based on overlapping query and document tokens' contextualized representations. The new architecture stores contextualized token representations in inverted lists, bringing together the efficiency of exact match and the representation power of deep language models. Our experimental results show COIL outperforms classical lexical retrievers and state-of-the-art deep LM retrievers with similar or smaller latency.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"erk-2014-evoked","url":"https:\/\/aclanthology.org\/W14-3009.pdf","title":"Who Evoked that Frame? Some Thoughts on Context Effects and Event Types","abstract":"Lexical substitution is an annotation task in which annotators provide one-word paraphrases (lexical substitutes) for individual target words in a sentence context. Lexical substitution yields a fine-grained characterization of word meaning that can be done by non-expert annotators. We discuss results of a recent lexical substitution annotation effort, where we found strong contextual modulation effects: Many substitutes were not synonyms, hyponyms or hypernyms of the targets, but were highly specific to the situation at hand. This data provides some food for thought for frame-semantic analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"deng-wiebe-2014-investigation","url":"https:\/\/aclanthology.org\/W14-2603.pdf","title":"An Investigation for Implicatures in Chinese : Implicatures in Chinese and in English are similar !","abstract":"Implicit opinions are commonly seen in opinion-oriented documents, such as political editorials. Previous work has utilized opinion inference rules to detect implicit opinions evoked by events that positively\/negatively affect entities (goodFor\/badFor) to improve sentiment analysis for English text. Since people in different languages may express implicit opinions in different ways, in this work we investigate implicit opinions expressed via goodFor\/badFor events in Chinese. The positive results provide evidence that such implicit opinions and inference rules are similar in Chinese and in English.
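A minimal sketch of the exact-match scoring idea described in the COIL abstract above: a query-document score is accumulated only over lexically overlapping tokens, using the maximum dot product of their contextualized vectors. The random vectors here are stand-ins for real language-model representations, and this is a simplified reading of the scoring, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(tokens):
    # Hypothetical contextual encoder: one vector per token occurrence.
    return {t: [rng.normal(size=8) for _ in range(tokens.count(t))]
            for t in set(tokens)}

def coil_style_score(query_tokens, doc_tokens):
    q_vecs, d_vecs = embed(query_tokens), embed(doc_tokens)
    score = 0.0
    for t in q_vecs:
        if t in d_vecs:                      # exact lexical match required
            score += max(qv @ dv for qv in q_vecs[t] for dv in d_vecs[t])
    return score

print(coil_style_score(["bank", "loan"],
                       ["the", "bank", "approved", "the", "loan"]))
```

Because only matching tokens contribute, the document-side vectors can be stored in ordinary inverted lists keyed by the token, which is where the efficiency claim comes from.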
Moreover, we have observed cases where the inferences are blocked.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by DARPA-BAA-12-47 DEFT grant #12475008 and National Science Foundation grant #IIS-0916046. We would like to thank Changsheng Liu and Fan Zhang for their annotations in the agreement study, and thank anonymous reviewers for their feedback.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"boros-2013-unified","url":"https:\/\/aclanthology.org\/R13-1012.pdf","title":"A unified lexical processing framework based on the Margin Infused Relaxed Algorithm. A case study on the Romanian Language","abstract":"General natural language processing and text-to-speech applications require certain (lexical level) processing steps in order to solve some frequent tasks such as lemmatization, syllabification, lexical stress prediction and phonetic transcription. These steps usually require knowledge of the word's lexical composition (derivative morphology, inflectional affixes, etc.). For known words all applications use lexicons, but there are always out-of-vocabulary (OOV) words that impede the performance of NLP and speech synthesis applications. In such cases, either rule-based or data-driven techniques are used to automatically process these OOV words and generate the desired results. In this paper we describe how the above-mentioned tasks can be achieved using a Perceptron with the Margin Infused Relaxed Algorithm (MIRA) and sequence labeling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported here was funded by the project METANET4U by the European Commission under the Grant Agreement No 270893","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"olariu-2014-efficient","url":"https:\/\/aclanthology.org\/E14-4046.pdf","title":"Efficient Online Summarization of Microblogging Streams","abstract":"The large amounts of data generated on microblogging services are making summarization challenging. Previous research has mostly focused on working in batches or with filtered streams. Input data has to be saved and analyzed several times, in order to detect underlying events and then summarize them. We improve the efficiency of this process by designing an online abstractive algorithm. Processing is done in a single pass, removing the need to save any input data and improving the running time. An online approach is also able to generate the summaries in real time, using the latest information. The algorithm we propose uses a word graph, along with optimization techniques such as decaying windows and pruning. It outperforms the baseline in terms of summary quality, as well as time and memory efficiency.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mori-nagao-1995-parsing","url":"https:\/\/aclanthology.org\/1995.iwpt-1.22.pdf","title":"Parsing Without Grammar","abstract":"
We describe and evaluate experimentally a method to parse a tagged corpus without grammar, modeling a natural language as a context-free language. This method is based on the following three hypotheses. 1) Part-of-speech sequences on the right-hand side of a rewriting rule are less constrained as to what parts of speech precede and follow them than non-constituent sequences. 2) Part-of-speech sequences directly derived from the same non-terminal symbol have similar environments. 3) The most suitable set of rewriting rules makes the greatest reduction of the corpus size. Based on these hypotheses, the system finds a set of constituent-like part-of-speech sequences and replaces them with a new symbol. The repetition of these processes brings us a set of rewriting rules, a grammar, and the bracketed corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-etal-2012-system","url":"https:\/\/aclanthology.org\/W12-5704.pdf","title":"System Combination with Extra Alignment Information","abstract":"This paper provides the system description of the IHMM team of Dublin City University for our participation in the system combination task in the Second Workshop on Applying Machine Learning Techniques to Optimise the Division of Labour in Hybrid MT (ML4HMT-12). Our work is based on a confusion network-based approach to system combination. We propose a new method to build a confusion network for this: (1) incorporate extra alignment information extracted from given meta data, treating them as sure alignments, into the results from IHMM, and (2) decode together with this information. We also heuristically set one of the system outputs as the default backbone. Our results show that this backbone, which is the RBMT system output, achieves a 0.11% improvement in BLEU over the backbone chosen by TER, while the extra information we added in the decoding part does not improve the results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Science Foundation Ireland (Grant No. 07\/CE\/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dinarelli-etal-2009-ranking","url":"https:\/\/aclanthology.org\/D09-1112.pdf","title":"Re-Ranking Models Based-on Small Training Data for Spoken Language Understanding","abstract":"The design of practical language applications by means of statistical approaches requires annotated data, which is one of the most critical constraints. This is particularly true for Spoken Dialog Systems, since considerable domain-specific conceptual annotation is needed to obtain accurate Language Understanding models. Since data annotation is usually costly, methods to reduce the amount of data are needed. In this paper, we show that better feature representations serve the above purpose and that structure kernels provide the needed improved representation. Given the relatively high computational cost of kernel methods, we apply them to just re-rank the list of hypotheses provided by a fast generative model.
Experiments with Support Vector Machines and different kernels on two different dialog corpora show that our re-ranking models can achieve better results than state-of-the-art approaches when only small amounts of data are available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2021-de","url":"https:\/\/aclanthology.org\/2021.acl-long.430.pdf","title":"De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation","abstract":"Logical table-to-text generation aims to automatically generate fluent and logically faithful text from tables. The task remains challenging, as deep learning models often generate linguistically fluent but logically inconsistent text. The underlying reason may be that deep learning models often capture surface-level spurious correlations rather than the causal relationships between the table x and the sentence y. Specifically, in the training stage, a model can get a low empirical loss without understanding x and use spurious statistical cues instead. In this paper, we propose a de-confounded variational encoder-decoder (DCVED) based on causal intervention, learning the objective p(y|do(x)). Firstly, we propose to use variational inference to estimate the confounders in the latent space and cooperate with the causal intervention based on Pearl's do-calculus to alleviate the spurious correlations. Secondly, to make the latent confounder meaningful, we propose a back-prediction process to predict entities that are not used but are linguistically similar to the exactly selected ones. Finally, since our variational model can generate multiple candidates, we train a table-text selector to find out the best candidate sentence for the given table. An extensive set of experiments shows that our model outperforms the baselines and achieves new state-of-the-art performance on two logical table-to-text datasets in terms of logical fidelity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China under Grant 2018YFC0830400, and Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0102.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"o-seaghdha-copestake-2009-using","url":"https:\/\/aclanthology.org\/E09-1071.pdf","title":"Using Lexical and Relational Similarity to Classify Semantic Relations","abstract":"Many methods are available for computing semantic similarity between individual words, but certain NLP tasks require the comparison of word pairs. This paper presents a kernel-based framework for application to relational reasoning tasks of this kind. The model presented here combines information about two distinct types of word pair similarity: lexical similarity and relational similarity.
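A minimal sketch of combining two similarity sources for word-pair classification in the spirit of the kernel framework above: a lexical kernel (over word features) and a relational kernel (over context features) are summed into one valid kernel. All feature matrices are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 6
lexical = rng.normal(size=(n_pairs, 16))     # features of the two words themselves
relational = rng.normal(size=(n_pairs, 32))  # features of contexts joining them

def linear_kernel(X):
    return X @ X.T

# A sum of kernels is itself a kernel, so the combination stays valid.
K_combined = linear_kernel(lexical) + linear_kernel(relational)
print(K_combined.shape)
```

With scikit-learn, a precomputed matrix like `K_combined` can be passed to `SVC(kernel="precomputed")`, which is one straightforward way to train an SVM on the combined similarity.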
We present an efficient and flexible technique for implementing relational similarity and show the effectiveness of combining lexical and relational models by demonstrating state-of-the-art results on a compound noun interpretation task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Peter Turney, Andreas Vlachos and the anonymous EACL reviewers for their helpful comments. This work was supported in part by EPSRC grant EP\/C010035\/1.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"spiegl-etal-2010-fau","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/275_Paper.pdf","title":"FAU IISAH Corpus -- A German Speech Database Consisting of Human-Machine and Human-Human Interaction Acquired by Close-Talking and Far-Distance Microphones","abstract":"In this paper the FAU IISAH corpus and its recording conditions are described: a new speech database consisting of human-machine and human-human interaction recordings. Besides close-talking microphones for the best possible audio quality of the recorded speech, far-distance microphones were used to acquire the interaction and communication. The recordings took place during a Wizard-of-Oz experiment in the intelligent, senior-adapted house (ISA-House). That is a living room with a speech-controlled home assistance system for elderly people, based on a dialogue system, which is able to process spontaneous speech. During the studies in the ISA-House more than eight hours of interaction data were recorded, including 3 hours and 27 minutes of spontaneous speech. The data were annotated under the aspect of human-human (off-talk) and human-machine (on-talk) interaction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"morik-1985-user","url":"https:\/\/aclanthology.org\/E85-1040.pdf","title":"User Modelling, Dialog Structure, and Dialog Strategy in HAM-ANS","abstract":"AI dialog systems are now developing from question-answering systems toward advising systems. This includes: structuring dialog; understanding and generating a wider range of speech acts than simply information request and answer; and user modelling. (Not all of these aspects are discussed here, but see (Jameson, Wahlster 1982).) The second part of this paper presents user modelling with respect to a dialog strategy which selects and verbalizes the appropriate speech act of recommendation. User modelling in HAM-ANS is closely connected to dialog structure and dialog strategy. In advising the user, the system generates and verbalizes speech acts. The choice of the speech act is guided by the user profile and the dialog strategy of the system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"labutov-lipson-2013-embedding","url":"https:\/\/aclanthology.org\/P13-2087.pdf","title":"Re-embedding words","abstract":"We present a fast method for re-purposing existing semantic word vectors to improve performance in a supervised task.
Recently, with an increase in computing resources, it has become possible to learn rich word embeddings from massive amounts of unlabeled data. However, some methods take days or weeks to learn good embeddings, and some are notoriously difficult to train. We propose a method that takes as input an existing embedding and some labeled data, and produces an embedding in the same space, but with better predictive performance in the supervised task. We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alqifari-2019-question","url":"https:\/\/aclanthology.org\/R19-2011.pdf","title":"Question Answering Systems Approaches and Challenges","abstract":"Question answering (QA) systems permit the user to ask a question using natural language, and the system provides a concise and correct answer. QA systems can be implemented for different types of datasets, structured or unstructured. In this paper, some of the recent studies will be reviewed and the limitations will be discussed. Consequently, the current issues are analyzed with the proposed solutions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank my research supervisors, Dr. Suresh Manandhar, and Prof. Hend Al-Khalifa. Without their support and advice, this paper would have never been accomplished.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grundkiewicz-etal-2021-user","url":"https:\/\/aclanthology.org\/2021.humeval-1.11.pdf","title":"On User Interfaces for Large-Scale Document-Level Human Evaluation of Machine Translation Outputs","abstract":"Recent studies emphasize the need for document context in human evaluation of machine translations, but little research has been done on the impact of user interfaces on annotator productivity and the reliability of assessments. In this work, we compare human assessment data from the last two WMT evaluation campaigns collected via two different methods for document-level evaluation. Our analysis shows that a document-centric approach to evaluation, where the annotator is presented with the entire document context on a screen, leads to higher-quality segment and document level assessments. It improves the correlation between segment and document scores and increases inter-annotator agreement for document scores, but is considerably more time-consuming for annotators.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rauch-1994-automatisk","url":"https:\/\/aclanthology.org\/W93-0418.pdf","title":"Automatisk igenk\\\"anning av nominalfraser i l\\\"opande text (Automatic recognition of nominal phrases in running text) [In Swedish]","abstract":"The paper describes a collection of algorithms for automatic noun phrase marking in running text.
The algorithms build on one another and make use of the analysis already performed; they are therefore simple and not time-consuming. The algorithms can be roughly divided into two groups: the first group marks core noun phrases, or minimal noun phrases. For Swedish, these are essentially the modifiers that stand to the left of the head (the noun). The second group of algorithms marks maximal noun phrases, where prepositional phrases, infinitive constructions, etc. are added; this latter group is not covered here. The input data was taken from newspapers, books and other publications in Swedish. The texts were tagged with parts of speech and morphologically marked grammatical categories, but otherwise the algorithms use no lexical information. (Small word lists could nevertheless improve the results considerably; for example, a list of nouns that quantify something and occur in appositions (in the examples: 'par' and 'antal'): 'ett par minuter' (a couple of minutes), 'ett antal m\u00e4nniskor' (a number of people). Without semantic information it cannot be decided whether one or two NPs are involved.) The input data contains roughly 12,000 core noun phrases. A further development of the program could be to compare sentences with a similar structure (the same finite verb) in order to create a valency lexicon (mainly for verbs, although nouns and adjectives could also be included).\nAs the introduction shows, the task is very complex, especially considering that a dumb computer is to carry out the work. It is therefore necessary to split the problem into small subproblems that can be solved more easily. At the same time, the different parts of the program can take already completed work into account, which further simplifies the analysis. Another advantage of this division is that one can follow the principle of restricting the grammatical information in the different parts of the program, in order to see which grammatical categories are necessary or unnecessary for the analysis. Furthermore, the program should be tolerant of 'ungrammatical' noun phrases, such as:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"olteanu-moldovan-2005-pp","url":"https:\/\/aclanthology.org\/H05-1035.pdf","title":"PP-attachment Disambiguation using Large Context","abstract":"Prepositional Phrase-attachment is a common source of ambiguity in natural language. Previous approaches use limited information to solve the ambiguity (four lexical heads), although humans disambiguate much better when the full sentence is available. We propose to solve the PP-attachment ambiguity with a Support Vector Machines learning model that uses complex syntactic and semantic features as well as unsupervised information obtained from the World Wide Web.
The system was tested on several datasets, obtaining an accuracy of 93.62% on a Penn Treebank-II dataset; 91.79% on a FrameNet dataset when no manually-annotated semantic information is provided, and 92.85% when semantic information is provided.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bao-etal-2011-graph","url":"https:\/\/aclanthology.org\/P11-1091.pdf","title":"A Graph Approach to Spelling Correction in Domain-Centric Search","abstract":"Spelling correction for keyword-search queries is challenging in restricted domains such as personal email (or desktop) search, due to the scarcity of query logs, and due to the specialized nature of the domain. For that task, this paper presents an algorithm that is based on statistics from the corpus data (rather than the query log). This algorithm, which employs a simple graph-based approach, can incorporate different types of data sources with different levels of reliability (e.g., email subject vs. email body), and can handle complex spelling errors like splitting and merging of words. An experimental study shows the superiority of the algorithm over existing alternatives in the email domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"da-san-martino-etal-2020-prta","url":"https:\/\/aclanthology.org\/2020.acl-demos.32.pdf","title":"Prta: A System to Support the Analysis of Propaganda Techniques in the News","abstract":"Recent events, such as the 2016 US Presidential Campaign, Brexit and the COVID-19 \"infodemic\", have brought into the spotlight the dangers of online disinformation. There has been a lot of research focusing on fact-checking and disinformation detection. However, little attention has been paid to the specific rhetorical and psychological techniques used to convey propaganda messages. Revealing the use of such techniques can help promote media literacy and critical thinking, and eventually contribute to limiting the impact of \"fake news\" and disinformation campaigns. Prta (Propaganda Persuasion Techniques Analyzer) allows users to explore the articles crawled on a regular basis by highlighting the spans in which propaganda techniques occur and to compare them on the basis of their use of propaganda techniques. The system further reports statistics about the use of such techniques, overall and over time, or according to filtering criteria specified by the user based on time interval, keywords, and\/or political orientation of the media. Moreover, it allows users to analyze any text or URL through a dedicated interface or via an API. The system is available online: https:\/\/www.tanbih.org\/prta.","label_nlp4sg":1,"task":["Analysis of Propaganda"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The Prta system is developed within the Propaganda Analysis Project, part of the Tanbih project. Tanbih aims to limit the effect of \"fake news\", propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking.
Different organizations collaborate in Tanbih, including the Qatar Computing Research Institute (HBKU) and MIT-CSAIL.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"akasaki-kaji-2019-conversation","url":"https:\/\/aclanthology.org\/N19-1400.pdf","title":"Conversation Initiation by Diverse News Contents Introduction","abstract":"In our everyday chitchat, there is a conversation initiator, who proactively casts an initial utterance to start chatting. However, most existing conversation systems cannot play this role. Previous studies on conversation systems assume that the user always initiates conversation, and have placed emphasis on how to respond to the given user's utterance. As a result, existing conversation systems become passive; namely, they continue waiting until being spoken to by the users. In this paper, we consider the system as a conversation initiator and propose a novel task of generating the initial utterance in open-domain non-task-oriented conversation. Here, in order not to make users bored, it is necessary to generate diverse utterances to initiate conversation without relying on boilerplate utterances like greetings. To this end, we propose to generate the initial utterance by summarizing and chatting about news articles, which provide fresh and varied content every day. To address the lack of training data for this task, we constructed a novel large-scale dataset through crowd-sourcing. We also analyzed the dataset in detail to examine how humans initiate conversations (the dataset will be released to facilitate future research activities). We present several approaches to conversation initiation, including information retrieval-based and generation-based models. Experimental results showed that the proposed models trained on our dataset performed reasonably well and outperformed baselines that utilize automatically collected training data in both automatic and manual evaluation. * This work was done during a research internship at Yahoo Japan Corporation. 1 \"Conversation\" in this paper refers to open-domain non-task-oriented conversations and chitchat.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Manabu Sassano for fruitful discussions and comments. We also thank the anonymous reviewers.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"banea-etal-2010-multilingual","url":"https:\/\/aclanthology.org\/C10-1004.pdf","title":"Multilingual Subjectivity: Are More Languages Better?","abstract":"While subjectivity-related research in other languages has increased, most of the work focuses on single languages. This paper explores the integration of features originating from multiple languages into a machine learning approach to subjectivity analysis, and aims to show that this enriched feature set provides for more effective modeling for the source as well as the target languages.
We show not only that we are able to achieve over 75% macro accuracy in all of the six languages we experiment with, but also that by using features drawn from multiple languages we can construct high-precision meta-classifiers with a precision of over 83%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based in part upon work supported by National Science Foundation awards #0917170 and #0916046. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dale-etal-2002-evangelising","url":"https:\/\/aclanthology.org\/W02-0104.pdf","title":"Evangelising Language Technology: A Practically-Focussed Undergraduate Program","abstract":"This paper describes an undergraduate program in Language Technology that we have developed at Macquarie University. We question the industrial relevance of much that is taught in NLP courses, and emphasize the need for a practical orientation as a means to growing the size of the field. We argue that a more evangelical approach, both with regard to students and industry, is required. The paper provides an overview of the material we cover, and makes some observations for the future on the basis of our experiences so far.","label_nlp4sg":1,"task":["Evangelising Language Technology"],"method":["Undergraduate Program"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wahlgren-1961-linguistic","url":"https:\/\/aclanthology.org\/1961.earlymt-1.15.pdf","title":"Linguistic analysis of Russian chemical terminology","abstract":"This paper is a discussion of a specialized phase of linguistic research being carried on in the Machine Translation Project at the University of California in Berkeley. The material presented here is intended to illustrate in some detail the application of linguistic analysis to a particular problem. The fundamental approach upon which this work is based has been described in a paper by Sydney M. Lamb. 1 The first part of the following discussion deals with theoretical considerations underlying linguistic research into scientific terminology, with special reference to chemical terminology. The second part of the paper provides material which is illustrative of a linguistic description of chemical nomenclature. Examples are drawn from a detailed grammatical analysis of chemical terminology which is being conducted. Ultimately the results of this analysis will be incorporated into the total grammatical description of Russian which is to be employed in the machine translation process.\nRelatively little attention is devoted here to the machine translation process, inasmuch as the application of the results of linguistic analysis constitutes a separate operation in the California Project.
Some general comments on this aspect of the problem, however, will be made where necessary.","label_nlp4sg":1,"task":["linguistic research"],"method":["Linguistic analysis"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":1961,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"welleck-etal-2019-dialogue","url":"https:\/\/aclanthology.org\/P19-1363.pdf","title":"Dialogue Natural Language Inference","abstract":"Consistency is a long-standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hernaez-etal-2002-bizkaifon","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/46.pdf","title":"BIZKAIFON: A sound archive of dialectal varieties of spoken Basque","abstract":"This paper presents the sound archive of dialectal varieties of spoken Basque called BIZKAIFON. This database contains sound archives with their associated information and it is accessible via a web interface. A prototype of BIZKAIFON is available at http:\/\/bizkaifon.ehu.es\/.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhu-etal-2022-continual","url":"https:\/\/aclanthology.org\/2022.acl-long.80.pdf","title":"Continual Prompt Tuning for Dialog State Tracking","abstract":"A desirable dialog system should be able to continually learn new skills without forgetting old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually training a model often leads to the well-known catastrophic forgetting issue. In this paper, we present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks. To avoid forgetting, we only learn and store a few prompt tokens' embeddings for each task while freezing the backbone pre-trained model. To achieve bi-directional knowledge transfer among tasks, we propose several techniques (continual prompt initialization, query fusion, and memory replay) to transfer knowledge from preceding tasks and a memory-guided technique to transfer knowledge from subsequent tasks. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method on continual learning for dialog state tracking, compared with state-of-the-art baselines.
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trippel-etal-2004-consistent","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/655.pdf","title":"Consistent Storage of Metadata in Inference Lexica: the MetaLex Approach","abstract":"With MetaLex we introduce a framework for metadata management where information can be inferred from different areas of metadata coding, such as metadata for catalogue descriptions, linguistic levels, or tiers. This is done for consistency and efficiency in metadata recording and applies the same inference techniques that are used for lexical inference. For this purpose we motivate the need for metadata descriptions on all document levels, describe the different structures of metadata, use existing metadata recommendations on different levels of annotations, and show a use case of metadata inference.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work presented in this paper was funded mainly by the German Research Council (DFG) grant to the project Theory and Design of Multimodal Lexica, Research Group Text Technological Information Modelling. Too many colleagues and students have helped with critical feedback following guest lectures and conference presentations to be named here; we are grateful to them all.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobayashi-etal-2022-constrained","url":"https:\/\/aclanthology.org\/2022.acl-long.56.pdf","title":"Constrained Multi-Task Learning for Bridging Resolution","abstract":"We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pretrain the entity coreference model in the multitask framework on the large amount of publicly available coreference data; and (3) integrate prior knowledge encoded in rule-based resolvers. Our approach achieves state-of-the-art results on three standard evaluation corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the four anonymous reviewers for their insightful comments on an earlier draft of the paper. This work was supported in part by NSF Grants IIS-1528037 and CCF-1848608.
Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the NSF.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"morawietz-cornell-1997-representing","url":"https:\/\/aclanthology.org\/P97-1060.pdf","title":"Representing Constraints with Automata","abstract":"In this paper we describe an approach to constraint-based syntactic theories in terms of finite tree automata. The solutions to constraints expressed in weak monadic second order (MSO) logic are represented by tree automata recognizing the assignments which make the formulas true. We show that this allows an efficient representation of knowledge about the content of constraints which can be used as a practical tool for grammatical theory verification. We achieve this by using the intertranslatability of formulae of MSO logic and tree automata and the embedding of MSO logic into a constraint logic programming scheme. The usefulness of the approach is discussed with examples from the realm of Principles-and-Parameters based parsing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the project A8 of the SFB 340 of the Deutsche Forschungsgemeinschaft. We wish especially to thank Uwe M\u00f6nnich and Jim Rogers for discussions and advice. Needless to say, any errors and infelicities which remain are ours alone.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lu-etal-2016-evaluating","url":"https:\/\/aclanthology.org\/W16-5208.pdf","title":"Evaluating Ensemble Based Pre-annotation on Named Entity Corpus Construction in English and Chinese","abstract":"Annotated corpora are crucial language resources, and pre-annotation is a common way to reduce the cost of corpus construction. The ensemble-based pre-annotation approach combines multiple existing named entity taggers and categorizes annotations into normal annotations with high confidence and candidate annotations with low confidence, to reduce the human annotation time. In this paper, we manually annotate three English datasets under various pre-annotation conditions, report the effects of ensemble-based pre-annotation, and analyze the experimental results. In order to verify the effectiveness of ensemble-based pre-annotation in other languages, such as Chinese, three Chinese datasets are also tested. The experimental results show that the ensemble-based pre-annotation approach significantly reduces the number of annotations which human annotators have to add, and outperforms the baseline approaches in reduction of human annotation time without loss in annotation performance (in terms of F1-measure), on both English and Chinese datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially funded by the National Science Foundation of China under Grant 61170165, 61602260, 61502095.
We would like to thank all the anonymous reviewers for their helpful comments.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tang-shen-2020-categorizing","url":"https:\/\/aclanthology.org\/2020.ccl-1.97.pdf","title":"Categorizing Offensive Language in Social Networks: A Chinese Corpus, Systems and an Explainable Tool","abstract":"Recently, more and more data have been generated in the online world, filled with offensive language such as threats, swear words or straightforward insults. This is disgraceful for a progressive society, and the question arises of how language resources and technologies can cope with this challenge. However, previous work only analyzes the problem as a whole but fails to detect particular types of offensive content in a more fine-grained way, mainly because of the lack of annotated data. In this work, we present a densely annotated dataset, COLA (Categorizing Offensive LAnguage), which consists of fine-grained insulting language, antisocial language and illegal language. We study different strategies for automatically identifying offensive language on COLA data. Further, we design a capsule system with hierarchical attention to aggregate and fully utilize information, which obtains a state-of-the-art result. Results from experiments prove that our hierarchical attention capsule network (HACN) performs significantly better than existing methods in offensive classification, with a precision of 94.37% and a recall of 95.28%. We also explain what our model has learned with an explanation tool called Integrated Gradients. Meanwhile, our system can process each sentence in 10 ms, suggesting the potential for efficient deployment in real situations.","label_nlp4sg":1,"task":["Categorizing Offensive Language"],"method":["Corpus","hierarchical attention capsule network","densely annotated data - set"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research is supported by the National Language Commission Key Research Project (ZDI135-61), the National Natural Science Foundation of China (No.61532008 and 61872157), and the National Science Foundation of China (61572223).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"el-kishky-etal-2019-constrained","url":"https:\/\/aclanthology.org\/W19-4610.pdf","title":"Constrained Sequence-to-sequence Semitic Root Extraction for Enriching Word Embeddings","abstract":"In this paper, we tackle the problem of \"root extraction\" from words in the Semitic language family. A challenge in applying natural language processing techniques to these languages is the data sparsity problem that arises from their rich internal morphology, where the substructure is inherently non-concatenative and morphemes are interdigitated in word formation. While previous automated methods have relied on human-curated rules or multiclass classification, they have not fully leveraged the various combinations of regular, sequential concatenative morphology within the words and the internal interleaving within templatic stems of roots and patterns. To address this, we propose a constrained sequence-to-sequence root extraction method. Experimental results show our constrained model outperforms a variety of methods at root extraction.
Furthermore, by enriching word embeddings with the resulting decompositions, we show improved results on word analogy, word similarity, and language modeling tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"garrette-alpert-abrams-2016-unsupervised","url":"https:\/\/aclanthology.org\/N16-1055.pdf","title":"An Unsupervised Model of Orthographic Variation for Historical Document Transcription","abstract":"Historical documents frequently exhibit extensive orthographic variation, including archaic spellings and obsolete shorthand. OCR tools typically seek to produce so-called diplomatic transcriptions that preserve these variants, but many end tasks require transcriptions with normalized orthography. In this paper, we present a novel joint transcription model that learns, unsupervised, a probabilistic mapping between modern orthography and that used in the document. Our system thus produces dual diplomatic and normalized transcriptions simultaneously, and achieves a 35% relative error reduction over a state-of-the-art OCR model on diplomatic transcription, and a 46% reduction on normalized transcription.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Stephanie Wood, Kelly McDonough, Albert Palacios, Adam Coon, Sergio Romero, and Kent Norsworthy for their input, advice, and assistance on this project. We would also like to thank Taylor Berg-Kirkpatrick, Dan Klein, and Luke Zettlemoyer for their valuable feedback on earlier drafts of this paper. This work is supported in part by a Digital Humanities Implementation Grant from the National Endowment for the Humanities for the project Reading the First Books: Multilingual, Early-Modern OCR for Primeros Libros.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oravecz-etal-2021-etranslations","url":"https:\/\/aclanthology.org\/2021.wmt-1.15.pdf","title":"eTranslation's Submissions to the WMT 2021 News Translation Task","abstract":"The paper describes the three NMT models submitted by the eTranslation team to the WMT 2021 news translation shared task. We developed systems in language pairs that are actively used in the European Commission's eTranslation service. In the WMT news task, recent years have seen a steady increase in the need for computational resources to train deep and complex architectures to produce competitive systems. We took a different approach and explored alternative strategies focusing on data selection and filtering to improve the performance of baseline systems. In the domain-constrained task for the French-German language pair, our approach resulted in the best system by a significant margin in BLEU.
For the other two systems (English-German and English-Czech) we tried to build competitive models using standard best practices.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"harastani-etal-2013-ranking","url":"https:\/\/aclanthology.org\/I13-1046.pdf","title":"Ranking Translation Candidates Acquired from Comparable Corpora","abstract":"Domain-specific bilingual lexicons extracted from domain-specific comparable corpora provide for one term a list of ranked translation candidates. This study proposes to re-rank these translation candidates. We suggest that a term and its translation appear in comparable sentences that can be extracted from domain-specific comparable corpora. For a source term and a list of translation candidates, we propose a method to identify and align the best source and target sentences that contain the term and its translation candidates. We report results with two language pairs (French-English and French-German) using domain-specific comparable corpora. Our method significantly improves the top 1, top 5 and top 10 precisions of a domain-specific bilingual lexicon, and thus provides better user-oriented results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their valuable remarks. This work was supported by the French National Research Agency under grant ANR-12-CORD-0020.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pergola-etal-2021-boosting","url":"https:\/\/aclanthology.org\/2021.eacl-main.169.pdf","title":"Boosting Low-Resource Biomedical QA via Entity-Aware Masking Strategies","abstract":"Biomedical question-answering (QA) has gained increased attention for its capability to provide users with high-quality information from a vast scientific literature. Although an increasing number of biomedical QA datasets has been recently made available, those resources are still rather limited and expensive to produce. Transfer learning via pre-trained language models (LMs) has been shown as a promising approach to leverage existing general-purpose knowledge. However, fine-tuning these large models can be costly and time-consuming, often yielding limited benefits when adapting to specific themes of specialised domains, such as the COVID-19 literature. To further bootstrap their domain adaptation, we propose a simple yet unexplored approach, which we call biomedical entity-aware masking (BEM). We encourage masked language models to learn entity-centric knowledge based on the pivotal entities characterizing the domain at hand, and employ those entities to drive the LM fine-tuning. The resulting strategy is a downstream process applicable to a wide variety of masked LMs, not requiring additional memory or components in the neural architectures. Experimental results show performance on par with state-of-the-art models on several biomedical QA datasets.","label_nlp4sg":1,"task":["Biomedical question - answering"],"method":["entity - aware masking","language models"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work is funded by the EPSRC (grant no.
EP\/T017112\/1, EP\/V048597\/1). YH is supported by a Turing AI Fellowship funded by UK Research and Innovation (UKRI) (grant no. EP\/V020579\/1).","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ding-etal-2020-three","url":"https:\/\/aclanthology.org\/2020.acl-main.44.pdf","title":"A Three-Parameter Rank-Frequency Relation in Natural Languages","abstract":"We show that the rank-frequency relation in textual data follows f \u221d r^{\u2212\u03b1}(r + \u03b3)^{\u2212\u03b2}, where f is the token frequency and r is the rank by frequency, with (\u03b1, \u03b2, \u03b3) as parameters. The formulation is derived based on the empirical observation that d^2(x+y)\/dx^2 is a typical impulse function, where (x, y) = (log r, log f). The formulation is the power law when \u03b2 = 0 and the Zipf-Mandelbrot law when \u03b1 = 0. We illustrate that \u03b1 is related to the analytic features of syntax and \u03b2 + \u03b3 to those of morphology in natural languages, from an investigation of multilingual corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jokinen-wilcock-2016-double","url":"https:\/\/aclanthology.org\/W16-4408.pdf","title":"Double Topic Shifts in Open Domain Conversations: Natural Language Interface for a Wikipedia-based Robot Application","abstract":"The paper describes topic shifting in dialogues with a robot that provides information from Wikipedia. The work focuses on a double topical construction of dialogue coherence which refers to discourse coherence on two levels: the evolution of dialogue topics via the interaction between the user and the robot system, and the creation of discourse topics via the content of the Wikipedia article itself. The user selects topics that are of interest to her, and the system builds a list of potential topics, anticipated to be the next topic, by the links in the article and by the keywords extracted from the article. The described system deals with Wikipedia articles, but could easily be adapted to other digital information providing systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mikulas Muron for his work on keyword extraction, and acknowledge financial support from the Estonian Science Foundation project IUT 20-56 and the Academy of Finland grant n\u00b0 289985.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beale-2011-using","url":"https:\/\/aclanthology.org\/I11-2002.pdf","title":"Using Linguist's Assistant for Language Description and Translation","abstract":"The Linguist's Assistant (LA) is a practical computational paradigm for describing languages. LA seeks to specify in semantic representations a large subset of possible written communication. These semantic representations then become the starting point and organizing principle from which a linguist describes the linguistic surface forms of a language using LA's visual lexicon and grammatical rule development interface.
The resulting computational description can then be used in our document authoring and translation applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author gratefully acknowledges the partnership of Tod Allman from the University of Texas, Arlington. Dr. Allman is co-developer of LA.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abeille-etal-1989-lexicalized","url":"https:\/\/aclanthology.org\/H89-1036.pdf","title":"Lexicalized TAGs, Parsing and Lexicons","abstract":"In our approach, each elementary structure is systematically associated with a lexical head. These structures specify extended domains of locality (as compared to a context-free grammar) over which constraints can be stated. These constraints either hold within the elementary structure itself or specify what other structures can be composed with a given elementary structure. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the head. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are composed. A grammar of this form will be said to be 'lexicalized'. A 'lexicalized' grammar naturally follows from the extended domain of locality of TAGs. A general parsing strategy for 'lexicalized' grammars is discussed. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. An Earley-type parser for TAGs has been developed. It can be adapted to take advantage of the two-step parsing strategy. The system parses unification formalisms that have a CFG skeleton and that have a TAG skeleton. Along with the development of an Earley-type parser for TAGs, lexicons for English are under development. A lexicon for French is also being developed. Subsets of these lexicons are being incrementally interfaced to the parser. We finally show how idioms are represented in lexicalized TAGs. We assign them regular syntactic structures while representing them semantically as one entry. We then show how they can be parsed by a parsing strategy as mentioned above.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2018-exploiting","url":"https:\/\/aclanthology.org\/C18-1159.pdf","title":"Exploiting Syntactic Structures for Humor Recognition","abstract":"Humor recognition is an interesting and challenging task in natural language processing. This paper proposes to exploit syntactic structure features to enhance humor recognition. Our method achieves significant improvements compared with humor-theory-driven baselines. We found that some syntactic structure features consistently correlate with humor, which indicate interesting linguistic phenomena.
Both the experimental results and the analysis demonstrate that humor can be viewed as a kind of style, and that content-independent syntactic structures can help identify humor and have good interpretability.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research work is funded by the National Natural Science Foundation of China (No.61402304), Beijing Municipal Education Commission (KM201610028015, Connotation Development) and Beijing Advanced Innovation Center for Imaging Technology.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guillou-2012-improving","url":"https:\/\/aclanthology.org\/E12-3001.pdf","title":"Improving Pronoun Translation for Statistical Machine Translation","abstract":"Machine Translation is a well-established field, yet the majority of current systems translate sentences in isolation, losing valuable contextual information from previously translated sentences in the discourse. One important type of contextual information concerns who or what a coreferring pronoun corefers to (i.e., its antecedent). Languages differ significantly in how they achieve coreference, and awareness of antecedents is important in choosing the correct pronoun. Disregarding a pronoun's antecedent in translation can lead to inappropriate coreferring forms in the target text, seriously degrading a reader's ability to understand it. This work assesses the extent to which source-language annotation of coreferring pronouns can improve English-Czech Statistical Machine Translation (SMT). As with previous attempts that use this method, the results show little improvement. This paper attempts to explain why and to provide insight into the factors affecting performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Bonnie Webber (University of Edinburgh) who supervised this project and Mark\u00e9ta Lopatkov\u00e1 (Charles University) who provided the much needed Czech language assistance. I am very grateful to Ond\u0159ej Bojar (Charles University) for his numerous helpful suggestions and to the Institute of Formal and Applied Linguistics (Charles University) for providing the PCEDT 2.0 corpus. I would also like to thank Wolodja Wentland and the three anonymous reviewers for their feedback.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-vriend-etal-2006-unified","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/192_pdf.pdf","title":"A Unified Structure for Dutch Dialect Dictionary Data","abstract":"The traditional dialect vocabulary of the Netherlands and Flanders is recorded and researched in several Dutch and Belgian research institutes and universities. Most of these distributed dictionary creation and research projects collaborate in the \"Permanent Overlegorgaan Regionale Woordenboeken\" (ReWo). In the project \"digital databases and digital tools for WBD and WLD\" (D-square) the dialect data published by two of these dictionary projects (Woordenboek van de Brabantse Dialecten and Woordenboek van de Limburgse Dialecten) is being digitised.
One of the additional goals of the D-square project is the development of an infrastructure for electronic access to all dialect dictionaries collaborating in the ReWo. In this paper we will firstly reconsider the nature of the core data types (form, sense and location) present in the different dialect dictionaries and the ways these data types are further classified. Next we will focus on the problems encountered when trying to unify this dictionary data and its classifications, and suggest solutions. Finally we will look at several implementation issues regarding a specific encoding for the dictionaries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The project D-square is partly funded by Netherlands Organisation for Scientific Research (NWO).","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mori-nagao-1996-word","url":"https:\/\/aclanthology.org\/C96-2202.pdf","title":"Word Extraction from Corpora and Its Part-of-Speech Estimation Using Distributional Analysis","abstract":"Unknown words are inevitable at any step of analysis in natural language processing. We propose a method to extract words from a corpus and estimate the probability that each word belongs to given parts of speech (POSs), using a distributional analysis. Our experiments have shown that this method is effective for inferring the POS of unknown words.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"smith-eisner-2006-quasi","url":"https:\/\/aclanthology.org\/W06-3104.pdf","title":"Quasi-Synchronous Grammars: Alignment by Soft Projection of Syntactic Dependencies","abstract":"Many syntactic models in machine translation are channels that transform one tree into another, or synchronous grammars that generate trees in parallel. We present a new model of the translation process: quasi-synchronous grammar (QG). Given a source-language parse tree T1, a QG defines a monolingual grammar that generates translations of T1. The trees T2 allowed by this monolingual grammar are inspired by pieces of substructure in T1 and aligned to T1 at those points. We describe experiments learning quasi-synchronous context-free grammars from bitext. As with other monolingual language models, we evaluate the cross-entropy of QGs on unseen text and show that a better fit to bilingual data is achieved by allowing greater syntactic divergence. When evaluated on a word alignment task, QG matches standard baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by a National Science Foundation Graduate Research Fellowship for the first author and by NSF Grant No.
0313193.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"darwish-etal-2017-arabic-pos","url":"https:\/\/aclanthology.org\/W17-1316.pdf","title":"Arabic POS Tagging: Don't Abandon Feature Engineering Just Yet","abstract":"This paper compares using Support Vector Machine based ranking (SVM Rank) and Bidirectional Long Short-Term Memory (bi-LSTM) neural-network-based sequence labeling in building a state-of-the-art Arabic part-of-speech tagging system. Using SVM Rank leads to state-of-the-art results, but with a fair amount of feature engineering. Using bi-LSTM, particularly when combined with word embeddings, may lead to competitive POS-tagging results by automatically deducing latent linguistic features. However, we show that augmenting bi-LSTM sequence labeling with some of the features that we used for the SVM Rank-based tagger yields further improvements. We also show that gains realized using embeddings may not be additive with the gains achieved due to features. We are open-sourcing both the SVM Rank and the bi-LSTM-based systems for the research community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kulmizev-etal-2020-neural","url":"https:\/\/aclanthology.org\/2020.acl-main.375.pdf","title":"Do Neural Language Models Show Preferences for Syntactic Formalisms?","abstract":"Recent work on the interpretability of deep neural language models has concluded that many properties of natural language syntax are encoded in their representational spaces. However, such studies often suffer from limited scope by focusing on a single language and a single linguistic formalism. In this study, we aim to investigate the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis, and whether the patterns are consistent across different languages. We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages, probing for two different syntactic annotation styles: Universal Dependencies (UD), prioritizing deep syntactic relations, and Surface-Syntactic Universal Dependencies (SUD), focusing on surface structure. We find that both models exhibit a preference for UD over SUD (with interesting variations across languages and layers), and that the strength of this preference is correlated with differences in tree shape.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank Miryam De Lhoneux, Paola Merlo, Sara Stymne, and Dan Zeman and the ACL reviewers and area chairs for valuable feedback on preliminary versions of this paper. We acknowledge the computational resources provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL (www.nlpl.eu). Joakim Nivre's contributions to this work were supported by grant 2016-01817 of the Swedish Research Council.
","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jin-etal-2017-combining","url":"https:\/\/aclanthology.org\/I17-1055.pdf","title":"Combining Lightly-Supervised Text Classification Models for Accurate Contextual Advertising","abstract":"In this paper we propose a lightly-supervised framework to rapidly build text classifiers for contextual advertising. In contextual advertising, advertisers often want to target a specific class of webpages most relevant to their product, which may not be covered by a pre-trained classifier. Moreover, the advertisers are only interested in the target class. Therefore, it is more suitable to model this as a one-class classification problem, in contrast to traditional classification problems where disjoint classes are defined a priori. We first apply two state-of-the-art lightly-supervised classification models, generalized expectation (GE) criteria (Druck et al., 2008) and multinomial na\u00efve Bayes (MNB) with priors (Settles, 2011), to one-class classification, where the user only provides a small list of labeled words for the target class. We fuse the two models together by using MNB to automatically enrich the constraints for GE training. We also explore an ensemble method to further improve the accuracy. On a corpus of real-time bidding requests, the proposed model achieves the highest average F1 of 0.69 and closes half of the gap between previous state-of-the-art lightly-supervised models and a fully-supervised MaxEnt model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"andrade-etal-2011-learning","url":"https:\/\/aclanthology.org\/W11-1203.pdf","title":"Learning the Optimal Use of Dependency-parsing Information for Finding Translations with Comparable Corpora","abstract":"Using comparable corpora to find new word translations is a promising approach for extending bilingual dictionaries (semi-) automatically. The basic idea is based on the assumption that similar words have similar contexts across languages. The context of a word is often summarized by using the bag-of-words in the sentence, or by using the words which are in a certain dependency position, e.g. the predecessors and successors. These different context positions are then combined into one context vector and compared across languages. However, previous research makes the (implicit) assumption that these different context positions should be weighted as equally important. Furthermore, only the same context positions are compared with each other, for example the successor position in Spanish is compared with the successor position in English. However, this is not necessarily always appropriate for languages like Japanese and English. To overcome these limitations, we suggest performing a linear transformation of the context vectors, which is defined by a matrix. We define the optimal transformation matrix by using a Bayesian probabilistic model, and show that it is feasible to find an approximate solution using Markov chain Monte Carlo methods.
Our experiments demonstrate that our proposed method consistently improves translation accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan). The first author is supported by the MEXT Scholarship and by an IBM PhD Scholarship Award.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vaidyanathan-etal-2015-alignment","url":"https:\/\/aclanthology.org\/W15-0111.pdf","title":"Alignment of Eye Movements and Spoken Language for Semantic Image Understanding","abstract":"Extracting meaning from images is a challenging task that has generated much interest in recent years. In domains such as medicine, image understanding requires special expertise. Experts' eye movements can act as pointers to important image regions, while their accompanying spoken language descriptions, informed by their knowledge and experience, call attention to the concepts and features associated with those regions. In this paper, we apply an unsupervised alignment technique, widely used in machine translation to align parallel corpora, to align observers' eye movements with the verbal narrations they produce while examining an image. The resulting alignments can then be used to create a database of low-level image features and high-level semantic annotations corresponding to perceptually important image regions. Such a database can in turn be used to automatically annotate new images. Initial results demonstrate the feasibility of a framework that draws on recognized bitext alignment algorithms for performing unsupervised automatic semantic annotation of image regions. Planned enhancements to the methods are also discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2013-approach","url":"https:\/\/aclanthology.org\/I13-1120.pdf","title":"An Approach of Hybrid Hierarchical Structure for Word Similarity Computing by HowNet","abstract":"Word similarity computing is an important and fundamental task in the field of natural language processing. Most word similarity methods perform well on synonyms, but not on word pairs whose similarity is vague. To overcome this problem, this paper proposes a hybrid hierarchical structure approach to computing Chinese word similarity, achieving fine-grained similarity results with HowNet 2008. The experimental results show that the method performs better at computing the similarity of synonyms and antonyms, including nouns, verbs and adjectives.
Besides, it performs stably on standard data provided by SemEval 2012.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"inie-derczynski-2021-idr","url":"https:\/\/aclanthology.org\/2021.hcinlp-1.16.pdf","title":"An IDR Framework of Opportunities and Barriers between HCI and NLP","abstract":"This paper presents a framework of opportunities and barriers\/risks between the two research fields Natural Language Processing (NLP) and Human-Computer Interaction (HCI). The framework is constructed by following an interdisciplinary research-model (IDR), combining field-specific knowledge with existing work in the two fields. The resulting framework is intended as a departure point for discussion and inspiration for research collaborations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bender-goss-grubbs-2008-semantic","url":"https:\/\/aclanthology.org\/W08-2203.pdf","title":"Semantic Representations of Syntactically Marked Discourse Status in Crosslinguistic Perspective","abstract":"This paper presents suggested semantic representations for different types of referring expressions in the format of Minimal Recursion Semantics and sketches syntactic analyses which can create them compositionally. We explore cross-linguistic harmonization of these representations, to promote interoperability and reusability of linguistic analyses. We follow Borthen and Haugereid (2005) in positing COG-ST ('cognitive status') as a feature on the syntax-semantics interface to handle phenomena associated with definiteness. Our proposal helps to unify the treatments of definiteness markers, demonstratives, overt pronouns and null anaphora across languages. In languages with articles, they contribute an existential quantifier and the appropriate value for COG-ST. In other languages, the COG-ST value is determined by an affix. The contribution of demonstrative determiners is decomposed into a COG-ST value, a quantifier, and proximity information, each of which can be contributed by a different kind of grammatical construction in a given language. Along with COG-ST, we posit a feature that distinguishes between pronouns (and null anaphora) that are sensitive to the identity of the referent of their antecedent and those that are sensitive to its type.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Toshiyuki Ogihara, Laurie Poulson, Jeanette Gundel, Jennifer Arnold, Francesca Gola, and the reviewers for STEP 2008 for helpful comments and discussion. Any remaining errors are our own. This material is based upon work supported by the National Science Foundation under Grant No. 
BCS-0644097.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-park-2015-statistical","url":"https:\/\/aclanthology.org\/Y15-1048.pdf","title":"A Statistical Modeling of the Correlation between Island Effects and Working-memory Capacity for L2 Learners","abstract":"The cause of island effects has evoked considerable debate within syntax and other fields of linguistics. The two competing approaches stand out: the grammatical analysis; and the working-memory (WM)-based processing analysis. In this paper we report three experiments designed to test one of the premises of the WM-based processing analysis: that the strength of island effects should vary as a function of individual differences in WM capacity. The results show that island effects present even for L2 learners are more likely attributed to grammatical constraints than to limited processing resources.","label_nlp4sg":1,"task":["Premise testing"],"method":["Statistical Modeling"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We are grateful to the three anonymous reviewers for their comments and suggestions on the earlier version of the paper. This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2015S1A5A2A01010233).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ding-gimpel-2019-latent","url":"https:\/\/aclanthology.org\/D19-1048.pdf","title":"Latent-Variable Generative Models for Data-Efficient Text Classification","abstract":"Generative classifiers offer potential advantages over their discriminative counterparts, namely in the areas of data efficiency, robustness to data shift and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text classifiers by introducing discrete latent variables into the generative story, and explore several graphical model configurations. We parameterize the distributions using standard neural architectures used in conditional language modeling and perform learning by directly maximizing the log marginal likelihood via gradient-based optimization, which avoids the need to do expectation-maximization. We empirically characterize the performance of our models on six text classification datasets. The choice of where to include the latent variable has a significant impact on performance, with the strongest results obtained when using the latent variable as an auxiliary conditioning variable in the generation of the textual input. This model consistently outperforms both the generative and discriminative classifiers in small-data settings. We analyze our model by using it for controlled generation, finding that the latent variable captures interpretable properties of the data, even with very small training sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Lingyu Gao, Qingming Tang, and Lifu Tu for helpful discussions, Michael Maire and Janos Simon for their useful feedback, the anonymous reviewers for their comments that improved this paper, and Google for a faculty research award to K. 
Gimpel that partially supported this research.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mukherjee-etal-2007-emergence","url":"https:\/\/aclanthology.org\/W07-1313.pdf","title":"Emergence of Community Structures in Vowel Inventories: An Analysis Based on Complex Networks","abstract":"In this work, we attempt to capture patterns of co-occurrence across vowel systems and at the same time figure out the nature of the force leading to the emergence of such patterns. For this purpose we define a weighted network where the vowels are the nodes and an edge between two nodes (read vowels) signify their co-occurrence likelihood over the vowel inventories. Through this network we identify communities of vowels, which essentially reflect their patterns of co-occurrence across languages. We observe that in the assortative vowel communities the constituent nodes (read vowels) are largely uncorrelated in terms of their features indicating that they are formed based on the principle of maximal perceptual contrast. However, in the rest of the communities, strong correlations are reflected among the constituent vowels with respect to their features indicating that it is the principle of feature economy that binds them together.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-melo-weikum-2009-extracting","url":"https:\/\/aclanthology.org\/W09-4407.pdf","title":"Extracting Sense-Disambiguated Example Sentences From Parallel Corpora","abstract":"Example sentences provide an intuitive means of grasping the meaning of a word, and are frequently used to complement conventional word definitions. When a word has multiple meanings, it is useful to have example sentences for specific senses (and hence definitions) of that word rather than indiscriminately lumping all of them together. In this paper, we investigate to what extent such sense-specific example sentences can be extracted from parallel corpora using lexical knowledge bases for multiple languages as a sense index. We use word sense disambiguation heuristics and a cross-lingual measure of semantic similarity to link example sentences to specific word senses. From the sentences found for a given sense, an algorithm then selects a smaller subset that can be presented to end users, taking into account both representativeness and diversity. 
Preliminary results show that a precision of around 80% can be obtained for a reasonable number of word senses, and that the subset selection yields convincing results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rios-etal-2021-biasing","url":"https:\/\/aclanthology.org\/2021.naacl-main.354.pdf","title":"On Biasing Transformer Attention Towards Monotonicity","abstract":"Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining. In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization. Experiments show that we can achieve largely monotonic behavior. Performance is mixed, with larger gains on top of RNN baselines. General monotonicity does not benefit transformer multihead attention, however, we see isolated improvements when only a subset of heads is biased towards monotonic behavior.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their feedback. This project has received funding from the Swiss National Science Foundation (project nos. 176727 and 191934).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cimino-etal-2013-linguistic","url":"https:\/\/aclanthology.org\/W13-1727.pdf","title":"Linguistic Profiling based on General--purpose Features and Native Language Identification","abstract":"In this paper, we describe our approach to native language identification and discuss the results we submitted as participants to the First NLI Shared Task. By resorting to a wide set of general-purpose features qualifying the lexical and grammatical structure of a text, rather than to ad hoc features specifically selected for the NLI task, we achieved encouraging results, which show that the proposed approach is general-purpose and portable across different tasks, domains and languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"segal-etal-2014-limsi","url":"https:\/\/aclanthology.org\/2014.iwslt-evaluation.15.pdf","title":"LIMSI English-French speech translation system","abstract":"This paper documents the systems developed by LIMSI for the IWSLT 2014 speech translation task (English\u2192French). The main objective of this participation was twofold: adapting different components of the ASR baseline system to the peculiarities of TED talks and improving the machine translation quality on the automatic speech recognition output data. 
For the latter task, various techniques have been considered: punctuation and number normalization, adaptation to ASR errors, as well as the use of structured output layer neural network models for speech data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to express their gratitude to Jan Niehues for his help and advice in the preparation of this submission.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"he-etal-2021-distiller","url":"https:\/\/aclanthology.org\/2021.sustainlp-1.13.pdf","title":"Distiller: A Systematic Study of Model Distillation Methods in Natural Language Processing","abstract":"Knowledge Distillation (KD) offers a natural way to reduce the latency and memory\/energy usage of massive pretrained models that have come to dominate Natural Language Processing (NLP) in recent years. While numerous sophisticated variants of KD algorithms have been proposed for NLP applications, the key factors underpinning the optimal distillation performance are often confounded and remain unclear. We aim to identify how different components in the KD pipeline affect the resulting performance and how much the optimal KD pipeline varies across different datasets\/tasks, such as the data augmentation policy, the loss function, and the intermediate representation for transferring the knowledge between teacher and student. To tease apart their effects, we propose Distiller, a meta KD framework that systematically combines a broad range of techniques across different stages of the KD pipeline, which enables us to quantify each component's contribution. Within Distiller, we unify commonly used objectives for distillation of intermediate representations under a universal mutual information (MI) objective and propose a class of MI-\u03b1 objective functions with better bias\/variance trade-off for estimating the MI between the teacher and the student. On a diverse set of NLP datasets, the best Distiller configurations are identified via large-scale hyper-parameter optimization. Our experiments reveal the following: 1) the approach used to distill the intermediate representations is the most important factor in KD performance, 2) among different objectives for intermediate distillation, MI-\u03b1 performs the best, and 3) data augmentation provides a large boost for small training datasets or small student networks. Moreover, we find that different datasets\/tasks prefer different KD algorithms, and thus propose a simple AutoDistiller algorithm that can recommend a good KD pipeline for a new dataset. * Work done while being an intern at Amazon Web Services.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"solovyev-loukachevitch-2021-comparing","url":"https:\/\/aclanthology.org\/2021.gwc-1.23.pdf","title":"Comparing Similarity of Words Based on Psychosemantic Experiment and RuWordNet","abstract":"In the paper we compare the structure of the Russian language thesaurus RuWordNet with the data of a psychosemantic experiment to identify semantically close words. 
The aim of the study is to find out to what extent the structure of RuWordNet corresponds to the intuitive ideas of native speakers about the semantic proximity of words. The respondents were asked to list synonyms to a given word. As a result of the experiment, we found that the respondents mainly mentioned not only synonyms but words that are in paradigmatic relations with the stimuli. The words of the mental sphere were chosen for the experiment. In 95% of cases, the words characterized in the experiment as semantically close were also close according to the thesaurus. In other cases, additions to the thesaurus were proposed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was financially supported by RFBR, grants \u2116 18-00-01238 and \u2116 18-00-01226 as parts of complex project \u2116 18-00-01240 (K).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"salawu-etal-2021-large","url":"https:\/\/aclanthology.org\/2021.woah-1.16.pdf","title":"A Large-Scale English Multi-Label Twitter Dataset for Cyberbullying and Online Abuse Detection","abstract":"In this paper, we introduce a new English Twitter-based dataset for online abuse and cyberbullying detection. Comprising 62,587 tweets, this dataset was sourced from Twitter using specific query terms designed to retrieve tweets with high probabilities of various forms of bullying and offensive content, including insult, profanity, sarcasm, threat, porn and exclusion. Analysis performed on the dataset confirmed common cyberbullying themes reported by other studies and revealed interesting relationships between the classes. The dataset was used to train a number of transformer-based deep learning models returning impressive results.","label_nlp4sg":1,"task":["Cyberbullying and Online Abuse Detection"],"method":["Multi - Label Twitter Dataset","transformer - based deep learning"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Good Health and Well-Being","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"finlayson-etal-2014-n2","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/48_Paper.pdf","title":"The N2 corpus: A semantically annotated collection of Islamist extremist stories","abstract":"We describe the N2 (Narrative Networks) Corpus, a new language resource. The corpus is unique in three important ways. First, every text in the corpus is a story, which is in contrast to other language resources that may contain stories or story-like texts, but are not specifically curated to contain only stories. Second, the unifying theme of the corpus is material relevant to Islamist Extremists, having been produced by or often referenced by them. Third, every text in the corpus has been annotated for 14 layers of syntax and semantics, including: referring expressions and co-reference; events, time expressions, and temporal relationships; semantic roles; and word senses. In cases where analyzers were not available to do high-quality automatic annotations, layers were manually double-annotated and adjudicated by trained annotators. The corpus comprises 100 texts and 42,480 words. Most of the texts were originally in Arabic but all are provided in English translation. 
We explain the motivation for constructing the corpus, the process for selecting the texts, the detailed contents of the corpus itself, the rationale behind the choice of annotation layers, and the annotation procedure.","label_nlp4sg":1,"task":["Data collection"],"method":["corpus"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The preparation of this article by Dr. Finlayson was funded by the U.S. Defense Advanced Research Project Agency (DARPA) under contract number D12AP00210. Drs. Halverson and Corman were also partially funded by DARPA under contract number D12AP00074, as well as by the Department of Defense Human Social Culture Behavior (HSCB) program under Office of Naval Research (ONR) contract number N00014-09-1-0872. We would like to thank the many scientists, engineers, and other scholars associated with DARPA's Narrative Networks (N2) program, for their input and support in collecting this data. Nevertheless, the views expressed here are solely our own, and do not necessarily reflect those of N2-affiliated researchers, DARPA, ONR, the U.S. military, or the U.S. government. We thank Chase Clow, research assistant at ASU, who helped mine stories from the OpenSource.gov data. We also thank our annotation team: project manager Jared Sprague, and annotators Julia Arnous, Wendy Austin, Valerie Best, Aerin Commins, Justin Daoust, Beryl Lipton, Josh Kearney, Matt Lord, Molly Moses, Sharon Mozgai, Zanny Perrino, Justin Smith, Jacob Stulberg, and Ashley Turner.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"shreevastava-foltz-2021-detecting","url":"https:\/\/aclanthology.org\/2021.clpsych-1.17.pdf","title":"Detecting Cognitive Distortions from Patient-Therapist Interactions","abstract":"An important part of Cognitive Behavioral Therapy (CBT) is to recognize and restructure certain negative thinking patterns that are also known as cognitive distortions. This project aims to detect these distortions using natural language processing. We compare and contrast different types of linguistic features as well as different classification algorithms and explore the limitations of applying these techniques on a small dataset. We find that pretrained Sentence-BERT embeddings to train an SVM classifier yields the best results with an F1-score of 0.79. Lastly, we discuss how this work provides insights into the types of linguistic features that are inherent in cognitive distortions.","label_nlp4sg":1,"task":["Detecting Cognitive Distortions"],"method":["Sentence - BERT embeddings","SVM"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank the annotators: Rebecca Lee, Josh Daniels, Changing Yang, and Beilei Xiang; and Prof. Martha Palmer for funding this work. We also acknowledge the support of the Computational Linguistics, Analytics, Search and Informatics (CLA-SIC) department at the University of Colorado, Boulder in creating an interdisciplinary environment that is critical for research of this nature. 
Lastly, the critical input from three anonymous reviewers is gratefully acknowledged in improving the quality of this paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vidal-gorene-etal-2020-recycling","url":"https:\/\/aclanthology.org\/2020.vardial-1.9.pdf","title":"Recycling and Comparing Morphological Annotation Models for Armenian Diachronic-Variational Corpus Processing","abstract":"Armenian is a language with significant variation and unevenly distributed NLP resources for different varieties. An attempt is made to process an RNN model for morphological annotation on the basis of different Armenian data (provided or not with morphologically annotated corpora), and to compare the annotation results of RNN and rule-based models. Different tests were carried out to evaluate the reuse of an unspecialized model of lemmatization and POS-tagging for under-resourced language varieties. The research focused on three dialects and further extended to Western Armenian with a mean accuracy of 94,00 % in lemmatization and 97,02% in POS-tagging, as well as a possible reusability of models to cover different other Armenian varieties. Interestingly, the comparison of an RNN model trained on Eastern Armenian with the Eastern Armenian National Corpus rule-based model applied to Western Armenian showed an enhancement of 19% in parsing. This model covers 88,79% of a short heterogeneous dataset in Western Armenian, and could be a baseline for a massive corpus annotation in that standard. It is argued that an RNN-based model can be a valid alternative to a rule-based one giving consideration to such factors as time-consumption, reusability for different varieties of a target language and significant qualitative results in morphological annotation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nederhof-1999-computational","url":"https:\/\/aclanthology.org\/J99-3002.pdf","title":"The Computational Complexity of the Correct-Prefix Property for TAGs","abstract":"A new upper bound is presented for the computational complexity of the parsing problem for TAGs, under the constraint that input is read from left to right in such a way that errors in the input are observed as soon as possible, which is called the \"correct-prefix property.\" The former upper bound, O(n^9), is now improved to O(n^6), which is the same as that of practical parsing algorithms for TAGs without the additional constraint of the correct-prefix property.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Most of the presented research was carried out within the framework of the Priority Programme Language and Speech Technology (TST) while the author was employed at the University of Groningen. The TST-Programme is sponsored by NWO (Dutch Organization for Scientific Research). 
An error in a previous version of this paper was found and corrected with the help of Giorgio Satta.","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kirk-etal-2021-memes","url":"https:\/\/aclanthology.org\/2021.woah-1.4.pdf","title":"Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset","abstract":"Hateful memes pose a unique challenge for current machine learning systems because their message is derived from both text- and visual-modalities. To this effect, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to 'memes in the wild'. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset. We find that memes in the wild differ in two key aspects: 1) Captions must be extracted via OCR, injecting noise and diminishing performance of multimodal models, and 2) Memes are more diverse than 'traditional memes', including screenshots of conversations or text on a plain background. This paper thus serves as a reality check for the current benchmark of hateful meme detection and its applicability for detecting real world hate.","label_nlp4sg":1,"task":["Assessing the Generalizability","hateful meme detection"],"method":["Evaluation"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"oh-etal-2013-question","url":"https:\/\/aclanthology.org\/P13-1170.pdf","title":"Why-Question Answering using Intra- and Inter-Sentential Causal Relations","abstract":"In this paper, we explore the utility of intra- and inter-sentential causal relations between terms or clauses as evidence for answering why-questions. To the best of our knowledge, this is the first work that uses both intra- and inter-sentential causal relations for why-QA. We also propose a method for assessing the appropriateness of causal relations as answers to a given question using the semantic orientation of excitation proposed by Hashimoto et al. (2012). By applying these ideas to Japanese why-QA, we improved precision by 4.4% against all the questions in our test set over the current state-of-the-art system for Japanese why-QA. In addition, unlike the state-of-the-art system, our system could achieve very high precision (83.2%) for 25% of all the questions in the test set by restricting its output to the confident answers only.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ali-etal-2015-multi","url":"https:\/\/aclanthology.org\/W15-3213.pdf","title":"Multi-Reference Evaluation for Dialectal Speech Recognition System: A Study for Egyptian ASR","abstract":"Dialectal Arabic has no standard orthographic representation. This creates a challenge when evaluating an Automatic Speech Recognition (ASR) system for dialect. 
Since the reference transcription text can vary widely from one user to another, we propose an innovative approach for evaluating dialectal speech recognition using Multi-References. For each recognized speech segments, we ask five different users to transcribe the speech. We combine the alignment for the multiple references, and use the combined alignment to report a modified version of Word Error Rate (WER). This approach is in favor of accepting a recognized word if any of the references typed it in the same form. Our method proved to be more effective in capturing many correctly recognized words that have multiple acceptable spellings. The initial WER according to each of the five references individually ranged between 76.4% to 80.9%. When considering all references combined, the Multi-References MR-WER was found to be 53%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shibata-kurohashi-2018-entity","url":"https:\/\/aclanthology.org\/P18-1054.pdf","title":"Entity-Centric Joint Modeling of Japanese Coreference Resolution and Predicate Argument Structure Analysis","abstract":"Predicate argument structure analysis is a task of identifying structured events. To improve this field, we need to identify a salient entity, which cannot be identified without performing coreference resolution and predicate argument structure analysis simultaneously. This paper presents an entity-centric joint model for Japanese coreference resolution and predicate argument structure analysis. Each entity is assigned an embedding, and when the result of both analyses refers to an entity, the entity embedding is updated. The analyses take the entity embedding into consideration to access the global information of entities. Our experimental results demonstrate the proposed method can improve the performance of the intersentential zero anaphora resolution drastically, which is a notoriously difficult task in predicate argument structure analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JST CREST Grant Number JPMJCR1301, Japan.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"glass-gliozzo-2018-discovering","url":"https:\/\/aclanthology.org\/P18-1147.pdf","title":"Discovering Implicit Knowledge with Unary Relations","abstract":"State-of-the-art relation extraction approaches are only able to recognize relationships between mentions of entity arguments stated explicitly in the text and typically localized to the same sentence. However, the vast majority of relations are either implicit or not sententially localized. This is a major problem for Knowledge Base Population, severely limiting recall. In this paper we propose a new methodology to identify relations between two entities, consisting of detecting a very large number of unary relations, and using them to infer missing entities. We describe a deep learning architecture able to learn thousands of such relations very efficiently by using a common deep learning based representation. 
Our approach largely outperforms state of the art relation extraction technology on a newly introduced web scale knowledge base population benchmark, that we release to the research community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marinelli-etal-2008-encoding","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/623_paper.pdf","title":"Encoding Terms from a Scientific Domain in a Terminological Database: Methodology and Criteria","abstract":"This paper reports on the main phases of a research which aims at enhancing a maritime terminological database by means of a set of terms belonging to meteorology. The structure of the terminological database, according to EuroWordNet\/ItalWordNet model is described; the criteria used to build corpora of specialized texts are explained as well as the use of the corpora as source for term selection and extraction. The contribution of the semantic databases is taken into account: on the one hand, the most recent version of the Princeton WordNet has been exploited as reference for comparing and evaluating synsets; on the other hand, the Italian WordNet has been employed as source for exporting synsets to be coded in the terminological resource. The set of semantic relations useful to codify new terms belonging to the discipline of meteorology is examined, revising the semantic relations provided by the IWN model, introducing new relations which are more suitably tailored to specific requirements either scientific or pragmatic. The need for a particular relation is highlighted to represent the mental association which is made when a term intuitively recalls another term, but they are neither synonyms nor connected by means of a hyperonymy\/hyponymy relation.","label_nlp4sg":1,"task":["enhancing a maritime terminological database"],"method":["Methodology report"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bornebusch-etal-2014-itac","url":"https:\/\/aclanthology.org\/S14-2059.pdf","title":"iTac: Aspect Based Sentiment Analysis using Sentiment Trees and Dictionaries","abstract":"This paper describes our approach for the fourth task of the SemEval 2014 challenge: Aspect Based Sentiment Analysis. Our system is designed to solve all four subtasks: (i) identifying aspect terms, (ii) determining the polarity of an aspect term, (iii) detecting aspect categories, and (iv) determining the polarity of a predefined aspect category. 
Our system is based on the Stanford sentiment tree.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2021-codi","url":"https:\/\/aclanthology.org\/2021.codi-sharedtask.8.pdf","title":"The CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis Resolution in Dialogue: A Cross-Team Analysis","abstract":"The CODI-CRAC 2021 shared task is the first shared task that focuses exclusively on anaphora resolution in dialogue and provides three tracks, namely entity coreference resolution, bridging resolution, and discourse deixis resolution. We perform a cross-task analysis of the systems that participated in the shared task in each of these tracks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by NSF Grant IIS-1528037. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the NSF.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kann-etal-2018-nyu","url":"https:\/\/aclanthology.org\/K18-3006.pdf","title":"The NYU System for the CoNLL--SIGMORPHON 2018 Shared Task on Universal Morphological Reinflection","abstract":"This paper describes the NYU submission to the CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection. Our system participates in the low-resource setting of Task 2, track 2, i.e., it predicts morphologically inflected forms in context: given a lemma and a context sentence, it produces a form of the lemma which might be used at an indicated position in the sentence. It is based on the standard attention-based LSTM encoder-decoder model, but makes use of multiple encoders to process all parts of the context as well as the lemma. In the official shared task evaluation, our system obtains the second best results out of 5 submissions for the competition it entered and strongly outperforms the official baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marcinczuk-etal-2017-liner2","url":"https:\/\/aclanthology.org\/W17-1413.pdf","title":"Liner2 --- a Generic Framework for Named Entity Recognition","abstract":"In the paper we present an adaptation of Liner2 framework to solve the BSNLP 2017 shared task on multilingual named entity recognition. 
The tool is tuned to recognize and lemmatize named entities for Polish.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Work financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oard-2007-invited","url":"https:\/\/aclanthology.org\/W07-0912.pdf","title":"Invited Talk: Lessons from the MALACH Project: Applying New Technologies to Improve Intellectual Access to Large Oral History Collections","abstract":"In this talk I will describe the goals of the MALACH project (Multilingual Access to Large Spoken Archives) and our research results. I'll begin by describing the unique characteristics of the oral history collection that we used, in which Holocaust survivors, witnesses and rescuers were interviewed in several languages. Each interview has been digitized and extensively catalogued by subject matter experts, thus producing a remarkably rich collection for the application of machine learning techniques. Automatic speech recognition techniques originally developed for the domain of conversational telephone speech were adapted to process these materials with word error rates that are adequate to provide useful features to support interactive search and automated clustering, boundary detection, and topic classification tasks. As I describe our results, I will focus particularly on the evaluation methods that we have used to assess the potential utility of this technology. I'll conclude with some remarks about possible future directions for research on applying new technologies to improve intellectual access to oral history and other spoken word collections.","label_nlp4sg":1,"task":["Improve Intellectual Access"],"method":["Automatic speech recognition"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"montariol-etal-2022-fine","url":"https:\/\/aclanthology.org\/2022.constraint-1.7.pdf","title":"Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance","abstract":"We propose our solution to the multimodal semantic role labeling task from the CONSTRAINT'22 workshop. The task aims at classifying entities in memes into classes such as \"hero\" and \"villain\". We use several pre-trained multi-modal models to jointly encode the text and image of the memes, and implement three systems to classify the role of the entities. We propose dynamic sampling strategies to tackle the issue of class imbalance. Finally, we perform qualitative analysis on the representations of the entities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to express our strong gratitude to Matt Post for the time he took providing manual annotation for our validation process. We also warmly thank the reviewers for their very valuable feedback. This work received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 
101021607 and the last author acknowledges the support of the French Research Agency via the ANR ParSiTi project (ANR16-CE33-0021).","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hjelm-2006-extraction","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/356_pdf.pdf","title":"Extraction of Cross Language Term Correspondences","abstract":"This paper describes a method for extracting translations of terms across languages, using parallel corpora. The extracted term correspondences are such that they are useful when performing query expansion for cross language information retrieval, or for bilingual lexicon extraction. The method makes use of the mutual information measure and allows for mapping between single word-to multi-word terms and vice versa. The method is scalable (accommodates addition or removal of data) and produces high quality results, while keeping the computational costs low enough for allowing on-the-fly translations in e.g., cross language information retrieval systems. The work was carried out in collaboration with Intrafind Software AG (Munich, Germany).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coope-etal-2020-span","url":"https:\/\/aclanthology.org\/2020.acl-main.11.pdf","title":"Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations","abstract":"We introduce Span-ConveRT, a lightweight model for dialog slot-filling which frames the task as a turn-based span extraction task. This formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as ConveRT (Henderson et al., 2019a). We show that leveraging such knowledge in Span-ConveRT is especially useful for few-shot learning scenarios: we report consistent gains over 1) a span extractor that trains representations from scratch in the target domain, and 2) a BERTbased span extractor. In order to inspire more work on span extraction for the slot-filling task, we also release RESTAURANTS-8K, a new challenging data set of 8,198 utterances, compiled from actual conversations in the restaurant booking domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their helpful suggestions and feedback. We are grateful to our colleagues at PolyAI, especially Georgios Spithourakis and I\u00f1igo Casanueva, for many fruitful discussions and suggestions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"berzak-etal-2015-contrastive","url":"https:\/\/aclanthology.org\/K15-1010.pdf","title":"Contrastive Analysis with Predictive Power: Typology Driven Estimation of Grammatical Error Distributions in ESL","abstract":"This work examines the impact of crosslinguistic transfer on grammatical errors in English as Second Language (ESL) texts. 
Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate that language specific error distributions in ESL writing can be predicted from the typological properties of the native language and their relation to the typology of English. Our typology driven model enables to obtain accurate estimates of such distributions without access to any ESL data for the target languages. Furthermore, we present a strategy for adjusting our method to low-resource languages that lack typological documentation using a bootstrapping approach which approximates native language typology from ESL texts. Finally, we show that our framework is instrumental for linguistic inquiry seeking to identify first language factors that contribute to a wide range of difficulties in second language acquisition.","label_nlp4sg":1,"task":["Typology Driven Estimation of Grammatical Error Distributions in ESL"],"method":["Contrastive Analysis"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2019-end","url":"https:\/\/aclanthology.org\/D19-1309.pdf","title":"An End-to-End Generative Architecture for Paraphrase Generation","abstract":"Generating high-quality paraphrases is a fundamental yet challenging natural language processing task. Despite the effectiveness of previous work based on generative models, there remain problems with exposure bias in recurrent neural networks, and often a failure to generate realistic sentences. To overcome these challenges, we propose the first end-to-end conditional generative architecture for generating paraphrases via adversarial training, which does not depend on extra linguistic information. Extensive experiments on four public datasets demonstrate the proposed method achieves state-of-the-art results, outperforming previous generative architectures on both automatic metrics (BLEU, METEOR, and TER) and human evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by DARPA, DOE, NIH, ONR and NSF.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-bai-2021-hub","url":"https:\/\/aclanthology.org\/2021.dravidianlangtech-1.27.pdf","title":"HUB@DravidianLangTech-EACL2021: Identify and Classify Offensive Text in Multilingual Code Mixing in Social Media","abstract":"This paper introduces the system description of the HUB team participating in DravidianLangTech-EACL2021: Offensive Language Identification in Dravidian Languages. The theme of this shared task is the detection of offensive content in social media. Among the known tasks related to offensive speech detection, this is the first task to detect offensive comments posted in social media comments in the Dravidian language. The task organizer team provided us with the code-mixing task data set mainly composed of three different languages: Malayalam, Kannada, and Tamil. The tasks on the code mixed data in these three different languages can be seen as three different comment\/post-level classification tasks. 
The task on the Malayalam data set is a five-category classification task, and the Kannada and Tamil language data sets are two six-category classification tasks. Based on our analysis of the task description and task data set, we chose to use the multilingual BERT model to complete this task. In this paper, we will discuss our fine-tuning methods, models, experiments, and results.","label_nlp4sg":1,"task":["Identify and Classify Offensive Text in Multilingual Code Mixing"],"method":["multilingual BERT"],"goal1":"Reduced Inequalities","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"zollmann-etal-2008-cmu","url":"https:\/\/aclanthology.org\/2008.iwslt-evaluation.2.pdf","title":"The CMU syntax-augmented machine translation system: SAMT on Hadoop with n-best alignments.","abstract":"We present the CMU Syntax Augmented Machine Translation System that was used in the IWSLT-08 evaluation campaign. We participated in the Full-BTEC data track for Chinese-English translation, focusing on transcript translation. For this year's evaluation, we ported the Syntax Augmented MT toolkit [1] to the Hadoop MapReduce [2] parallel processing architecture, allowing us to efficiently run experiments evaluating a novel \"wider pipelines\" approach to integrate evidence from N-best alignments into our translation models. We describe each step of the MapReduce pipeline as it is implemented in the open-source SAMT toolkit, and show improvements in translation quality by using N-best alignments in both hierarchical and syntax augmented translation systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work would not have been possible without access to the M45 cluster, which was generously granted by Yahoo!. Our research was in part supported by DARPA under contract HR0011-06-2-0001 (GALE).","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pyysalo-etal-2007-unification","url":"https:\/\/aclanthology.org\/W07-1004.pdf","title":"On the unification of syntactic annotations under the Stanford dependency scheme: A case study on BioInfer and GENIA","abstract":"Several incompatible syntactic annotation schemes are currently used by parsers and corpora in biomedical information extraction. The recently introduced Stanford dependency scheme has been suggested to be a suitable unifying syntax formalism. In this paper, we present a step towards such unification by creating a conversion from the Link Grammar to the Stanford scheme. Further, we create a version of the BioInfer corpus with syntactic annotation in this scheme. We present an application-oriented evaluation of the transformation and assess the suitability of the scheme and our conversion to the unification of the syntactic annotations of BioInfer and the GENIA Treebank. 
We find that a highly reliable conversion is both feasible to create and practical, increasing the applicability of both the parser and the corpus to information extraction.","label_nlp4sg":1,"task":["unification of syntactic annotations"],"method":["conversion from the Link Grammar to the Stanford scheme"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Erick Alphonse, Sophie Aubin and Adeline Nazarenko for providing us with the lp2lp software and the LLL conversion rules. We would also like to thank Andrew Brian Clegg and Adrian Shepherd for making available the data and evaluation tools used in their parser evaluation. This work was supported by the Academy of Finland.","year":2007,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"verma-vuppuluri-2015-new","url":"https:\/\/aclanthology.org\/R15-1087.pdf","title":"A New Approach for Idiom Identification Using Meanings and the Web","abstract":"There is a great deal of knowledge available on the Web, which represents a great opportunity for automatic, intelligent text processing and understanding, but the major problems are finding the legitimate sources of information and the fact that search engines provide page statistics not occurrences. This paper presents a new, domain independent, general-purpose idiom identification approach. Our approach combines the knowledge of the Web with the knowledge extracted from dictionaries. This method can overcome the limitations of current techniques that rely on linguistic knowledge or statistics. It can recognize idioms even when the complete sentence is not present, and without the need for domain knowledge. It is currently designed to work with text in English but can be extended to other languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"menezes-etal-2006-microsoft","url":"https:\/\/aclanthology.org\/W06-3124.pdf","title":"Microsoft Research Treelet Translation System: NAACL 2006 Europarl Evaluation","abstract":"The Microsoft Research translation system is a syntactically informed phrasal SMT system that uses a phrase translation model based on dependency treelets and a global reordering model based on the source dependency tree. These models are combined with several other knowledge sources in a log-linear manner. The weights of the individual components in the loglinear model are set by an automatic parametertuning method. We give a brief overview of the components of the system and discuss our experience with the Europarl data translating from English to Spanish.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rach-etal-2017-interaction","url":"https:\/\/aclanthology.org\/W17-5520.pdf","title":"Interaction Quality Estimation Using Long Short-Term Memories","abstract":"For estimating the Interaction Quality (IQ) in Spoken Dialogue Systems (SDS), the dialogue history is of significant importance. 
Previous works included this information manually in the form of precomputed temporal features into the classification process. Here, we employ a deep learning architecture based on Long Short-Term Memories (LSTM) to extract this information automatically from the data, thus estimating IQ solely by using current exchange features. We show that it is thereby possible to achieve competitive results as in a scenario where manually optimized temporal features have been included.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of a project that has received funding from the European Unions Horizon 2020 research and innovation programme under grant agreement No 645012.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ha-2004-practical","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/596.pdf","title":"A Practical Comparison of Different Filters Used in Automatic Term Extraction","abstract":"This paper discusses an experiment where different filters used in automatic term extraction (ATE) are practically compared. In the experiment, 8 filters, belong to three groups (lexical syntactic, statistical and semantic filters), are used to extract terms from two corpora from the domain of chemistry and of cancer research. The performance of each individual filter, and similarity among them are calculated. The experiment shows that: 1) simple filters maybe very efficient ones; 2) those filters are really different from each others; 3) the choice of which filters to be used is a domain, genre, and application-specific issue.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kilgour-etal-2014-2014","url":"https:\/\/aclanthology.org\/2014.iwslt-evaluation.9.pdf","title":"The 2014 KIT IWSLT speech-to-text systems for English, German and Italian","abstract":"This paper describes our German, Italian and English Speech-to-Text (STT) systems for the 2014 IWSLT TED ASR track. Our setup uses ROVER and confusion network combination from various subsystems to achieve a good overall performance. The individual subsystems are built by using different front-ends, (e.g., MVDR-MFCC or lMel), acoustic models (GMM or modular DNN) and phone sets and by training on various subsets of the training data. Decoding is performed in two stages, where the GMM systems are adapted in an unsupervised manner on the combination of the first stage outputs using VTLN, MLLR, and cMLLR. The combination setup produces a final hypothesis that has a significantly lower WER than any of the individual subsystems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors which to thank Roberto Gretter for providing an Italian pronunciation dictionary for us. 
The work leading to these results has received funding from the European Union under grant agreement n\u00b0 287658.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"foster-2000-incorporating","url":"https:\/\/aclanthology.org\/W00-0707.pdf","title":"Incorporating Position Information into a Maximum Entropy\/Minimum Divergence Translation Model","abstract":"I describe two methods for incorporating information about the relative positions of bilingual word pairs into a Maximum Entropy\/Minimum Divergence translation model. The better of the two achieves over 40% lower test corpus perplexity than an equivalent combination of a trigram language model and the classical IBM translation model 2.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was carried out as part of the TransType project at RALI, funded by the Natural Sciences and Engineering Research Council of Canada.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-etal-2020-knowledge-grounded","url":"https:\/\/aclanthology.org\/2020.emnlp-main.272.pdf","title":"Knowledge-Grounded Dialogue Generation with Pre-trained Language Models","abstract":"We study knowledge-grounded dialogue generation with pre-trained language models. To leverage the redundant external knowledge under capacity constraint, we propose equipping response generation defined by a pre-trained language model with a knowledge selection module, and an unsupervised approach to jointly optimizing knowledge selection and response generation with unlabeled dialogues. Empirical results on two benchmarks indicate that our model can significantly outperform state-of-the-art methods in both automatic evaluation and human judgment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0105200), the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058). Rui Yan was sponsored as the young fellow of Beijing Academy of Artificial Intelligence (BAAI). Rui Yan is the corresponding author.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2013-participant","url":"https:\/\/aclanthology.org\/N13-1135.pdf","title":"A Participant-based Approach for Event Summarization Using Twitter Streams","abstract":"Twitter offers an unprecedented advantage for live reporting of the events happening around the world. However, summarizing a Twitter event has been a challenging task that was not fully explored in the past. In this paper, we propose a participant-based event summarization approach that \"zooms in\" on the Twitter event streams at the participant level, detects the important sub-events associated with each participant using a novel mixture model that combines the \"burstiness\" and \"cohesiveness\" properties of the event tweets, and generates the event summaries progressively. We evaluate the proposed approach on different event types.
Results show that the participant-based approach can effectively capture the sub-events that would otherwise be shadowed by the long tail of other dominant sub-events, yielding summaries with considerably better coverage than the state-of-the-art.","label_nlp4sg":1,"task":["Event Summarization"],"method":["Participant - based Approach"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Part of this work was done during the first author's internship in Bosch Research and Technology Center. The work is also partially supported by NSF grants DMS-0915110 and HRD-0833093.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"smith-etal-2007-computationally","url":"https:\/\/aclanthology.org\/P07-1095.pdf","title":"Computationally Efficient M-Estimation of Log-Linear Structure Models","abstract":"We describe a new loss function, due to Jeon and Lin (2006), for estimating structured log-linear models on arbitrary features. The loss function can be seen as a (generative) alternative to maximum likelihood estimation with an interesting information-theoretic interpretation, and it is statistically consistent. It is substantially faster than maximum (conditional) likelihood estimation of conditional random fields (Lafferty et al., 2001), by an order of magnitude or more. We compare its performance and training time to an HMM, a CRF, an MEMM, and pseudolikelihood on a shallow parsing task. These experiments help tease apart the contributions of rich features and discriminative training, which are shown to be more than additive.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bexte-etal-2021-implicit","url":"https:\/\/aclanthology.org\/2021.unimplicit-1.2.pdf","title":"Implicit Phenomena in Short-answer Scoring Data","abstract":"Short-answer scoring is the task of assessing the correctness of a short text given as a response to a question that can come from a variety of educational scenarios. As only content, not form, is important, the exact wording, including the explicitness of an answer, should not matter. However, many state-of-the-art scoring models heavily rely on lexical information, be it word embeddings in a neural network or n-grams in an SVM. Thus, the exact wording of an answer might very well make a difference. We therefore quantify to what extent implicit language phenomena occur in short-answer datasets and examine the influence they have on automatic scoring performance. We find that the level of implicitness depends on the individual question, and that some phenomena are very frequent. Resolving implicit wording to explicit formulations indeed tends to improve automatic scoring performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"
This work was supported by the DFG RTG 2535: Knowledge- and Data-Based Personalization of Medicine at the Point of Care.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-sporleder-2009-cohesion","url":"https:\/\/aclanthology.org\/W09-3211.pdf","title":"A Cohesion Graph Based Approach for Unsupervised Recognition of Literal and Non-literal Use of Multiword Expressions","abstract":"We present a graph-based model for representing the lexical cohesion of a discourse. In the graph structure, vertices correspond to the content words of a text and edges connecting pairs of words encode how closely the words are related semantically. We show that such a structure can be used to distinguish literal and non-literal usages of multi-word expressions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the Cluster of Excellence \"Multimodal Computing and Interaction\".","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"briem-1990-automatisk","url":"https:\/\/aclanthology.org\/W89-0101.pdf","title":"Automatisk morfologisk analyse af islandsk tekst (Automatic morphological analysis of Icelandic text) [In Danish]","abstract":"Automatic Morphological Analysis of Icelandic Text. One of the projects worked on at the Institute of Lexicography at the University of Iceland is a frequency analysis of Icelandic vocabulary and grammar. The most time-consuming part of the work consists in morphological analysis of text samples containing in all more than half a million running words. For every single word, the analysis results in registration of the word class, the flexion form and the lemma to which the text word belongs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loaiciga-etal-2017-findings","url":"https:\/\/aclanthology.org\/W17-4801.pdf","title":"Findings of the 2017 DiscoMT Shared Task on Cross-lingual Pronoun Prediction","abstract":"We describe the design, the setup, and the evaluation results of the DiscoMT 2017 shared task on cross-lingual pronoun prediction. The task asked participants to predict a target-language pronoun given a source-language pronoun in the context of a sentence. We further provided a lemmatized target-language human-authored translation of the source sentence, and automatic word alignments between the source sentence words and the target-language lemmata. The aim of the task was to predict, for each target-language pronoun placeholder, the word that should replace it from a small, closed set of classes, using any type of information that can be extracted from the entire document. We offered four subtasks, each for a different language pair and translation direction: English-to-French, English-to-German, German-to-English, and Spanish-to-English. Five teams participated in the shared task, making submissions for all language pairs.
The evaluation results show that all participating teams outperformed two strong n-gram-based language model baseline systems by a sizable margin.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The organization of this task has received support from the following project: Discourse-Oriented Statistical Machine Translation funded by the Swedish Research Council (2012-916). We thank Andrei Popescu-Belis and Bonnie Webber for their advice in organizing this shared task. The work of Christian Hardmeier and Sara Stymne is part of the Swedish strategic research programme eSSENCE.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nyberg-frederking-2003-javelin","url":"https:\/\/aclanthology.org\/N03-4010.pdf","title":"JAVELIN: A Flexible, Planner-Based Architecture for Question Answering","abstract":"The JAVELIN system integrates a flexible, planning-based architecture with a variety of language processing modules to provide an open-domain question answering capability on free text. The demonstration will focus on how JAVELIN processes questions and retrieves the most likely answer candidates from the given text corpus. The operation of the system will be explained in depth through browsing the repository of data objects created by the system during each question answering session.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper was supported in part by a grant from ARDA under the AQUAINT Program Phase I. The current version of the JAVELIN system was conceived, designed and constructed with past and current members of the JAVELIN team at CMU, including: ","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ibekwe-sanjuan-2006-task","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/678_pdf.pdf","title":"A task-oriented framework for evaluating theme detection systems: A discussion paper","abstract":"This paper discusses the inherent difficulties in evaluating systems for theme detection. Such systems are based essentially on unsupervised clustering aiming to discover the underlying structure in a corpus of texts. As the structures are precisely unknown beforehand, it is difficult to devise a satisfactory evaluation protocol. Several problems are posed by cluster evaluation: determining the optimal number of clusters, cluster content evaluation, topology of the discovered structure. Each of these problems has been studied separately but some of the proposed metrics exhibit significant flaws. Moreover, no benchmark has been commonly agreed upon. Finally, it is necessary to distinguish between task-oriented and activity-oriented evaluation as the two frameworks imply different evaluation protocols.
Possible solutions to the activity-oriented evaluation can be sought from the data and text mining communities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delmonte-etal-2006-another","url":"https:\/\/aclanthology.org\/W06-2302.pdf","title":"Another Evaluation of Anaphora Resolution Algorithms and a Comparison with GETARUNS' Knowledge Rich Approach","abstract":"In this paper we will present an evaluation of current state-of-the-art algorithms for Anaphora Resolution based on a segment of the Susanne corpus (itself a portion of the Brown Corpus), a text type much more comparable to what is usually required at an international level for such application domains as Question\/Answering, Information Extraction, Text Understanding, Language Learning. The portion of text chosen has an adequate size which lends itself to significant statistical measurements: it is portion A, counting 35,000 tokens and some 1000 third-person pronominal expressions. The algorithms will then be compared to our system, GETARUNS, which incorporates an AR algorithm at the end of a pipeline of interconnected modules that instantiate standard architectures for NLP. F-measure values reached by our system are significantly higher (75%) than those of the other algorithms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to three anonymous reviewers who helped us improve the overall layout of the paper.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"elbers-etal-2012-proper","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/547_Paper.pdf","title":"Proper Language Resource Centers","abstract":"Language resource centers allow researchers to reliably deposit their structured data together with associated meta data and run services operating on this deposited data. We are looking into possibilities to create long-term persistency of both the deposited data and the services operating on this data. Challenges, both technical and non-technical, that need to be solved are the need to replicate more than just the data, proper identification of the digital objects in a distributed environment by making use of persistent identifiers, and the setup of a proper authentication and authorization domain, including the management of the authorization information on the digital objects. We acknowledge the investment that most language resource centers have made in their current infrastructure. Therefore one of the most important requirements is the loose coupling with existing infrastructures without the need to make many changes.
This shift from a single language resource center into a federated environment of many language resource centers is discussed in the context of a real-world center: The Language Archive, supported by the Max Planck Institute for Psycholinguistics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pizzolli-strapparava-2019-personality","url":"https:\/\/aclanthology.org\/W19-3411.pdf","title":"Personality Traits Recognition in Literary Texts","abstract":"Interesting stories are often built around interesting characters. Finding and detailing what makes an interesting character is a real challenge, but certainly a significant cue is the character's personality traits. Our exploratory work tests the adaptability of current personality traits theories to literary characters, focusing on the analysis of utterances in theatre scripts. And, conversely, we try to find significant traits for interesting characters. Our preliminary results demonstrate that our approach is reasonable. Using machine learning for gaining insight into the personality traits of fictional characters can make sense.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2020-adviser","url":"https:\/\/aclanthology.org\/2020.acl-demos.31.pdf","title":"ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents","abstract":"We present ADVISER, an open-source, multi-domain dialog system toolkit that enables the development of multi-modal (incorporating speech, text and vision), socially-engaged (e.g. emotion recognition, engagement level prediction and backchanneling) conversational agents. The final Python-based implementation of our toolkit is flexible, easy to use, and easy to extend not only for technically experienced users, such as machine learning researchers, but also for less technically experienced users, such as linguists or cognitive scientists, thereby providing a flexible platform for collaborative research.","label_nlp4sg":1,"task":["Developing Multi - modal , Multi - domain and Socially - engaged Conversational Agents"],"method":["toolkit"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mielens-etal-2015-parse","url":"https:\/\/aclanthology.org\/P15-1134.pdf","title":"Parse Imputation for Dependency Annotations","abstract":"Syntactic annotation is a hard task, but it can be made easier by allowing annotators flexibility to leave aspects of a sentence underspecified. Unfortunately, partial annotations are not typically directly usable for training parsers. We describe a method for imputing missing dependencies from sentences that have been partially annotated using the Graph Fragment Language, such that a standard dependency parser can then be trained on all annotations.
We show that this strategy improves performance over not using partial annotations for English, Chinese, Portuguese and Kinyarwanda, and that performance competitive with state-of-the-art unsupervised and weakly-supervised parsers can be reached with just a few hours of annotation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the U. S. Army Research Laboratory and the U. S. Army Research Office under contract\/grant number W911NF-10-1-0533","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lyu-etal-2005-modeling","url":"https:\/\/aclanthology.org\/O05-4004.pdf","title":"Modeling Pronunciation Variation for Bi-Lingual Mandarin\/Taiwanese Speech Recognition","abstract":"In this paper, a bilingual large vocabulary speech recognition experiment based on the idea of modeling pronunciation variations is described. The two languages under study are Mandarin Chinese and Taiwanese (Min-nan). These two languages are basically mutually unintelligible, and they have many words with the same Chinese characters and the same meanings, although they are pronounced differently. Observing the bilingual corpus, we found five types of pronunciation variations for Chinese characters. A one-pass, three-layer recognizer was developed that includes a combination of bilingual acoustic models, an integrated pronunciation model, and a tree-structure-based searching net. The recognizer's performance was evaluated under three different pronunciation models. The results showed that the character error rate with integrated pronunciation models was better than that with pronunciation models built using either the knowledge-based or the data-driven approach. The relative frequency ratio was also used as a measure to choose the best number of pronunciation variations for each Chinese character. Finally, the best character error rates in Mandarin and Taiwanese testing sets were found to be 16.2% and 15.0%, respectively, when the average number of pronunciations for one Chinese character was 3.9.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"falk-etal-2021-automatic","url":"https:\/\/aclanthology.org\/2021.iwcs-1.23.pdf","title":"Automatic Classification of Attributes in German Adjective-Noun Phrases","abstract":"Adjectives such as heavy (as in heavy rain) and windy (as in windy day) provide possible values for the attributes intensity and climate, respectively. The attributes themselves are not overtly realized and are in this sense implicit. While these attributes can be easily inferred by humans, their automatic classification poses a challenging task for computational models. We present the following contributions: (1) We gain new insights into the attribute selection task for German. More specifically, we develop computational models for this task that are able to generalize to unseen data. Moreover, we show that classification accuracy depends, inter alia, on the degree of polysemy of the lexemes involved, on the generalization potential of the training data and on the degree of semantic transparency of the adjective-noun pairs in question.
(2) We provide the first resource for computational and linguistic experiments with German adjective-noun pairs that can be used for attribute selection and related tasks. In order to safeguard against unwelcome memorization effects, we present an automatic data augmentation method based on a lexical resource that can increase the size of the training data to a large extent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our student assistants Daniela Rossmann, Alina Leippert and Mareile Winkler for their help with the annotations. We are also very grateful to the anonymous reviewers for their insightful and helpful comments that helped us to improve the paper. Financial support of the research reported here has been provided by the grant Modellierung lexikalisch-semantischer Beziehungen von Kollokationen awarded by the Deutsche Forschungsgemeinschaft (DFG).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mittal-1992-elaboration","url":"https:\/\/aclanthology.org\/P92-1049.pdf","title":"Elaboration in Object Descriptions Through Examples","abstract":"Examples are often used along with textual descriptions to help convey particular ideas, especially in instructional or explanatory contexts. These accompanying examples reflect information in the surrounding text, and in turn, also influence the text. Sometimes, examples replace possible (textual) elaborations in the description. It is thus clear that if object descriptions are to be generated, the system must incorporate strategies to handle examples. In this work, we shall investigate some of these issues in the generation of object descriptions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to C\u00e9cile Paris for critical discussions, different perspectives and bright ideas. This work was supported in part by the NASA-Ames grant NCC 2-520 and under DARPA contract DABT63-91 42-0025.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sevcikova-zabokrtsky-2014-word","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/501_Paper.pdf","title":"Word-Formation Network for Czech","abstract":"In the present paper, we describe the development of the lexical network DeriNet, which captures core word-formation relations on the set of around 266 thousand Czech lexemes. The network is currently limited to derivational relations because derivation is the most frequent and most productive word-formation process in Czech. This limitation is reflected in the architecture of the network: each lexeme is allowed to be linked up with just a single base word; composition as well as combined processes (composition with derivation) are thus not included. After a brief summarization of theoretical descriptions of Czech derivation and the state of the art of NLP approaches to Czech derivation, we discuss the linguistic background of the network and introduce the formal structure of the network and the semi-automatic annotation procedure. The network was initialized with a set of lexemes whose existence was supported by corpus evidence.
Derivational links were created using three sources of information: links delivered by a tool for morphological analysis, links based on an automatically discovered set of derivation rules, and links based on a grammar-based set of rules. Finally, we propose some research topics which could profit from the existence of such a lexical network.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been supported by GA\u010cR P406\/12\/P175. The work has been using language resources developed and\/or stored and\/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cardellino-etal-2017-legal","url":"https:\/\/aclanthology.org\/E17-2041.pdf","title":"Legal NERC with ontologies, Wikipedia and curriculum learning","abstract":"In this paper, we present a Wikipedia-based approach to develop resources for the legal domain. We establish a mapping between a legal domain ontology, LKIF (Hoekstra et al., 2007), and a Wikipedia-based ontology, YAGO (Suchanek et al., 2007), and through that we populate LKIF. Moreover, we use the mentions of those entities in Wikipedia text to train a specific Named Entity Recognizer and Classifier. We find that this classifier works well in Wikipedia, but, as could be expected, performance decreases in a corpus of judgments of the European Court of Human Rights. However, this tool will be used as a preprocessing step for human annotation. We resort to a technique called curriculum learning, aimed at overcoming problems of overfitting by learning increasingly more complex concepts. However, we find that in this particular setting, the method works best by learning from most specific to most general concepts, not the other way round.","label_nlp4sg":1,"task":["develop resources for the legal domain"],"method":["curriculum learning","Named Entity Recognizer","Classifier"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"nishino-etal-2019-generating","url":"https:\/\/aclanthology.org\/D19-1674.pdf","title":"Generating Natural Anagrams: Towards Language Generation Under Hard Combinatorial Constraints","abstract":"An anagram is a sentence or a phrase that is made by permuting the characters of an input sentence or a phrase. For example, \"Trims cash\" is an anagram of \"Christmas\". Existing automatic anagram generation methods can find possible combinations of words that form an anagram. However, they do not pay much attention to the naturalness of the generated anagrams. In this paper, we show that simple depth-first search can yield natural anagrams when it is combined with modern neural language models.
Human evaluation results show that the proposed method can generate significantly more natural anagrams than baseline methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"neubig-etal-2011-safety","url":"https:\/\/aclanthology.org\/I11-1108.pdf","title":"Safety Information Mining --- What can NLP do in a disaster---","abstract":"This paper describes efforts of NLP researchers to create a system to aid the relief efforts during the 2011 East Japan Earthquake. Specifically, we created a system to mine information regarding the safety of people in the disaster-stricken area from Twitter, a massive yet highly unorganized information source. We describe the large-scale collaborative effort to rapidly create robust and effective systems for word segmentation, named entity recognition, and tweet classification. As a result of our efforts, we were able to effectively deliver new information about the safety of over 100 people in the disaster-stricken area to a central repository for safety information.","label_nlp4sg":1,"task":["Safety Information Mining"],"method":["system to mine information"],"goal1":"Sustainable Cities and Communities","goal2":null,"goal3":null,"acknowledgments":"While too numerous to list here, the authors would like to sincerely thank all of the over 65 participants in the project. Without their generous contributions of time, resources, and expertise, the work described here would have never been possible. Finally, we thank Taiichi Hashimoto, Atsushi Fujita, Shinsuke Mori, and anonymous reviewers for their helpful comments on this manuscript.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":1,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"harbusch-etal-2003-domain","url":"https:\/\/aclanthology.org\/W03-2509.pdf","title":"Domain-Specific Disambiguation for Typing with Ambiguous Keyboards","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wiebe-etal-1998-mapping","url":"https:\/\/aclanthology.org\/W98-1126.pdf","title":"Mapping Collocational Properties into Machine Learning Features","abstract":"This paper investigates interactions between collocational properties and methods for organizing them into features for machine learning. In experiments performing an event categorization task, Wiebe et al. (1997a) found that different organizations are best for different properties. This paper presents a statistical analysis of the results across different machine learning algorithms. In the experiments, the relationship between property and organization was strikingly consistent across algorithms. This prompted further analysis of this relationship, and an investigation of criteria for recognizing beneficial ways to include collocational properties in machine learning experiments.
While many types of collocational properties and methods of organizing them into features have been used in NLP, systematic investigations of their interaction are rare.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Office of Naval Research under grant number N00014-95-1-0776. We thank Julie Maples for her work developing the annotation instructions and manually annotating the data, and Lei Duan for his work implementing the original experiments.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"joubert-lafourcade-2012-new","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/167_Paper.pdf","title":"A new dynamic approach for lexical networks evaluation","abstract":"Since September 2007, a large-scale lexical network for French has been under construction with methods based on popular consensus by means of games (under the JeuxDeMots project). To assess the quality of such a resource built by non-expert users (players of the games), we decided to adopt an approach similar to its construction, that is to say an evaluation by laymen on open class vocabulary. This evaluation is done using a Tip of the Tongue tool.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fleischhauer-2020-predicative","url":"https:\/\/aclanthology.org\/2020.paclic-1.63.pdf","title":"Predicative multi-word expressions in Persian","abstract":"Persian, like many other Asian languages, licenses the use of bare nouns in object position. Such sequences are often treated as multiword expressions (compound verbs\/light verb constructions, and pseudo-incorporation constructions). In the paper, I argue against a uniform treatment of all 'bare noun + verb' sequences in contemporary Persian. The paper presents criteria which allow one to distinguish light verb constructions from other superficially similar-looking predicational construction types.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was carried out as part of the research project 'Funktionsverbgef\u00fcge: Familien & Komposition' ('Light verb constructions: Families & composition'; HE 8721\/1-1) funded by the Deutsche Forschungsgemeinschaft (DFG). I would like to thank Mozhgan Neisani for help with the language data.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zlatev-1994-english","url":"https:\/\/aclanthology.org\/W93-0428.pdf","title":"From English to PFO: A Formal Semantic Parser","abstract":"present a formalism called PFO (Predicate logic with Flexibly binding Operators) which is said to be well-suited for formalizing the semantics of natural languages. Among other things, PFO permits a compositional formalization of \"donkey sentences\" of the type If a farmer owns a donkey he beats it.\nIn this paper we present a formal procedure and its computer implementation (written in PROLOG) that translates from a limited fragment of English to PFO, i.e. a formal semantic parser.
The translation is done in two steps: first a DCG grammar delivers a parse tree for the sentence; then a number of translation rules that operate on (sub)trees apply to the analysed sentence in all possible orders which may give rise to different \"interpretations\". For example the sentence Every man does not love a woman receives 6 different formalizations corresponding to the 6 possible orders of applying the universal quantification rule, the existence quantification rule and the negation rule. Other ambiguities which the parser accounts for are those between anaphoric and deictic interpretations of pronouns: for the sentence in the first paragraph the parser will provide a formalization in which the variable for he is co-indexed with that for farmer (the \"anaphoric\" interpretation) and a formalization with a new variable (the \"deictic\" one).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sarkar-etal-2000-experiments","url":"https:\/\/aclanthology.org\/W00-1605.pdf","title":"Some Experiments on Indicators of Parsing Complexity for Lexicalized Grammars","abstract":"In this paper, we identify syntactic lexical ambiguity and sentence complexity as factors that contribute to parsing complexity in fully lexicalized grammar formalisms such as Lexicalized Tree Adjoining Grammars. We also report on experiments that explore the effects of these factors on parsing complexity. We discuss how these constraints can be exploited in improving efficiency of parsers for such grammar formalisms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ide-etal-2003-international","url":"https:\/\/aclanthology.org\/W03-0804.pdf","title":"International Standard for a Linguistic Annotation Framework","abstract":"This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard will provide an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The outline described here results from a meeting of approximately 20 experts in the field, who determined the principles and fundamental structure of the framework. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"post-etal-2012-constructing","url":"https:\/\/aclanthology.org\/W12-3152.pdf","title":"Constructing Parallel Corpora for Six Indian Languages via Crowdsourcing","abstract":"Recent work has established the efficacy of Amazon's Mechanical Turk for constructing parallel corpora for machine translation research. We apply this to building a collection of parallel corpora between English and six languages from the Indian subcontinent: Bengali, Hindi, Malayalam, Tamil, Telugu, and Urdu. 
These languages are low-resource and under-studied, and they exhibit linguistic phenomena that are difficult for machine translation. We conduct a variety of baseline experiments and analysis, and release the data to the community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Lexi Birch for discussions about strategies for selecting and assembling the data sets. This research was supported in part by gifts from Google and Microsoft, the Euro-MatrixPlus project funded by the European Commission (7th Framework Programme), and a DARPA grant entitled \"Crowdsourcing Translation\". The views in this paper are the authors' alone.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trajanovski-etal-2021-text","url":"https:\/\/aclanthology.org\/2021.naacl-industry.1.pdf","title":"When does text prediction benefit from additional context? An exploration of contextual signals for chat and email messages","abstract":"Email and chat communication tools are increasingly important for completing daily tasks. Accurate real-time phrase completion can save time and bolster productivity. Modern text prediction algorithms are based on large language models which typically rely on the prior words in a message to predict a completion. We examine how additional contextual signals (from previous messages, time, and subject) affect the performance of a commercial text prediction model. We compare contextual text prediction in chat and email messages from two of the largest commercial platforms, Microsoft Teams and Outlook, finding that contextual signals contribute to performance differently between these scenarios. On emails, time context is most beneficial, with small relative gains of 2% over baseline. In chat scenarios, by contrast, using a tailored set of previous messages as context yields relative improvements over the baseline between 9.3% and 18.6% across various critical service-oriented text prediction metrics.","label_nlp4sg":1,"task":["text prediction"],"method":["exploration of contextual signals"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the members of Microsoft Search, Assistant and Intelligence (MSAI) group for their useful comments and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sasano-etal-2013-simple","url":"https:\/\/aclanthology.org\/I13-1019.pdf","title":"A Simple Approach to Unknown Word Processing in Japanese Morphological Analysis","abstract":"This paper presents a simple but effective approach to unknown word processing in Japanese morphological analysis, which handles 1) unknown words that are derived from words in a pre-defined lexicon and 2) unknown onomatopoeias. Our approach leverages derivation rules and onomatopoeia patterns, and correctly recognizes certain types of unknown words.
Experiments revealed that our approach recognized about 4,500 unknown words in 100,000 Web sentences with only 80 harmful side effects and a 6% loss in speed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jensen-etal-2020-buhscitu","url":"https:\/\/aclanthology.org\/2020.semeval-1.104.pdf","title":"Buhscitu at SemEval-2020 Task 7: Assessing Humour in Edited News Headlines Using Hand-Crafted Features and Online Knowledge Bases","abstract":"This paper describes our system to assess humour intensity in edited news headlines as part of our participation in the 7th task of SemEval-2020 on \"Humor, Emphasis and Sentiment\". Various factors need to be accounted for in order to assess the funniness of an edited headline. We propose an architecture that uses hand-crafted features, knowledge bases and a language model to understand humour, and combines them in a regression model. Our system outperforms two baselines. In general, automatic humour assessment remains a difficult task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the HPC support at ITU, especially Frey Alfredsson, for their support with the computational resources used in this work.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"singh-bandyopadhyay-2008-morphology","url":"https:\/\/aclanthology.org\/I08-3015.pdf","title":"Morphology Driven Manipuri POS Tagger","abstract":"A good POS tagger is a critical component of a machine translation system and other related NLP applications where an appropriate POS tag will be assigned to individual words in a collection of texts. There is not enough POS-tagged corpus available in the Manipuri language, ruling out machine learning approaches for a POS tagger in the language. A morphology driven Manipuri POS tagger that uses three dictionaries containing root words, prefixes and suffixes has been designed and implemented using the affix information irrespective of the context of the words. We have tested the current POS tagger on 3784 sentences containing 10917 unique words. The POS tagger demonstrated an accuracy of 69%. Among the incorrectly tagged 31% words, 23% were unknown words (including 9% named entities) and 8% known words were wrongly tagged.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maynard-etal-2003-multilingual","url":"https:\/\/aclanthology.org\/E03-2009.pdf","title":"Multilingual adaptations of a reusable information extraction tool","abstract":"[Abstract not recoverable: the original consists of multilingual example sentences in Hebrew, Hindi, Japanese, Korean and Marathi whose scripts were garbled during PDF text extraction.]","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jones-etal-2020-call","url":"https:\/\/aclanthology.org\/2020.lrec-1.816.pdf","title":"Call My Net 2: A New Resource for Speaker Recognition","abstract":"We introduce the Call My Net 2 (CMN2) Corpus, a new resource for speaker recognition featuring Tunisian Arabic conversations between friends and family, incorporating both traditional telephony and VoIP data. The corpus contains data from over 400 Tunisian Arabic speakers collected via a custom-built platform deployed in Tunis, with each speaker making 10 or more calls each lasting up to 10 minutes. Calls include speech in various realistic and natural acoustic settings, both noisy and non-noisy. Speakers used a variety of handsets, including landline and mobile devices, and made VoIP calls from tablets or computers. All calls were subject to a series of manual and automatic quality checks, including speech duration, audio quality, language identity and speaker identity. The CMN2 corpus has been used in two NIST Speaker Recognition Evaluations (SRE18 and SRE19), and the SRE test sets as well as the full CMN2 corpus will be published in the Linguistic Data Consortium Catalog. We describe CMN2 corpus requirements, the telephone collection platform, and procedures for call collection. We review properties of the CMN2 dataset and discuss features of the corpus that distinguish it from prior SRE collection efforts, including some of the technical challenges encountered with collecting VoIP data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"LDC would like to thank Craig Greenberg and Omid Sadjadi at NIST, and Doug Reynolds and Elliot Singer at Lincoln Laboratories, MIT for their contributions to corpus planning and feedback on collected data. The authors gratefully acknowledge the contributions of Dr. Mohamed Maamouri who provided expert input on aspects of Tunisian Arabic and oversaw the corpus collection efforts in Tunis.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-dolan-2011-collecting","url":"https:\/\/aclanthology.org\/P11-1020.pdf","title":"Collecting Highly Parallel Data for Paraphrase Evaluation","abstract":"A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. These metrics are simple and efficient to compute, and experiments show that they correlate highly with human judgments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to everyone in the NLP group at Microsoft Research and Natural Language Learning group at UT Austin for helpful discussions and feedback.
We thank Chris Brockett, Raymond Mooney, Katrin Erk, Jason Baldridge and the anonymous reviewers for helpful comments on a previous draft.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"racca-etal-2011-give","url":"https:\/\/aclanthology.org\/W11-2848.pdf","title":"The GIVE-2.5 C Generation System","abstract":"In this paper we describe the C generation system from the Universidad Nacional de C\u00f3rdoba (Argentina) as embodied during the 2011 GIVE 2.5 challenge. The C system has two distinguishing characteristics. First, its navigation and referring strategies are based on the area visible to the player, making the system independent of GIVE's internal representation of areas (such as rooms). As a result, the system portability to other virtual environments is enhanced. Second, the system adapts classical grounding models to the task of instruction giving in virtual worlds. The simple grounding processes implemented (for referents, game concepts and game progress) seem to have an impact on the evaluation results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tian-etal-2020-joint","url":"https:\/\/aclanthology.org\/2020.coling-main.187.pdf","title":"Joint Chinese Word Segmentation and Part-of-speech Tagging via Multi-channel Attention of Character N-grams","abstract":"Chinese word segmentation (CWS) and part-of-speech (POS) tagging are two fundamental tasks for Chinese language processing. Previous studies have demonstrated that jointly performing them can be an effective one-step solution to both tasks and this joint task can benefit from a good modeling of contextual features such as n-grams. However, their work on modeling such contextual features is limited to concatenating the features or their embeddings directly with the input embeddings without distinguishing whether the contextual features are important for the joint task in the specific context. Therefore, their models for the joint task could be misled by unimportant contextual information. In this paper, we propose a character-based neural model for the joint task enhanced by multi-channel attention of n-grams. In the attention module, n-gram features are categorized into different groups according to several criteria, and n-grams in each group are weighted and distinguished according to their importance for the joint task in the specific context. To categorize n-grams, we try two criteria in this study, i.e., n-gram frequency and length, so that n-grams having different capabilities of carrying contextual information are discriminatively learned by our proposed attention module. 
Experimental results on five benchmark datasets for CWS and POS tagging demonstrate that our approach outperforms strong baseline models and achieves state-of-the-art performance on all five datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by The Chinese University of Hong Kong (Shenzhen) under University Development Fund UDF01001809.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pasquer-etal-2018-varide","url":"https:\/\/aclanthology.org\/W18-4932.pdf","title":"VarIDE at PARSEME Shared Task 2018: Are Variants Really as Alike as Two Peas in a Pod?","abstract":"We describe the VarIDE system (standing for Variant IDEntification) which participated in edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions (VMWEs). Our system focuses on the task of VMWE variant identification by using morphosyntactic information in the training data to predict if candidates extracted from the test corpus could be idiomatic, thanks to a naive Bayes classifier. We report results for 19 languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the IC1207 PARSEME COST action 9 and by the PARSEME-FR project (ANR-14-CERA-0001). 10 ","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"merrill-2019-sequential","url":"https:\/\/aclanthology.org\/W19-3901.pdf","title":"Sequential Neural Networks as Automata","abstract":"This work attempts to explain the types of computation that neural networks can perform by relating them to automata. We first define what it means for a real-time network with bounded precision to accept a language. A measure of network memory follows from this definition. We then characterize the classes of languages acceptable by various recurrent networks, attention, and convolutional networks. We find that LSTMs function like counter machines and relate convolutional networks to the subregular hierarchy. Overall, this work attempts to increase our understanding and ability to interpret neural networks through the lens of theory. These theoretical insights help explain neural computation, as well as the relationship between neural networks and natural language grammar.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thank you to Dana Angluin and Robert Frank for their insightful advice and support on this project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"salesky-etal-2019-exploring","url":"https:\/\/aclanthology.org\/P19-1179.pdf","title":"Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation","abstract":"Previous work on end-to-end translation from speech has primarily used frame-level features as speech representations, which creates longer, sparser sequences than text. We show that a na\u00efve method to create compressed phoneme-like speech representations is far more effective and efficient for translation than traditional frame-level speech features. 
Specifically, we generate phoneme labels for speech frames and average consecutive frames with the same label to create shorter, higher-level source sequences for translation. We see improvements of up to 5 BLEU on both our high- and low-resource language pairs, with a reduction in training time of 60%. Our improvements hold across multiple data sizes and two language pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delmonte-marchesi-2017-semantically","url":"https:\/\/aclanthology.org\/W17-7402.pdf","title":"A semantically-based approach to the annotation of narrative style","abstract":"This work describes the annotation of the novel \"The Solid Mandala\" (Patrick White, 1966), carried out combining sentiment and opinion mining at the character level with the Appraisal Theory framework, here used to identify evaluative statements and their contribution to the social dimension of the text. Our approach was inspired by research on the correlation between White's style and the personality of his main characters. The annotation was manually executed by the second author using an XML standard markup system and double-checked by the first author. In this paper we comment on the selected features, focusing on the ones acquiring specialized meaning in the context of the novel, and provide results in terms of quantitative data. Comparing them, we are able to extract story units in which special or significant events take place, and to predict the presence of similar units in the narrative by detecting concentrations of features. Eventually, collecting all the annotations made available a lexicon where all ambiguities are clearly identifiable and verifiable. The lexicon will be used in the future for the automatic annotation of other novels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nn-1997-logos","url":"https:\/\/aclanthology.org\/A97-2009.pdf","title":"Logos Machine Translation System","abstract":"Logos Corporation has been involved in machine translation R&D for over 27 years. From an English-Vietnamese system produced in 1972 to the latest releases of our software, we have striven to produce the best machine translation (MT) software available. Today the Logos system is one of the best commercial MT systems on the market. Our architecture, coupled with experience in the commercial sector, has made us a leader in providing solutions in our users' translation work.
Logos does not claim to replace human translators; rather, we seek to enhance the professional translator's work environment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beigman-klebanov-beigman-2010-empirical","url":"https:\/\/aclanthology.org\/N10-1067.pdf","title":"Some Empirical Evidence for Annotation Noise in a Benchmarked Dataset","abstract":"A number of recent articles in computational linguistics venues called for a closer examination of the type of noise present in annotated datasets used for benchmarking (Reidsma and Carletta, 2008; Beigman Klebanov and Beigman, 2009). In particular, Beigman Klebanov and Beigman articulated a type of noise they call annotation noise and showed that in the worst case such noise can severely degrade the generalization ability of a linear classifier (Beigman and Beigman Klebanov, 2009). In this paper, we provide quantitative empirical evidence for the existence of this type of noise in a recently benchmarked dataset. The proposed methodology can be used to zero in on unreliable instances, facilitating generation of cleaner gold standards for benchmarking.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers of this and the previous draft for helping us improve the paper significantly. We also thank Amar Cheema for his advice on AMT.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abdullah-chali-2020-towards","url":"https:\/\/aclanthology.org\/2020.inlg-1.11.pdf","title":"Towards Generating Query to Perform Query Focused Abstractive Summarization using Pre-trained Model","abstract":"Query Focused Abstractive Summarization (QFAS) represents an abstractive summary from the source document based on a given query. To measure the performance of abstractive summarization tasks, different datasets have been broadly used. However, for QFAS tasks, only a limited number of datasets have been used, which are comparatively small and provide single sentence summaries. This paper presents a query generation approach, where we considered most similar words between documents and summaries for generating queries. By implementing our query generation approach, we prepared two relatively large datasets, namely CNN\/DailyMail and Newsroom which contain multiple sentence summaries and can be used for future QFAS tasks. We also implemented a pre-processing approach to perform QFAS tasks using a pretrained language model, BERTSUM. In our pre-processing approach, we sorted the sentences of the documents from the most query-related sentences to the least query-related sentences. Then, we fine-tuned the BERTSUM model for generating the abstractive summaries. We also experimented on one of the largely used datasets, Debatepedia, to compare our QFAS approach with other models. The experimental results show that our approach outperforms the state-of-the-art models on three ROUGE scores.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their useful comments. 
The research reported in this paper was conducted at the University of Lethbridge and supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada discovery grant and the University of Lethbridge.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kulkarni-etal-2018-annotated","url":"https:\/\/aclanthology.org\/N18-2016.pdf","title":"An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols","abstract":"We describe an effort to annotate a corpus of natural language instructions consisting of 622 wet lab protocols to facilitate automatic or semi-automatic conversion of protocols into a machine-readable format and benefit biological research. Experimental results demonstrate the utility of our corpus for developing machine learning approaches to shallow semantic parsing of instructional texts. We make our annotated Wet Lab Protocol Corpus available to the research community. The dataset is available on the authors' websites.","label_nlp4sg":1,"task":["Machine Reading of Instructions in Wet Lab Protocols"],"method":["Annotated Corpus"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the annotators: Bethany Toma, Esko Kautto, Sanaya Shroff, Alex Jacobs, Berkay Kaplan, Colins Sullivan, Junfa Zhu, Neena Baliga and Vardaan Gangal. We would like to thank Marie-Catherine de Marneffe and anonymous reviewers for their feedback.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mayfield-black-2019-stance","url":"https:\/\/aclanthology.org\/W19-2108.pdf","title":"Stance Classification, Outcome Prediction, and Impact Assessment: NLP Tasks for Studying Group Decision-Making","abstract":"In group decision-making, the nuanced process of conflict and resolution that leads to consensus formation is closely tied to the quality of decisions made. Behavioral scientists rarely have rich access to process variables, though, as unstructured discussion transcripts are difficult to analyze. Here, we define ways for NLP researchers to contribute to the study of groups and teams. We introduce three tasks alongside a large new corpus of over 400,000 group debates on Wikipedia. We describe the tasks and their importance, then provide baselines showing that BERT contextualized word embeddings consistently outperform other language representations.","label_nlp4sg":1,"task":["Stance Classification","Studying Group Decision - Making"],"method":["BERT","word embeddings"],"goal1":"Partnership for the goals","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":1} {"ID":"christodoulopoulos-2018-knowledge","url":"https:\/\/aclanthology.org\/W18-4007.pdf","title":"Knowledge Representation and Extraction at Scale","abstract":"is a Research Scientist at Amazon Research Cambridge (UK), working on knowledge extraction and verification. He got his PhD at the University of Edinburgh, where he studied the underlying structure of syntactic categories across languages. 
Before joining Amazon, he was a postdoctoral researcher at the University of Illinois working on semantic role labeling and psycholinguistic models of language acquisition. He has experience in science communication including giving public talks and producing a science podcast.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weischedel-black-1980-responding","url":"https:\/\/aclanthology.org\/J80-2003.pdf","title":"Responding Intelligently to Unparsable Inputs","abstract":"All natural language systems are likely to receive inputs for which they are unprepared. The system must be able to respond to such inputs by explicitly indicating the reasons the input could not be understood, so that the user will have precise information for trying to rephrase the input. If natural language communication to data bases, to expert consultant systems, or to any other practical system is to be accepted by other than computer personnel, this is an absolute necessity. This paper presents several ideas for dealing with parts of this broad problem. One is the use of presupposition to detect user assumptions. The second is relaxation of tests while parsing. The third is a general technique for responding intelligently when no parse can be found. All of these ideas have been implemented and tested in one of two natural language systems. Some of the ideas are heuristics that might be employed by humans; others are engineering solutions for the problem of practical natural language systems. This paper presents three ideas for giving useful feedback when a user exceeds the system's model. The ideas help to identify and explain the system's problem in processing an input in many cases, but do not perform the next step, which is suggesting how the user might rephrase the input. These ideas have been tested in one of two systems: (1) an intelligent tutor for instruction in a foreign language and (2) a system which computes the","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the many valuable contributions of the referees and George Heidorn to improving the exposition. Norm Sondheimer also contributed much in many discussions of our ideas.","year":1980,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"behera-etal-2013-satty","url":"https:\/\/aclanthology.org\/S13-2037.pdf","title":"SATTY : Word Sense Induction Application in Web Search Clustering","abstract":"The aim of this paper is to perform Word Sense Induction (WSI), which clusters web search results and produces a diversified list of search results. It describes the WSI system developed for Task 11 of SemEval-2013. 
This paper implements the idea of monotone submodular function optimization using a greedy algorithm.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lascarides-asher-1993-semantics","url":"https:\/\/aclanthology.org\/E93-1030.pdf","title":"A Semantics and Pragmatics for the Pluperfect","abstract":"We offer a semantics and pragmatics of the pluperfect in narrative discourse. We reexamine in a formal model of implicature, how the reader's knowledge about the discourse, Gricean-maxims and causation contribute to the meaning of the pluperfect. By placing the analysis in a theory where the interactions among these knowledge resources can be precisely computed, we overcome some problems with previous Reichenbachian approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Mario Borillo, Myriam Bras, Mimo Caenepeel, Uwe Reyle and two anonymous reviewers for their helpful comments on earlier drafts of this paper.","year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"orasan-etal-2003-build","url":"https:\/\/aclanthology.org\/E03-1064.pdf","title":"How to build a QA system in your back-garden: application for Romanian","abstract":"Even though the question answering (QA) field appeared only in recent years, there are systems for English which obtain good results for open-domain questions. The situation is very different for other languages, mainly due to the lack of NLP resources which are normally used by QA systems. In this paper, we present a project which develops a QA system for Romanian. The challenges we face and decisions we have to make are discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-etal-2022-mtl","url":"https:\/\/aclanthology.org\/2022.nlp4convai-1.11.pdf","title":"MTL-SLT: Multi-Task Learning for Spoken Language Tasks","abstract":"Language understanding in speech-based systems has attracted extensive interest from both academic and industrial communities in recent years with the growing demand for voice-based applications. Prior works focus on independent research by the automatic speech recognition (ASR) and natural language processing (NLP) communities, or on jointly modeling the speech and NLP problems focusing on a single dataset or single NLP task. To facilitate the development of spoken language research, we introduce MTL-SLT, a multi-task learning framework for spoken language tasks. MTL-SLT takes speech as input, and outputs transcription, intent, named entities, summaries, and answers to text queries, supporting the tasks of spoken language understanding, spoken summarization and spoken question answering respectively. The proposed framework benefits from three key aspects: 1) pre-trained sub-networks of ASR model and language model; 2) multitask learning objective to exploit shared knowledge from different tasks; 3) end-to-end training of ASR and downstream NLP task based on sequence loss. 
We obtain state-of-the-art results on spoken language understanding tasks such as SLURP and ATIS. Spoken summarization results are reported on a new dataset: Spoken-Gigaword.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2020-dusql","url":"https:\/\/aclanthology.org\/2020.emnlp-main.562.pdf","title":"DuSQL: A Large-Scale and Pragmatic Chinese Text-to-SQL Dataset","abstract":"Due to the lack of labeled data, previous research on text-to-SQL parsing mainly focuses on English. Representative English datasets include ATIS, WikiSQL, Spider, etc. This paper presents DuSQL, a large-scale and pragmatic Chinese dataset for the cross-domain text-to-SQL task, containing 200 databases, 813 tables, and 23,797 question\/SQL pairs. Our new dataset has three major characteristics. First, by manually analyzing questions from several representative applications, we try to figure out the true distribution of SQL queries in real-life needs. Second, DuSQL contains a considerable proportion of SQL queries involving row or column calculations, motivated by our analysis on the SQL query distributions. Finally, we adopt an effective data construction framework via human-computer collaboration. The basic idea is automatically generating SQL queries based on the SQL grammar and constrained by the given database. This paper describes in detail the construction process and data statistics of DuSQL. Moreover, we present and compare performance of several open-source text-to-SQL parsers with minor modification to accommodate Chinese, including a simple yet effective extension to IRNet for handling calculation SQL queries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank three anonymous reviewers for their helpful feedback and discussion on this work. Zhenghua Li and Min Zhang were supported by National Natural Science Foundation of China (Grant No. 61525205, 61876116), and a Project Funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patwa-etal-2020-semeval","url":"https:\/\/aclanthology.org\/2020.semeval-1.100.pdf","title":"SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets","abstract":"In this paper, we present the results of the SemEval-2020 Task 9 on Sentiment Analysis of Code-Mixed Tweets (SentiMix 2020). We also release and describe our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora annotated with word-level language identification and sentence-level sentiment labels. These corpora are comprised of 20K and 19K examples, respectively. The sentiment labels are Positive, Negative, and Neutral. SentiMix attracted 89 submissions in total including 61 teams that participated in the Hinglish contest and 28 submitted systems to the Spanglish competition. The best performance achieved was 75.0% F1 score for Hinglish and 80.6% F1 for Spanglish. 
We observe that BERT-like models and ensemble methods are the most common and successful approaches among the participants.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saravani-etal-2021-investigation","url":"https:\/\/aclanthology.org\/2021.insights-1.15.pdf","title":"An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Identification","abstract":"In natural language understanding, topics that touch upon figurative language and pragmatics are notably difficult. We probe a novel use of locally aggregated descriptors, specifically an architecture called NeXtVLAD, motivated by its accomplishments in computer vision, and achieve tremendous success in the FigLang2020 sarcasm detection task. The reported F1 score of 93.1% is 14% higher than the next best result. We specifically investigate the extent to which the novel architecture is responsible for this boost, and find that it does not provide statistically significant benefits. Deep learning approaches are expensive, and we hope our insights highlighting the lack of benefits from introducing a resource-intensive component will aid future research to distill the effective elements from long and complex pipelines, thereby providing a boost to the wider research community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by funds from U.S. National Science Foundation (NSF) under award number CNS 2027750, CNS 1822118, and SES 1834597, and from NIST, Statnett, Cyber Risk Research, AMI, and ARL.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ishihara-etal-2018-neural","url":"https:\/\/aclanthology.org\/N18-1047.pdf","title":"Neural Tensor Networks with Diagonal Slice Matrices","abstract":"Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2016-system","url":"https:\/\/aclanthology.org\/W16-4608.pdf","title":"System Description of bjtu\\_nlp Neural Machine Translation System","abstract":"This paper presents our machine translation system that we developed for the WAT2016 evaluation tasks of ja-en, ja-zh, en-ja, zh-ja, JPCja-en, JPCja-zh, JPCen-ja, JPCzh-ja. 
We build our system on an encoder-decoder framework by integrating a recurrent neural network (RNN) and a gated recurrent unit (GRU), and we also adopt an attention mechanism to address the problem of information loss. Additionally, we propose a simple translation-specific approach to resolve the unknown word translation problem. Experimental results show that our system performs better than the baseline statistical machine translation (SMT) systems in each task. Moreover, they show that our proposed approach to unknown word translation effectively improves translation results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ma-etal-2017-detect","url":"https:\/\/aclanthology.org\/P17-1066.pdf","title":"Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning","abstract":"How does fake news go viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.","label_nlp4sg":1,"task":["identifying rumors"],"method":["Kernel Learning","propagation trees"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"This work is partly supported by General Research Fund of Hong Kong (14232816). We would like to thank anonymous reviewers for the insightful comments.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"sennrich-etal-2013-exploiting","url":"https:\/\/aclanthology.org\/R13-1079.pdf","title":"Exploiting Synergies Between Open Resources for German Dependency Parsing, POS-tagging, and Morphological Analysis","abstract":"We report on the recent development of ParZu, a German dependency parser. We discuss the effect of POS tagging and morphological analysis on parsing performance, and present novel ways of improving performance of the components, including the use of morphological features for POS-tagging, the use of syntactic information to select good POS sequences from an n-best list, and using parsed text as training data for POS tagging and statistical parsing. 
We also describe our efforts towards reducing the dependency on restrictively licensed and closed-source NLP resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the Swiss National Science Foundation under grant 105215_126999.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hopkins-etal-2017-beyond","url":"https:\/\/aclanthology.org\/D17-1083.pdf","title":"Beyond Sentential Semantic Parsing: Tackling the Math SAT with a Cascade of Tree Transducers","abstract":"We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions: the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system (called EUCLID) propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43% recall and 91% precision) on SAT algebra word problems. We also apply EUCLID to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9% to 77.0%.","label_nlp4sg":1,"task":["Sentential Semantic Parsing"],"method":["Tree Transducers"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Luke Zettlemoyer, Jayant Krishnamurthy, Oren Etzioni, and the anonymous reviewers for valuable feedback on earlier drafts of the paper.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"becker-heckmann-2000-parsing","url":"https:\/\/aclanthology.org\/2000.iwpt-1.29.pdf","title":"Parsing Mildly Context-sensitive RMS","abstract":"We introduce Recursive Matrix Systems (RMS) which encompass mildly context-sensitive formalisms and present efficient parsing algorithms for linear and context-free variants of RMS. The time complexities are O(n^{2h+1}) and O(n^{3h}), respectively, where h is the height of the matrix. It is possible to represent Tree Adjoining Grammars (TAG [1], MC-TAG [2], and R-TAG [3]) as RMS uniformly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"evert-2014-distributional","url":"https:\/\/aclanthology.org\/C14-2024.pdf","title":"Distributional Semantics in R with the wordspace Package","abstract":"This paper introduces the wordspace package, which turns Gnu R into an interactive laboratory for research in distributional semantics. 
The package includes highly efficient implementations of a carefully chosen set of key functions, allowing it to scale up to real-life data sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"asgari-etal-2020-topic","url":"https:\/\/aclanthology.org\/2020.nlpmc-1.9.pdf","title":"Topic-Based Measures of Conversation for Detecting Mild Cognitive Impairment","abstract":"Conversation is a complex cognitive task that engages multiple aspects of cognitive functions to remember the discussed topics, monitor the semantic and linguistic elements, and recognize others' emotions. In this paper, we propose a computational method based on the lexical coherence of consecutive utterances to quantify topical variations in semi-structured conversations of older adults with cognitive impairments. Extracting the lexical knowledge of conversational utterances, our method generates a set of novel conversational measures that indicate underlying cognitive deficits among subjects with mild cognitive impairment (MCI). Our preliminary results verify the utility of the proposed conversation-based measures in distinguishing MCI from healthy controls.","label_nlp4sg":1,"task":["Detecting Mild Cognitive Impairment"],"method":["Topic - Based Measures"],"goal1":"Good Health and Well-Being","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"This work was supported by Oregon Roybal Center for Aging and Technology Pilot Program award P30 AG008017-30 in addition to NIH-NIA Aging awards R01-AG051628, and R01-AG056102.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saito-1992-interactive","url":"https:\/\/aclanthology.org\/C92-3165.pdf","title":"Interactive Speech Understanding","abstract":"This paper introduces a robust interactive method for speech understanding. The generalized LR parsing is enhanced in this approach. Parsing proceeds from left to right correcting minor errors. When a very noisy portion is detected, the parser skips that portion using a fake nonterminal symbol. The unidentified portion is resolved by re-utterance of that portion which is parsed very efficiently by using the parse record of the first utterance. The user does not have to speak the whole sentence again. This method is also capable of handling unknown words, which is important in practical systems. Detected unknown words can be incrementally incorporated into the dictionary after the interaction with the user. A pilot system has shown great effectiveness of this approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-kok-etal-2017-distributional","url":"https:\/\/aclanthology.org\/W17-7603.pdf","title":"Distributional regularities of verbs and verbal adjectives: Treebank evidence and broader implications","abstract":"Word formation processes such as derivation and compounding yield realizations of lexical roots in different parts of speech and in different syntactic environments. 
Using verbal adjectives as a case study and treebanks of Dutch and German as data sources, similarities and divergences in syntactic distributions across different realizations of lexical roots are examined and the implications for computational modeling and for treebank construction are discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Financial support for the research reported in this paper was provided by the German Research Foundation (DFG) as part of the Collaborative Research Center \"The Construction of Meaning\" (SFB 833), project A3.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rijhwani-etal-2020-soft","url":"https:\/\/aclanthology.org\/2020.acl-main.722.pdf","title":"Soft Gazetteers for Low-Resource Named Entity Recognition","abstract":"Traditional named entity recognition models use gazetteers (lists of entities) as features to improve performance. Although modern neural network models do not require such handcrafted features for strong performance, recent work (Wu et al., 2018) has demonstrated their utility for named entity recognition on English data. However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages. To address this problem, we propose a method of \"soft gazetteers\" that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking. Our experiments on four low-resource languages show an average improvement of 4 points in F1 score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ziering-van-der-plas-2014-good","url":"https:\/\/aclanthology.org\/C14-1099.pdf","title":"What good are `Nominalkomposita' for `noun compounds': Multilingual Extraction and Structure Analysis of Nominal Compositions using Linguistic Restrictors","abstract":"Finding a definition of compoundhood that is cross-lingually valid is a non-trivial task as shown by linguistic literature. We present an iterative method for defining and extracting English noun compounds in a multilingual setting. We show how linguistic criteria can be used to extract compounds automatically and vice versa how the results of this extraction can shed new lights on linguistic theories about compounding. The extracted compound nouns and their multilingual contexts are a rich source that serves several purposes. In an additional case study we show how the database serves to predict the internal structure of tripartite noun compounds using spelling variations across languages, which leads to a precision of over 91%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded and supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) as part of the SFB 732. We thank the anonymous reviewers for their comments. 
We also thank Gianina Iordachioaia for her helpful input and interesting discussion.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yan-etal-2020-global","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.331.pdf","title":"Global Bootstrapping Neural Network for Entity Set Expansion","abstract":"Bootstrapping for entity set expansion (ESE) has been studied for a long period, which expands new entities using only a few seed entities as supervision. Recent end-to-end bootstrapping approaches have shown their advantages in information capturing and bootstrapping process modeling. However, due to the sparse supervision problem, previous end-to-end methods often only leverage information from near neighborhoods (local semantics) rather than those propagated from the co-occurrence structure of the whole corpus (global semantics). To address this issue, this paper proposes Global Bootstrapping Network (GBN) with the \"pre-training and fine-tuning\" strategies for effective learning. Specifically, it contains a global-sighted encoder to capture and encode both local and global semantics into entity embedding, and an attention-guided decoder to sequentially expand new entities based on these embeddings. The experimental results show that the GBN learned by \"pre-training and fine-tuning\" strategies achieves state-of-the-art performance on two bootstrapping datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alexeeva-etal-2020-mathalign","url":"https:\/\/aclanthology.org\/2020.lrec-1.269.pdf","title":"MathAlign: Linking Formula Identifiers to their Contextual Natural Language Descriptions","abstract":"Extending machine reading approaches to extract mathematical concepts and their descriptions is useful for a variety of tasks, ranging from mathematical information retrieval to increasing accessibility of scientific documents for the visually impaired. This entails segmenting mathematical formulae into identifiers and linking them to their natural language descriptions. We propose a rule-based approach for this task, which extracts LaTeX representations of formula identifiers and links them to their in-text descriptions, given only the original PDF and the location of the formula of interest. We also present a novel evaluation dataset for this task, as well as the tool used to create it.","label_nlp4sg":1,"task":["Linking Formula Identifiers to their Contextual Natural Language Descriptions"],"method":["rule - based approach"],"goal1":"Industry, Innovation and Infrastructure","goal2":"Quality Education","goal3":null,"acknowledgments":"We thank the anonymous reviewers for their constructive feedback. This work is supported by the Defense Advanced Research Projects Agency (DARPA) as part of the Automated Scientific Knowledge Extraction (ASKE) program under agreement number HR00111990011. Marco Valenzuela-Esc\u00e1rcega declares a financial interest in LUM.AI. 
This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barreiro-cabral-2009-reescreve","url":"https:\/\/aclanthology.org\/2009.mtsummit-btm.1.pdf","title":"ReEscreve: a Translator-friendly Multi-purpose Paraphrasing Software Tool","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shahmohammadi-etal-2021-learning","url":"https:\/\/aclanthology.org\/2021.conll-1.12.pdf","title":"Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training","abstract":"Language grounding aims at linking the symbolic representation of language (e.g., words) into the rich perceptual knowledge of the outside world. The general approach is to embed both textual and visual information into a common space, the grounded space, confined by an explicit relationship. We argue that since concrete and abstract words are processed differently in the brain, such approaches sacrifice the abstract knowledge obtained from textual statistics in the process of acquiring perceptual information. The focus of this paper is to solve this issue by implicitly grounding the word embeddings. Rather than learning two mappings into a joint space, our approach integrates modalities by implicit alignment. This is achieved by learning a reversible mapping between the textual and the grounded space by means of multi-task training. Intrinsic and extrinsic evaluations show that our way of visual grounding is highly beneficial for both abstract and concrete words. Our embeddings are correlated with human judgments and outperform previous works using pretrained word embeddings on a wide range of benchmarks. Our grounded embeddings are publicly available here.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by EXC number 2064\/1 -Project number 390727645, as well as by the German Federal Ministry of Education and Research (BMBF): T\u00fcbingen AI Center, FKZ: 01IS18039A. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Hassan Shahmohammadi. The third author was supported by ERC-WIDE (European Research Council -Wide Incremental learning with Discrimination nEtworks), grant number 742545.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bhatta-etal-2020-nepali","url":"https:\/\/aclanthology.org\/2020.rocling-1.23.pdf","title":"Nepali Speech Recognition Using CNN, GRU and CTC","abstract":"Communication is an important part of life. To use communication technology efficiently, we need to know how to use them or how to instruct these devices to perform tasks. Automatic speech recognition plays an important role in interaction with the technology. Nepali speech recognition involves the conversion of Nepali speech to its correct Nepali transcriptions. The proposed model consists of CNN, GRU and CTC network. 
Features are extracted from the raw audio using the MFCC algorithm. The CNN learns high-level features, the GRU constructs the acoustic model, and CTC is responsible for decoding. The dataset, provided by Open Speech and Language Resources, consists of 18 female speakers. The resulting model achieves a WER of 11%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhelezniak-etal-2019-correlations","url":"https:\/\/aclanthology.org\/D19-1008.pdf","title":"Correlations between Word Vector Sets","abstract":"Similarity measures based purely on word embeddings are comfortably competing with much more sophisticated deep learning and expert-engineered systems on unsupervised semantic textual similarity (STS) tasks. In contrast to commonly used geometric approaches, we treat a single word embedding as e.g. 300 observations from a scalar random variable. Using this paradigm, we first illustrate that similarities derived from elementary pooling operations and classic correlation coefficients yield excellent results on standard STS benchmarks, outperforming many recently proposed methods while being much faster and trivial to implement. Next, we demonstrate how to avoid pooling operations altogether and compare sets of word embeddings directly via correlation operators between reproducing kernel Hilbert spaces. Just like cosine similarity is used to compare individual word vectors, we introduce a novel application of the centered kernel alignment (CKA) as a natural generalisation of squared cosine similarity for sets of word vectors. Likewise, CKA is very easy to implement and enjoys very strong empirical results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the three anonymous reviewers for their useful feedback and suggestions.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"belinkov-etal-2015-vectorslu","url":"https:\/\/aclanthology.org\/S15-2048.pdf","title":"VectorSLU: A Continuous Word Vector Approach to Answer Selection in Community Question Answering Systems","abstract":"Continuous word and phrase vectors have proven useful in a number of NLP tasks. Here we describe our experience using them as a source of features for the SemEval-2015 task 3, consisting of two community question answering subtasks: Answer Selection for categorizing answers as potential, good, and bad with regards to their corresponding questions; and YES\/NO inference for predicting a yes, no, or unsure response to a YES\/NO question using all of its good answers. Our system ranked 6th and 1st in the English answer selection and YES\/NO inference subtasks respectively, and 2nd in the Arabic answer selection subtask.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Qatar Computing Research Institute (QCRI). 
We would like to thank Alessandro Moschitti, Preslav Nakov, Llu\u00eds M\u00e0rquez, Massimo Nicosia, and other members of the QCRI Arabic Language Technologies group for their collaboration on this project.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2018-learning","url":"https:\/\/aclanthology.org\/D18-1210.pdf","title":"Learning Context-Sensitive Convolutional Filters for Text Processing","abstract":"Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach of using a small meta network to learn context-aware convolutional filters for text processing. The role of meta network is to abstract the contextual information of a sentence or document into a set of input-aware filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-aware filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-aware filters, we further validate and rationalize the effectiveness of proposed framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by DARPA, DOE, NIH, ONR and NSF.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"camacho-collados-etal-2019-relational","url":"https:\/\/aclanthology.org\/P19-1318.pdf","title":"Relational Word Embeddings","abstract":"While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embedding. Such strategies may not be optimal, however, as they are limited by the coverage of available resources and conflate similarity with other forms of relatedness. As an alternative, in this paper we propose to encode relational knowledge in a separate word embedding, which is aimed to be complementary to a given standard word embedding. This relational word embedding is still learned from co-occurrence statistics, and can thus be used even when no external knowledge base is available. Our analysis shows that relational word vectors do indeed capture information that is complementary to what is encoded in standard word embeddings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"
Jose Camacho-Collados and Steven Schockaert were supported by ERC Starting Grant 637277.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wei-etal-2020-uncertainty","url":"https:\/\/aclanthology.org\/2020.emnlp-main.216.pdf","title":"Uncertainty-Aware Semantic Augmentation for Neural Machine Translation","abstract":"As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other. However, the dominant methods for NMT only observe one of them from the parallel corpora for the model training but have to deal with adequate variations under the same meaning at inference. This leads to a discrepancy of the data distribution between the training and the inference phases. To address this problem, we propose uncertainty-aware semantic augmentation, which explicitly captures the universal semantic information among multiple semantically-equivalent source sentences and enhances the hidden representations with this information for better translations. Extensive experiments on various translation tasks reveal that our approach significantly outperforms the strong baselines and the existing methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank all of the anonymous reviewers for their invaluable suggestions and helpful comments. This work is supported by the National Key Research and Development Programs under Grant No. 2017YFB0803301, No. 2016YFB0801003 and No. 2018YFB1403202. ","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2019-neural","url":"https:\/\/aclanthology.org\/J19-2004.pdf","title":"Neural Models of Text Normalization for Speech Applications","abstract":"Machine learning, including neural network techniques, has been applied to virtually every domain in natural language processing. One problem that has been somewhat resistant to effective machine learning solutions is text normalization for speech applications such as text-to-speech synthesis (TTS). In this application, one must decide, for example, that 123 is verbalized as one hundred twenty three in 123 pages but as one twenty three in 123 King Ave. For this task, state-of-the-art industrial systems depend heavily on handwritten language-specific grammars. We propose neural network models that treat text normalization for TTS as a sequence-to-sequence problem, in which the input is a text token in context, and the output is the verbalization of that token. We find that the most effective model, in accuracy and efficiency, is one where the sentential context is computed once and the results of that computation are combined with the computation of each token in sequence to compute the verbalization. This model allows for a great deal of flexibility in terms of representing the context, and also allows us to integrate tagging and segmentation into the process. These models perform very well overall, but occasionally they will predict wildly inappropriate verbalizations, such as reading 3 cm as three kilometers. Although rare, such verbalizations are a major issue for TTS applications. 
We thus use finite-state covering grammars to guide the neural models, either during training and decoding, or just during decoding, away from such \"unrecoverable\" errors. Such grammars can largely be learned from data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Navdeep Jaitly for his collaboration in the early stages of this project. We thank Michael Riley and colleagues at DeepMind for much discussion as this work evolved. We acknowledge audiences at Johns Hopkins University, the City University of New York, Gothenburg University, and Chalmers University for comments and feedback on presentations of this work. Alexander Gutkin assisted with the initial data preparation. The initial tokenization phase of our covering grammars for measure expressions was augmented with grammars developed by Mark Epstein for information extraction. Finally, Shankar Kumar provided extensive help with the transformer models including training reported in Section 4.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kilickaya-etal-2016-leveraging","url":"https:\/\/aclanthology.org\/W16-3204.pdf","title":"Leveraging Captions in the Wild to Improve Object Detection","abstract":"In this study, we explore whether the captions in the wild can boost the performance of object detection in images. Captions that accompany images usually provide significant information about the visual content of the image, making them an important resource for image understanding. However, captions in the wild are likely to include numerous types of noises which can hurt visual estimation. In this paper, we propose data-driven methods to deal with the noisy captions and utilize them to improve object detection. We show how a pre-trained state-of-theart object detector can take advantage of noisy captions. Our experiments demonstrate that captions provide promising cues about the visual content of the images and can aid in improving object detection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by The Scientific and Technological Research Council of Turkey (TUBITAK), Career Development Award 113E116.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lefer-grabar-2014-evaluative","url":"https:\/\/aclanthology.org\/2014.lilt-11.7.pdf","title":"Evaluative prefixes in translation: From automatic alignment to semantic categorization","abstract":"This article aims to assess to what extent translation can shed light on the semantics of French evaluative prefixation by adopting No\u00ebl (2003)'s 'translations as evidence for semantics' approach. In French, evaluative prefixes can be classified along two dimensions (cf. (Fradin and Montermini 2009)): (1) a quantity dimension along a maximum\/minimum axis and the semantic values big and small, and (2) a quality dimension along a positive\/negative axis and the values good (excess; higher degree) and bad (lack; lower degree). In order to provide corpus-based insights into this semantic categorization, we analyze French evaluative prefixes alongside their English translation equivalents in a parallel corpus. 
To do so, we focus on periphrastic translations, as they are likely to 'spell out' the meaning of the French prefixes. The data used were extracted from the Europarl parallel corpus (Koehn 2005; Cartoni and Meyer 2012). Using a tailormade program, we first aligned the French prefixed words with the corresponding word(s) in English target sentences, before proceeding to the evaluation of the aligned sequences and the manual analysis of the bilingual data. Results confirm that translation data can be used as evidence for semantics in morphological research and help refine existing semantic descriptions of evaluative prefixes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The present study was carried out within the WOG Contragram framework. The two authors thank the anonymous reviewers and the guesteditors of the special issue for their helpful comments and suggestions.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hobbs-kameyama-1990-translation","url":"https:\/\/aclanthology.org\/C90-3028.pdf","title":"Translation by Abduction","abstract":"Machine Translation and World Knowledge. Many existing approaches to machine translation take for granted that the information presented in the output is found somewhere in the input, and, moreover, that such information should be expressed at a single representational level, say, in terms of the parse trees or of \"semantic\" mssertions. Languages, however, not only express the equivalent information by drastically different linguistic means, but also often disagree in what distinctions should be expressed linguistically at all. For example, in translating from Japanese to English, it is often necessary to supply determiners for noun phr;tses, and this ira general cannot be (lone without deep understanding of the source ~ text. Similarly, in translating fl'om English to Japanese, politeness considerations, which in English are implicit in tile social situation and explicit in very diffuse wws ira, for examl)le, tile heavy use of hypotheticals, must be realized grammatically in Japanese. Machine translation therefore requires that the appropriate infer-(noes be drawn and that the text be interpreted to stone depth (see Oviatt, 1988) . Recently, an elegant approach to inference in discourse interpretation has been developed at a number of sites (e.g., ltobbs et al., 1988; Charniak and Goldman, 1988; Norvig, 1987) , all based on tim notion of abduction, and we have begun to explore its potential application to machine translation. We argue that this approach provides the possibility of deep reasoning and of mapping between the languages at a variety of levels. (See also Kaplan et al., 1988, on the latter point.) 1\nInterpretation as Abduction. Abductive inferenee is inference to the best explanation. The easiest way to understand it is to compare it with two words it rhymes with---deduction and induction. Deduction is; when from a specific fa.ct p(A) and a gen- When the observational evidence, the thing to be interpreted, is a natural language text, we must provide the best explanation of why the text would be true. In the TACITUS Project at SRI, we have developed a scheme for abductive inference thatyields a significant simplification in the description of interpretation processes and a significant extension of the range of phenomena that can be captured. 
It has been implemented in the TACITUS System (Hobbs et al., 1990; Stickel, 1989) and has been applied to several varieties of text. The framework suggests the integrated treatment of syntax, semantics, and pragmatics described below. Our principal aim in this paper is to examine the utility of this framework as a model for translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stefanik-etal-2022-adaptor","url":"https:\/\/aclanthology.org\/2022.acl-demo.26.pdf","title":"Adaptor: Objective-Centric Adaptation Framework for Language Models","abstract":"Progress in natural language processing research is catalyzed by the possibilities given by the widespread software frameworks. This paper introduces the AdaptOr library 1 that transposes the traditional model-centric approach composed of pre-training + fine-tuning steps to objective-centric approach, composing the training process by applications of selected objectives. We survey research directions that can benefit from enhanced objective-centric experimentation in multi-task training, custom objectives development, dynamic training curricula, or domain adaptation. AdaptOr aims to ease the reproducibility of these research directions in practice. Finally, we demonstrate the practical applicability of AdaptOr in selected unsupervised domain adaptation scenarios.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pirrelli-battista-1996-monotonic","url":"https:\/\/aclanthology.org\/C96-1015.pdf","title":"Monotonic Paradigmatic Schemata in Italian Verb Inflection","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tyers-etal-2017-ud","url":"https:\/\/aclanthology.org\/W17-7604.pdf","title":"UD Annotatrix: An annotation tool for Universal Dependencies","abstract":"In this paper we introduce the UD Annotatrix annotation tool for manual annotation of Universal Dependencies. This tool has been designed with the aim that it should be tailored to the needs of the Universal Dependencies (UD) community, including that it should operate in fully-offline mode, and is freely-available under the GNU GPL licence. 1 In this paper, we provide some background to the tool, an overview of its development, and background on how it works. We compare it with some other widely-used tools which are used for Universal Dependencies annotation, describe some features unique to UD Annotatrix, and finally outline some avenues for future work and provide a few concluding remarks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the Google Summer of Code and the Apertium project for supporting the development of Annotatrix. In addition we would like to thank Filip Ginter and Alexandre Rademaker for extremely helpful discussions and suggestions. Tai Warner, Sushain Cherivirala and Kevin Unhammer have also contributed code. 
And finally, we would like to thank our users, in particular Jack Rueter and Katya Aplonova, for their helpful bug reports and encouragement.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2021-winnowing-knowledge","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.100.pdf","title":"Winnowing Knowledge for Multi-choice Question Answering","abstract":"We tackle multi-choice question answering. Acquiring related commonsense knowledge to the question and options facilitates the recognition of the correct answer. However, the current reasoning models suffer from the noises in the retrieved knowledge. In this paper, we propose a novel encoding method which is able to conduct interception and soft filtering. This contributes to the harvesting and absorption of representative information with less interference from noises. We experiment on CommonsenseQA. Experimental results illustrate that our method yields substantial and consistent improvements compared to the strong Bert, RoBERTa and Albert-based baselines. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all reviewers for their insightful comments, as well as the great efforts our colleagues have made so far. This work is supported by the national Natural Science Foundation of China (NSFC) and Major National Science and Technology project of China, via Grant Nos.62076174, 61836007, 2020YBF1313601.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2010-resolving","url":"https:\/\/aclanthology.org\/D10-1085.pdf","title":"Resolving Event Noun Phrases to Their Verbal Mentions","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tendeau-1997-earley","url":"https:\/\/aclanthology.org\/1997.iwpt-1.23.pdf","title":"An Earley Algorithm for Generic Attribute Augmented Grammars and Applications","abstract":"We describe an extension of Earley's algorithm which computes the decoration of a shared forest in a generic domain. Attribute computations are defined by a morphism from leftmost derivations to the generic domain, which leaves the computations independent from (even if guided by) the parsing strategy. The approach is illustrated by the example of a definite clause grammar, seen as CF-grammars decorated by attributes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"smith-2003-automatic","url":"https:\/\/aclanthology.org\/N03-4012.pdf","title":"Automatic Extraction of Semantic Networks from Text using Leximancer","abstract":"Leximancer is a software system for performing conceptual analysis of text data in a largely language independent manner. The system is modelled on Content Analysis and provides unsupervised and supervised analysis using seeded concept classifiers. 
Unsupervised ontology discovery is a key component.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jagarlamudi-etal-2011-bilingual","url":"https:\/\/aclanthology.org\/P11-2026.pdf","title":"From Bilingual Dictionaries to Interlingual Document Representations","abstract":"Mapping documents into an interlingual representation can help bridge the language barrier of a cross-lingual corpus. Previous approaches use aligned documents as training data to learn an interlingual representation, making them sensitive to the domain of the training data. In this paper, we learn an interlingual representation in an unsupervised manner using only a bilingual dictionary. We first use the bilingual dictionary to find candidate document alignments and then use them to find an interlingual representation. Since the candidate alignments are noisy, we develop a robust learning algorithm to learn the interlingual representation. We show that bilingual dictionaries generalize to different domains better: our approach gives better performance than either a word by word translation method or Canonical Correlation Analysis (CCA) trained on a different domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"choi-palmer-2011-getting","url":"https:\/\/aclanthology.org\/P11-2121.pdf","title":"Getting the Most out of Transition-based Dependency Parsing","abstract":"This paper suggests two ways of improving transition-based, non-projective dependency parsing. First, we add a transition to an existing non-projective parsing algorithm, so it can perform either projective or non-projective parsing as needed. Second, we present a bootstrapping technique that narrows down discrepancies between gold-standard and automatic parses used as features. The new addition to the algorithm shows a clear advantage in parsing speed. The bootstrapping technique gives a significant improvement to parsing accuracy, showing near state-of-the-art performance with respect to other parsing approaches evaluated on the same data set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the National Science Foundation Grants CISE-IIS-RI-0910992, Richer Representations for Machine Translation, a subcontract from the Mayo Clinic and Harvard Children's Hospital based on a grant from the ONC, 90TR0002\/01, Strategic Health Advanced Research Project Area 4: Natural Language Processing, and a grant from the Defense Advanced Research Projects Agency (DARPA\/IPTO) under the GALE program, DARPA\/CMO Contract No. HR0011-06-C-0022, subcontract from BBN, Inc. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pecina-schlesinger-2006-combining","url":"https:\/\/aclanthology.org\/P06-2084.pdf","title":"Combining Association Measures for Collocation Extraction","abstract":"We introduce the possibility of combining lexical association measures and present empirical results of several methods employed in automatic collocation extraction. First, we present a comprehensive summary overview of association measures and their performance on manually annotated data evaluated by precision-recall graphs and mean average precision. Second, we describe several classification methods for combining association measures, followed by their evaluation and comparison with individual measures. Finally, we propose a feature selection algorithm significantly reducing the number of combined measures with only a small performance degradation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Ministry of Education of the Czech Republic, projects MSM 0021620838 and LC 536. We would like to thank our advisor Jan Haji\u010d, our colleagues, and anonymous reviewers for their valuable comments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stojanovski-etal-2020-contracat","url":"https:\/\/aclanthology.org\/2020.coling-main.417.pdf","title":"ContraCAT: Contrastive Coreference Analytical Templates for Machine Translation","abstract":"Recent high scores on pronoun translation using context-aware neural machine translation have suggested that current approaches work well. ContraPro is a notable example of a contrastive challenge set for English\u2192German pronoun translation. The high scores achieved by transformer models may suggest that they are able to effectively model the complicated set of inferences required to carry out pronoun translation. This entails the ability to determine which entities could be referred to, identify which entity a source-language pronoun refers to (if any), and access the target-language grammatical gender for that entity. We first show through a series of targeted adversarial attacks that in fact current approaches are not able to model all of this information well. Inserting small amounts of distracting information is enough to strongly reduce scores, which should not be the case. We then create a new template test set ContraCAT, designed to individually assess the ability to handle the specific steps necessary for successful pronoun translation. Our analyses show that current approaches to context-aware NMT rely on a set of surface heuristics, which break down when translations require real reasoning. We also propose an approach for augmenting the training data, with some improvements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement \u2116 640550). This work was also supported by DFG (grant FR 2829\/4-1). 
We thank Alexandra Chronopoulou for the valuable comments and helpful feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ueki-etal-1999-sharing","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.80.pdf","title":"Sharing syntactic structures","abstract":"Bracketed corpora are a very useful resource for natural language processing, but hard to build efficiently, leading to quantitative insufficiency for practical use. Disparities in morphological information, such as word segmentation and part-of-speech tag sets, are also troublesome. An application specific to a particular corpus often cannot be applied to another corpus. In this paper, we sketch out a method to build a corpus that has a fixed syntactic structure but varying morphological annotation based on the different tag set schemes utilized. Our system uses a two layered grammar, one layer of which is made up of replaceable tag-set-dependent rules while the other has no such tag set dependency. The input sentences of our system are bracketed corresponding to structural information of corpus. The parser can work using any tag set and grammar, and using the same input bracketing, we obtain corpus that shares partial syntactic structure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"krotova-etal-2020-joint","url":"https:\/\/aclanthology.org\/2020.lrec-1.543.pdf","title":"A Joint Approach to Compound Splitting and Idiomatic Compound Detection","abstract":"Applications such as machine translation, speech recognition, and information retrieval require efficient handling of noun compounds as they are one of the possible sources for out-of-vocabulary (OOV) words. In-depth processing of noun compounds requires not only splitting them into smaller components (or even roots) but also the identification of instances that should remain unsplitted as they are of idiomatic nature. We develop a twofold deep learning-based approach of noun compound splitting and idiomatic compound detection for the German language that we train using a newly collected corpus of annotated German compounds. Our neural noun compound splitter operates on a sub-word level and outperforms the current state of the art by about 5%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moore-etal-1997-commandtalk","url":"https:\/\/aclanthology.org\/A97-1001.pdf","title":"CommandTalk: A Spoken-Language Interface for Battlefield Simulations","abstract":"CommandTalk is a spoken-language interface to battlefield simulations that allows the use of ordinary spoken English to create forces and control measures, assign missions to forces, modify missions during execution, and control simulation system functions. 
CommandTalk combines a number of separate components integrated through the use of the Open Agent Architecture, including the Nuance speech recognition system, the Gemini natural-language parsing and interpretation system, a contextual-interpretation module, a \"push-to-talk\" agent, the ModSAF battlefield simulator, and \"Start-It\" (a graphical process-spawning agent). CommandTalk is installed at a number of Government and contractor sites, including NRaD and the Marine Corps Air Ground Combat Center. It is currently being extended to provide exercise-time control of all simulated U.S. forces in DARPA's STOW 97 demonstration. Put Checkpoint 1 at 937 965. Create a point called Checkpoint 2 at 930 960. Objective Alpha is 92 96. Charlie 4 5, at my command, advance in a column to Checkpoint 1. Next, proceed to Checkpoint 2. Then assault Objective Alpha. Charlie 4 5, move out. With the simulation under way, the user can exercise direct control over the simulated forces by giving commands such as the following for immediate execution: Charlie 4 5, speed up. Change formation to echelon right. Get in a line. Withdraw to Checkpoint 2. Examples of voice commands for controlling ModSAF system functions include the following: Show contour lines. Center on M1 platoon.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"higy-etal-2021-discrete","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.11.pdf","title":"Discrete representations in neural models of spoken language","abstract":"The distributed and continuous representations used by neural networks are at odds with representations employed in linguistics, which are typically symbolic. Vector quantization has been proposed as a way to induce discrete neural representations that are closer in nature to their linguistic counterparts. However, it is not clear which metrics are the best-suited to analyze such discrete representations. We compare the merits of four commonly used metrics in the context of weakly supervised models of spoken language. We compare the results they show when applied to two different models, while systematically studying the effect of the placement and size of the discretization layer. We find that different evaluation regimes can give inconsistent results. While we can attribute them to the properties of the different metrics in most cases, one point of concern remains: the use of minimal pairs of phoneme triples as stimuli disadvantages larger discrete unit inventories, unlike metrics applied to complete utterances. 
Furthermore, while in general vector quantization induces representations that correlate with units posited in linguistics, the strength of this correlation is only moderate.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Bertrand Higy was supported by a NWO\/E-Science Center grant number 027.018.G03.We would also like to thank multiple anonymous reviewers for their useful comments which helped us improve this paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chakraborti-tendulkar-2013-parallels","url":"https:\/\/aclanthology.org\/W13-1916.pdf","title":"Parallels between Linguistics and Biology","abstract":"In this paper we take a fresh look at parallels between linguistics and biology. We expect that this new line of thinking will propel cross fertilization of two disciplines and open up new research avenues.","label_nlp4sg":1,"task":["Parallels between Linguistics and Biology"],"method":["Analysis"],"goal1":"Good Health and Well-Being","goal2":"Quality Education","goal3":null,"acknowledgments":"AVT is supported by Innovative Young Biotechnologist Award (IYBA) by Department of Biotechnology, Government of India.","year":2013,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"elmahdy-etal-2014-automatic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/434_Paper.pdf","title":"Automatic Long Audio Alignment and Confidence Scoring for Conversational Arabic Speech","abstract":"In this paper, a framework for long audio alignment for conversational Arabic speech is proposed. Accurate alignments help in many speech processing tasks such as audio indexing, speech recognizer acoustic model (AM) training, audio summarizing and retrieving, etc. We have collected more than 1,400 hours of conversational Arabic besides the corresponding human generated non-aligned transcriptions. Automatic audio segmentation is performed using a split and merge approach. A biased language model (LM) is trained using the corresponding text after a pre-processing stage. Because of the dominance of non-standard Arabic in conversational speech, a graphemic pronunciation model (PM) is utilized. The proposed alignment approach is performed in two passes. Firstly, a generic standard Arabic AM is used along with the biased LM and the graphemic PM in a fast speech recognition pass. In a second pass, a more restricted LM is generated for each audio segment, and unsupervised acoustic model adaptation is applied. The recognizer output is aligned with the processed transcriptions using Levenshtein algorithm. The proposed approach resulted in an initial alignment accuracy of 97.8-99.0% depending on the amount of disfluencies. A confidence scoring metric is proposed to accept\/reject aligner output. Using confidence scores, it was possible to reject the majority of mis-aligned segments resulting in alignment accuracy of 99.0-99.8% depending on the speech domain and the amount of disfluencies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This publication was made possible by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP) award number NPRP 09-410-1-069. 
Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund. We would like also to acknowledge the European Language Resources Association (ELRA) for providing us with MSA speech data resources used in AM training.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bhargava-kondrak-2011-pronounce","url":"https:\/\/aclanthology.org\/P11-1041.pdf","title":"How do you pronounce your name? Improving G2P with transliterations","abstract":"Grapheme-to-phoneme conversion (G2P) of names is an important and challenging problem. The correct pronunciation of a name is often reflected in its transliterations, which are expressed within a different phonological inventory. We investigate the problem of using transliterations to correct errors produced by state-of-the-art G2P systems. We present a novel re-ranking approach that incorporates a variety of score and n-gram features, in order to leverage transliterations from multiple languages. Our experiments demonstrate significant accuracy improvements when re-ranking is applied to n-best lists generated by three different G2P programs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Sittichai Jiampojamarn and Shane Bergsma for the very helpful discussions. This research was supported by the Natural Sciences and Engineering Research Council of Canada.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"specia-etal-2020-findings","url":"https:\/\/aclanthology.org\/2020.wmt-1.4.pdf","title":"Findings of the WMT 2020 Shared Task on Machine Translation Robustness","abstract":"We report the findings of the second edition of the shared task on improving robustness in Machine Translation (MT). The task aims to test current machine translation systems in their ability to handle challenges facing MT models to be deployed in the real world, including domain diversity and non-standard texts common in user generated content, especially in social media. We cover two language pairs-English-German and English-Japanese and provide test sets in zero-shot and few-shot variants. Participating systems are evaluated both automatically and manually, with an additional human evaluation for \"catastrophic errors\". We received 59 submissions by 11 participating teams from a variety of types of institutions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Lucia Specia was supported by funding from the Bergamot project (EU H2020 Grant No. 825303). We thank Facebook for funding the human evaluation. We thank Khetam Al Sharou for her help with","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-hovy-2014-sentiment","url":"https:\/\/aclanthology.org\/D14-1053.pdf","title":"Sentiment Analysis on the People's Daily","abstract":"We propose a semi-supervised bootstrapping algorithm for analyzing China's foreign relations from the People's Daily. Our approach addresses sentiment target clustering, subjective lexicons extraction and sentiment prediction in a unified framework. 
Different from existing algorithms in the literature, time information is considered in our algorithm through a hierarchical Bayesian model to guide the bootstrapping approach. We are hopeful that our approach can facilitate quantitative political analysis conducted by social scientists and politicians.","label_nlp4sg":1,"task":["Sentiment Analysis"],"method":["semi - supervised bootstrapping algorithm","hierarchical bayesian model"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors want to thank Bishan Yang and Claire Cardie for useful comments and discussions. The authors are thankful for suggestions offered by EMNLP reviewers.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"artiles-etal-2009-role","url":"https:\/\/aclanthology.org\/D09-1056.pdf","title":"The role of named entities in Web People Search","abstract":"The ambiguity of person names in the Web has become a new area of interest for NLP researchers. This challenging problem has been formulated as the task of clustering Web search results (returned in response to a person name query) according to the individual they mention. In this paper we compare the coverage, reliability and independence of a number of features that are potential information sources for this clustering task, paying special attention to the role of named entities in the texts to be clustered. Although named entities are used in most approaches, our results show that, independently of the Machine Learning or Clustering algorithm used, named entity recognition and classification per se only make a small contribution to solve the problem.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the Regional Government of Madrid, project MAVIR S0505-TIC0267.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"artola-etal-2002-class","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/200.pdf","title":"A Class Library for the Integration of NLP Tools: Definition and implementation of an Abstract Data Type Collection for the manipulation of SGML documents in a context of stand-off linguistic annotation","abstract":"In this paper we present a program library conceived and implemented to represent and manipulate the information exchanged in the process of integration of NLP tools. It is currently used to integrate the tools developed for Basque processing during the last ten years at our research group. In our opinion, the program library is general enough to be used in similar processes of integration of NLP tools or in the design of new applications built on them. The program library constitutes a class library that provides the programmer with the elements s\/he needs when manipulating SGML documents in a context of stand-off linguistic annotation, where linguistic analyses obtained at different phases (morphology, lemmatization, processing of multiword lexical units, surface syntax, and so on) are represented by well-defined typed feature structures. Due to the complexity of the information to be exchanged among the different tools, feature structures (FS) are used to represent it. 
Feature structures provide us with a well-formalized basis for the exchange of linguistic information among the different text analysis tools. Feature structures are coded in SGML following the TEI's DTD for FSs, and Feature-System Declarations (FSD) have been thoroughly specified. So, TEI-P3 conformant feature structures constitute the representation schema for the different documents that convey the information from one linguistic tool to the next in the language processing chain. The tools integrated so far are a lexical database, a tokenizer, a wide-coverage morphosyntactic analyzer, a general purpose tagger\/lemmatizer and a shallow syntactic parser. The type of information contained in the documents exchanged among these tools has been analyzed and characterized using a set of Abstract Data Types.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is being carried out in the project G19\/99, supported by the University of the Basque Country and by the Spanish Ministry.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guo-etal-2018-multi","url":"https:\/\/aclanthology.org\/D18-1498.pdf","title":"Multi-Source Domain Adaptation with Mixture of Experts","abstract":"We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank MIT NLP group and the anonymous reviewers for their helpful comments. We also thank Shiyu Chang and Mo Yu for insightful discussions on metric learning. This work is supported by the MIT-IBM Watson AI Lab. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"samuelsdorff-1967-application","url":"https:\/\/aclanthology.org\/C67-1017.pdf","title":"The Application of FORTRAN to Automatic Translation","abstract":"A multitude of problem-oriented programming languages are being created in order to facilitate programming in various fields. These languages have two advantages for non-programmers who need a computer for solving their problems: 1. they are not forced to spend much time on learning a complicated machine language; 2. they do not necessarily have to alter their programs when they are obliged to use a different machine. These advantages seem to justify the labour of writing a multitude of compilers that translate each problem-oriented language into the machine language of each machine.\nThe existence of various problem-oriented languages, however, makes it difficult for the users of computers in various fields to exchange their experience. 
In addition there is the danger that a certain machine possesses only a limited number of compilers, and that therefore a program may have to be rewritten when for some reason or other a different machine has to be used. The question therefore arises whether it is not possible to have the advantages of problem-oriented languages without multiplying their number.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1967,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"simmons-bennett-novak-1975-semantically","url":"https:\/\/aclanthology.org\/J75-2007.pdf","title":"Semantically Analyzing an English Subset for the Clowns Microworld","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1975,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roit-etal-2020-controlled","url":"https:\/\/aclanthology.org\/2020.acl-main.626.pdf","title":"Controlled Crowdsourcing for High-Quality QA-SRL Annotation","abstract":"Question-answer driven Semantic Role Labeling (QA-SRL) was proposed as an attractive open and natural flavour of SRL, potentially attainable from laymen. Recently, a large-scale crowdsourced QA-SRL corpus and a trained parser were released. Trying to replicate the QA-SRL annotation for new texts, we found that the resulting annotations were lacking in quality, particularly in coverage, making them insufficient for further research and evaluation. In this paper, we present an improved crowdsourcing protocol for complex semantic annotation, involving worker selection and training, and a data consolidation phase. Applying this protocol to QA-SRL yielded high-quality annotation with drastically higher coverage, producing a new gold evaluation dataset. We believe that our annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by an Intel Labs grant, the Israel Science Foundation grant 1951\/17 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heeringa-etal-2006-evaluation","url":"https:\/\/aclanthology.org\/W06-1108.pdf","title":"Evaluation of String Distance Algorithms for Dialectology","abstract":"We examine various string distance measures for suitability in modeling dialect distance, especially its perception. We find measures superior which do not normalize for word length, but which are sensitive to order. We likewise find evidence for the superiority of measures which incorporate a sensitivity to phonological context, realized in the form of n-grams, although we cannot identify which form of context (bigram, trigram, etc.) is best. However, we find no clear benefit in using gradual as opposed to binary segmental difference when calculating sequence distances.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We inspect an example to illustrate these issues. 
We compare the Frisian (Grouw), [mOlk@], with the Haarlem pronunciation [mEl@k]. The Levenshtein algorithm may align the pronunciations as follows:","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zenkel-etal-2021-automatic-bilingual","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.299.pdf","title":"Automatic Bilingual Markup Transfer","abstract":"We describe the task of bilingual markup transfer, which involves placing markup tags from a source sentence into a fixed target translation. This task arises in practice when a human translator generates the target translation without markup, and then the system infers the placement of markup tags. This task contrasts from previous work in which markup transfer is performed jointly with machine translation. We propose two novel metrics and evaluate several approaches based on unsupervised word alignments as well as a supervised neural sequence-to-sequence model. Our best approach achieves an average accuracy of 94.7% across six language pairs, indicating its potential usefulness for real-world localization tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"araki-etal-2014-evaluation","url":"https:\/\/aclanthology.org\/W14-2910.pdf","title":"Evaluation for Partial Event Coreference","abstract":"This paper proposes an evaluation scheme to measure the performance of a system that detects hierarchical event structure for event coreference resolution. We show that each system output is represented as a forest of unordered trees, and introduce the notion of conceptual event hierarchy to simplify the evaluation process. We enumerate the desiderata for a similarity metric to measure the system performance. We examine three metrics along with the desiderata, and show that metrics extended from MUC and BLANC are more adequate than a metric based on Simple Tree Matching.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA or the US government. We would like to thank anonymous reviewers for their helpful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-2009-use","url":"https:\/\/aclanthology.org\/D09-1134.pdf","title":"On the Use of Virtual Evidence in Conditional Random Fields","abstract":"Virtual evidence (VE), first introduced by (Pearl, 1988), provides a convenient way of incorporating prior knowledge into Bayesian networks. This work generalizes the use of VE to undirected graphical models and, in particular, to conditional random fields (CRFs). We show that VE can be naturally encoded into a CRF model as potential functions. More importantly, we propose a novel semisupervised machine learning objective for estimating a CRF model integrated with VE. 
The objective can be optimized using the Expectation-Maximization algorithm while maintaining the discriminative nature of CRFs. When evaluated on the CLASSIFIEDS data, our approach significantly outperforms the best known solutions reported on this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"swanson-etal-2014-identifying","url":"https:\/\/aclanthology.org\/W14-4323.pdf","title":"Identifying Narrative Clause Types in Personal Stories","abstract":"This paper describes work on automatically identifying categories of narrative clauses in personal stories written by ordinary people about their daily lives and experiences. We base our approach on Labov & Waletzky's theory of oral narrative which categorizes narrative clauses into subtypes, such as ORIENTATION, ACTION and EVALUATION. We describe an experiment where we annotate 50 personal narratives from weblogs and experiment with methods for achieving higher annotation reliability. We use the resulting annotated corpus to train a classifier to automatically identify narrative categories, achieving a best average F-score of .658, which rises to an F-score of .767 on the cases with the highest annotator agreement. We believe the identified narrative structure will enable new types of computational analysis of narrative discourse.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by NSF Grants IIS #1002921 and IIS #123855. The content of this publication does not necessarily reflect the position or policy of the government, and no official endorsement should be inferred.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ren-etal-2020-two","url":"https:\/\/aclanthology.org\/2020.coling-main.142.pdf","title":"A Two-phase Prototypical Network Model for Incremental Few-shot Relation Classification","abstract":"Relation Classification (RC) plays an important role in natural language processing (NLP). Current conventional supervised and distantly supervised RC models always make a closed-world assumption which ignores the emergence of novel relations in an open environment. To incrementally recognize the novel relations, current two solutions (i.e., retraining and lifelong learning) are designed but suffer from the lack of large-scale labeled data for novel relations. Meanwhile, prototypical network enjoys better performance on both fields of deep supervised learning and few-shot learning. However, it still suffers from the incompatible feature embedding problem when the novel relations come in. Motivated by them, we propose a two-phase prototypical network with prototype attention alignment and triplet loss to dynamically recognize the novel relations with a few support instances meanwhile without catastrophic forgetting. 
Extensive experiments are conducted to evaluate the effectiveness of our proposed model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Fundamental Research Funds for the Central Universities, SCUT (No.2017ZD048, D2182480), the National Key Research and Development Program of China, the Science and Technology Programs of Guangzhou (No.201704030076, 201802010027, 201902010046), National Natural Science Foundation of China (62076100) and the Hong Kong Research Grants Council (project no. C1031-18G).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ribeiro-etal-2020-mapping","url":"https:\/\/aclanthology.org\/2020.lrec-1.67.pdf","title":"Mapping the Dialog Act Annotations of the LEGO Corpus into ISO 24617-2 Communicative Functions","abstract":"ISO 24617-2, the ISO standard for dialog act annotation, sets the ground for more comparable research in the area. However, the amount of data annotated according to it is still reduced, which impairs the development of approaches for automatic recognition. In this paper, we describe a mapping of the original dialog act labels of the LEGO corpus, which have been neglected, into the communicative functions of the standard. Although this does not lead to a complete annotation according to the standard, the 347 dialogs provide a relevant amount of data that can be used in the development of automatic communicative function recognition approaches, which may lead to a wider adoption of the standard. Using the 17 English dialogs of the DialogBank as gold standard, our preliminary experiments have shown that including the mapped dialogs during the training phase leads to improved performance while recognizing communicative functions in the Task dimension.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by national funds through Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (FCT), under project UIDB\/50021\/2020.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wiegreffe-etal-2021-measuring","url":"https:\/\/aclanthology.org\/2021.emnlp-main.804.pdf","title":"Measuring Association Between Labels and Free-Text Rationales","abstract":"In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance. While prior work focuses on extractive rationales (a subset of the input words), we investigate their less-studied counterpart: free-text natural language rationales. We demonstrate that pipelines, models for faithful rationalization on information-extraction style tasks, do not work as well on \"reasoning\" tasks requiring free-text rationales. We turn to models that jointly predict and rationalize, a class of widely used high-performance models for freetext rationalization. We investigate the extent to which the labels and rationales predicted by these models are associated, a necessary property of faithful explanation. 
Via two tests, robustness equivalence and feature importance agreement, we find that state-of-the-art T5-based joint models exhibit desirable properties for explaining commonsense question-answering and natural language inference, indicating their potential for producing faithful free-text rationales. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Jonathan Berant, Peter Hase, Alon Jacovi, Yuval Pinter, Mark Riedl, Vered Shwartz, Ian Stewart, Swabha Swayamdipta, and Byron Wallace for feedback on the draft. We thank members of the AllenNLP team at the Allen Institute for Artificial Intelligence (AI2), members of the Entertainment Intelligence lab at Georgia Tech, and reviewers for valuable feedback and discussions. This work was done while SW was an intern at AI2.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"urieli-2014-improving","url":"https:\/\/aclanthology.org\/W14-6103.pdf","title":"Improving the parsing of French coordination through annotation standards and targeted features","abstract":"In the present study we explore various methods for improving the transition-based parsing of coordinated structures in French. Features targeting syntactic parallelism in coordinated structures are used as additional features when training the statistical model, but also as an efficient means to find and correct annotation errors in training corpora. In terms of annotation, we compare four different annotations for coordinated structures, demonstrate the importance of globally unambiguous annotation for punctuation, and discuss the decision process of a transition-based parser for coordination, explaining why certain annotations consistently out-perform others. We compare the gains provided by different annotation standards, by targeted features, and by using a wider beam. Our best configuration gives a 37.28% reduction in the coordination error rate, when compared to the baseline SPMRL test corpus for French after manual corrections.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank the anonymous reviewers for their in-depth reading and many helpful suggestions.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aumiller-etal-2020-unihd","url":"https:\/\/aclanthology.org\/2020.sdp-1.29.pdf","title":"UniHD@CL-SciSumm 2020: Citation Extraction as Search","abstract":"This work presents the entry by the team from Heidelberg University in the CL-SciSumm 2020 shared task at the Scholarly Document Processing workshop at EMNLP 2020. As in its previous iterations, the task is to highlight relevant parts in a reference paper, depending on a citance text excerpt from a citing paper. We participated in tasks 1A (cited text span identification) and 1B (citation context classification). Contrary to most previous works, we frame Task 1A as a search relevance problem, and introduce a 2-step re-ranking approach, which consists of a preselection based on BM25 in addition to positional document features, and a top-k re-ranking with BERT. 
For Task 1B, we follow previous submissions in applying methods that deal well with low resources and imbalanced classes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"clippinger-jr-1980-meaning","url":"https:\/\/aclanthology.org\/J80-2006.pdf","title":"Meaning and Discourse - A Computer Model of Psychoanalytic Speech and Cognition","abstract":"Colby, and Schank; he offers homage to HACKER and kudos to CONNIVER; he ignores both linguistics and AI work in natural-language generation; he invents a grammar of English; he performs validation tests on a hand-simulated program; and he closes by warning us about ignoring the social impact of computers in the future. All of this is background to a program that models one, halting paragraph of speech by a depressed patient whose request to change the form in which she pays her therapist is, we are told in great detail, a desire for intercourse.","label_nlp4sg":1,"task":["Psychoanalytic Speech and Cognition"],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":1980,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kamath-etal-2019-reversing","url":"https:\/\/aclanthology.org\/P19-1556.pdf","title":"Reversing Gradients in Adversarial Domain Adaptation for Question Deduplication and Textual Entailment Tasks","abstract":"Adversarial domain adaptation has been recently introduced as an effective technique for textual matching tasks, such as question deduplication (Shah et al., 2018). Here we investigate the use of gradient reversal on adversarial domain adaptation to explicitly learn both shared and unshared (domain specific) representations between two textual domains. In doing so, gradient reversal learns features that explicitly compensate for domain mismatch, while still distilling domain specific knowledge that can improve target domain accuracy. We evaluate reversing gradients for adversarial adaptation on multiple domains, and demonstrate that it significantly outperforms other methods on question deduplication as well as on recognizing textual entailment (RTE) tasks, achieving up to 7% absolute boost in base model accuracy on some datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ke-etal-2003-optimization","url":"https:\/\/aclanthology.org\/J03-1001.pdf","title":"Optimization Models of Sound Systems Using Genetic Algorithms","abstract":"In this study, optimization models using genetic algorithms (GAs) are proposed to study the configuration of vowels and tone systems. As in previous explanatory models that have been used to study vowel systems, certain criteria, which are assumed to be the principles governing the structure of sound systems, are used to predict optimal vowels and tone systems. In most of the earlier studies only one criterion has been considered. When two criteria are considered, they are often combined into one scalar function. 
The GA model proposed for the study of tone systems uses a Pareto ranking method that is highly applicable for dealing with optimization problems having multiple criteria. For optimization of tone systems, perceptual contrast and markedness complexity are considered simultaneously. Although the consistency between the predicted systems and the observed systems is not as significant as those obtained for vowel systems, further investigation along this line is promising.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported in part by two grants from the City University of Hong Kong, nos. 7100096 and 9010001. The second author is also supported by a grant from the Ministry of Education, Science, Sports and Culture of Japan, no. 11610512. We thank C. C. Cheng for providing us with his tone database of Chinese dialects. We are thankful to Lisa Husmann and James Minett for their kind help in preparing this article. Also we greatly appreciate the three reviewers for their very helpful comments and suggestions.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kulagina-1961-construction","url":"https:\/\/aclanthology.org\/1961.earlymt-1.32.pdf","title":"Construction of a textual analysis algorithm with the aid of a computing machine","abstract":"MISS KULAGINA described methods for the determination of syntactic analysis algorithms by a computer. A text was prepared by a linguist, to show the government relations between its words. This text was examined by the computer, and, with the aid of the given government relations, configuration tables were produced. Then, forgetting, as it were, the governments previously given, the computer analysed the text using the configuration tables. It compared its results with the human ones, and pointed out sentences it had erroneously analysed.\nThis work was done with texts of 500 words in French, English, German and Russian. Four variants of the program were used, scanning the text to the left or the right, and looking for the governing word of a given word, either first on its right, then on its left, or first on its left, then on its right. In the 500 word tests, about 40 errors were made. There was not much fluctuation in this number of errors, but the variant which gave best results was the one which went from left to right through the text, and looked for the governing word, first on the right and then on the left of the governed one.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1961,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2020-hitrans","url":"https:\/\/aclanthology.org\/2020.coling-main.370.pdf","title":"HiTrans: A Transformer-Based Context- and Speaker-Sensitive Model for Emotion Detection in Conversations","abstract":"Emotion detection in conversations (EDC) is to detect the emotion for each utterance in conversations that have multiple speakers. Different from the traditional non-conversational emotion detection, the model for EDC should be context-sensitive (e.g., understanding the whole conversation rather than one utterance) and speaker-sensitive (e.g., understanding which utterance belongs to which speaker). 
In this paper, we propose a transformer-based context- and speaker-sensitive model for EDC, namely HiTrans, which consists of two hierarchical transformers. We utilize BERT as the low-level transformer to generate local utterance representations, and feed them into another high-level transformer so that utterance representations could be sensitive to the global context of the conversation. Moreover, to make our model speaker-sensitive, we exploit an auxiliary task called pairwise utterance speaker verification (PUSV), which aims to classify whether two utterances belong to the same speaker. We evaluate our model on three benchmark datasets, namely EmoryNLP, MELD and IEMOCAP. Results show that our model outperforms previous state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wei-etal-2020-iterative","url":"https:\/\/aclanthology.org\/2020.emnlp-main.474.pdf","title":"Iterative Domain-Repaired Back-Translation","abstract":"In this paper, we focus on domain-specific translation with low resources, where in-domain parallel corpora are scarce or nonexistent. One common and effective strategy for this case is exploiting in-domain monolingual data with the back-translation method. However, the synthetic parallel data is very noisy because it is generated by imperfect out-of-domain systems, resulting in the poor performance of domain adaptation. To address this issue, we propose a novel iterative domain-repaired back-translation framework, which introduces the Domain-Repair (DR) model to refine translations in synthetic bilingual data. To this end, we construct corresponding data for the DR model training by round-trip translating the monolingual sentences, and then design the unified training framework to optimize paired DR and NMT models jointly. Experiments on adapting NMT models between specific domains and from the general domain to specific domains demonstrate the effectiveness of our proposed approach, achieving 15.79 and 4.47 BLEU improvements on average over unadapted models and back-translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for the helpful comments. This work is supported by National Key R&D Program of China (2018YFB1403202).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kibble-power-2000-integrated","url":"https:\/\/aclanthology.org\/W00-1411.pdf","title":"An integrated framework for text planning and pronominalisation","abstract":"This paper describes an implemented system which uses centering theory for planning of coherent texts and choice of referring expressions. We argue that text and sentence planning need to be driven in part by the goal of maintaining referential continuity and thereby facilitating pronoun resolution: obtaining a favourable ordering of clauses, and of arguments within clauses, is likely to increase opportunities for non-ambiguous pronoun use. Centering theory provides the basis for such an integrated approach.
Generating coherent texts according to centering theory is treated as a constraint satisfaction problem.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the UK EPSRC under grant references L51126, L77102 (Kibble) and M36960 (Power).","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2019-bigodm","url":"https:\/\/aclanthology.org\/W19-3220.pdf","title":"BIGODM System in the Social Media Mining for Health Applications Shared Task 2019","abstract":"In this study, we describe our methods to automatically classify Twitter posts conveying events of adverse drug reaction (ADR). Based on our previous experience in tackling the ADR classification task, we empirically applied the vote-based undersampling ensemble (VUE) approach along with a linear support vector machine (SVM) to develop our classifiers as part of our participation in the ACL 2019 Social Media Mining for Health Applications (SMM4H) shared task 1. The best-performing model on the test sets was trained on a merged corpus consisting of the datasets released by SMM4H 2017 and 2019. By using VUE, the corpus was randomly under-sampled with a 2:1 ratio between the negative and positive classes to create an ensemble using the linear kernel trained with features including bag-of-words, domain knowledge, negation and word embedding. The best-performing model achieved an F-measure of 0.551, which is about 5% higher than the average F-scores of 16 teams.","label_nlp4sg":1,"task":["Social Media Mining"],"method":["ensemble","support vector machine"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dakota-kubler-2021-whats","url":"https:\/\/aclanthology.org\/2021.scil-1.29.pdf","title":"What's in a Span? Evaluating the Creativity of a Span-Based Neural Constituency Parser","abstract":"Constituency parsing is generally evaluated superficially, particularly in a multiple-language setting, with only F-scores being reported. As new state-of-the-art chart-based parsers have resulted in a transition from traditional PCFG-based grammars to span-based approaches (Stern et al., 2017; Gaddy et al., 2018), we do not have a good understanding of how such fundamentally different approaches interact with various treebanks, as results show improvements across treebanks (Kitaev and Klein, 2018), but it is unclear what influence annotation schemes have on various treebank performance (Kitaev et al., 2019). In particular, a span-based parser's capability of creating novel rules is an unknown factor. We perform an analysis of how span-based parsing performs across 11 treebanks in order to examine the overall behavior of this parsing approach and the effect of the treebanks' specific annotations on results.
We find that the parser tends to prefer flatter trees, but the approach works well because it is robust enough to adapt to differences in annotation schemes across treebanks and languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the members of the Uppsala NLP Parsing Group: Joakim Nivre, Sara Stymne, Artur Kulmizev and Ali Basirat, as well as the reviewers for their comments. The first author is supported by the Swedish strategic research programme eSSENCE.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sotnikova-etal-2021-analyzing","url":"https:\/\/aclanthology.org\/2021.findings-acl.355.pdf","title":"Analyzing Stereotypes in Generative Text Inference Tasks","abstract":"Stereotypes are inferences drawn about people based on their demographic attributes, which may result in harms to users when a system is deployed. In generative language-inference tasks, given a premise, a model produces plausible hypotheses that follow either logically (natural language inference) or commonsensically (commonsense inference). Such tasks are therefore a fruitful setting in which to explore the degree to which NLP systems encode stereotypes. In our work, we study how stereotypes manifest when the potential targets of stereotypes are situated in real-life, neutral contexts. We collect human judgments on the presence of stereotypes in generated inferences, and compare how perceptions of stereotypes vary due to annotator positionality.","label_nlp4sg":1,"task":["Generative Text Inference"],"method":["Analyzing Stereotypes"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to all the reviewers who have provided helpful suggestions to improve this work. We also thank the CLIP lab at the University of Maryland for comments on previous drafts.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fischer-1979-powerful","url":"https:\/\/aclanthology.org\/P79-1028.pdf","title":"Powerful ideas in computational linguistics - Implications for problem solving, and education","abstract":"It is our firm belief that solving problems in the domain of computational linguistics (CL) can provide a set of metaphors or powerful ideas which are of great importance to many fields. We have taught several experimental classes to students from high schools and universities, and a major part of our work was centered around problems dealing with language. We have set up an experimental Language Laboratory in which the students can explore existing computer programs, modify them, design new ones and implement them. The goal was that the student should gain a deeper understanding of language itself and that he\/she should learn general and transferable problem solving skills. exercise in pattern matching and symbol manipulation,\nwhere certain keywords trigger a few prestored answers.
It may also serve as an example of how little machinery is necessary to create the illusion of understanding.","label_nlp4sg":1,"task":["computational linguistics"],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":1979,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stallard-etal-2012-unsupervised","url":"https:\/\/aclanthology.org\/P12-2063.pdf","title":"Unsupervised Morphology Rivals Supervised Morphology for Arabic MT","abstract":"If unsupervised morphological analyzers could approach the effectiveness of supervised ones, they would be a very attractive choice for improving MT performance on low-resource inflected languages. In this paper, we compare performance gains for state-of-the-art supervised vs. unsupervised morphological analyzers, using a state-of-the-art Arabic-to-English MT system. We apply maximum marginal decoding to the unsupervised analyzer, and show that this yields the best published segmentation accuracy for Arabic, while also making segmentation output more stable. Our approach gives an 18% relative BLEU gain for Levantine dialectal Arabic. Furthermore, it gives higher gains for Modern Standard Arabic (MSA), as measured on NIST MT-08, than does MADA (Habash and Rambow, 2005), a leading supervised MSA segmenter.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by DARPA under Contract Nos. HR0011-12-C00014 and HR0011-12-C00015, and by ONR MURI Contract No. W911NF-10-1-0533. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the US government. We thank Rabih Zbib for his help with interpreting Levantine Arabic segmentation output.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tillmann-xia-2003-phrase","url":"https:\/\/aclanthology.org\/N03-2036.pdf","title":"A Phrase-based Unigram Model for Statistical Machine Translation","abstract":"In this paper, we describe a phrase-based unigram model for statistical machine translation that uses a much simpler set of model parameters than similar phrase-based models. The units of translation are blocks: pairs of phrases. During decoding, we use a block unigram model and a word-based trigram language model. During training, the blocks are learned from source interval projections using an underlying word alignment. We show experimental results on block selection criteria based on unigram counts and phrase length.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by DARPA and monitored by SPAWAR under contract No.
N66001-99-2-8916.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gomez-etal-2017-discriminating","url":"https:\/\/aclanthology.org\/W17-1217.pdf","title":"Discriminating between Similar Languages Using a Combination of Typed and Untyped Character N-grams and Words","abstract":"This paper presents the CIC UALG's system that took part in the Discriminating between Similar Languages (DSL) shared task, held at the VarDial 2017 Workshop. This year's task aims at identifying 14 languages across 6 language groups using a corpus of excerpts of journalistic texts. Two classification approaches were compared: a single-step (all languages) approach and a two-step (language group and then languages within the group) approach. Features exploited include lexical features (unigrams of words) and character n-grams. Besides traditional (untyped) character n-grams, we introduce typed character n-grams in the DSL task. Experiments were carried out with different feature representation methods (binary and raw term frequency), frequency threshold values, and machine-learning algorithms: Support Vector Machines (SVM) and Multinomial Naive Bayes (MNB). Our best run in the DSL task achieved 91.46% accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the Mexican Government (Conacyt projects 240844 and 20161958, SIP-IPN 20151406, 20161947, 20161958, 20151589, 20162204, and 20162064, SNI, COFAA-IPN) and by the Portuguese Government, through Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (FCT) with reference UID\/CEC\/50021\/2013.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2021-input","url":"https:\/\/aclanthology.org\/2021.acl-short.97.pdf","title":"Input Representations for Parsing Discourse Representation Structures: Comparing English with Chinese","abstract":"Neural semantic parsers have obtained acceptable results in the context of parsing DRSs (Discourse Representation Structures). In particular, models with character sequences as input showed remarkable performance for English. But how does this approach perform on languages with a different writing system, like Chinese, a language with a large vocabulary of characters? Does rule-based tokenisation of the input help, and which granularity is preferred: characters, or words? The results are promising. Even with DRSs based on English, good results for Chinese are obtained. Tokenisation offers a small advantage for English, but not for Chinese. Overall, characters are preferred as input, both for English and Chinese.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the NWO-VICI grant \"Lost in Translation-Found in Meaning\" (288-89-003). The first author is supported by the China Scholarship Council (CSC201904890008). Arianna Bisazza was partly funded by the Netherlands Organization for Scientific Research (NWO) under project number 639.021.646. The Tesla K40 GPU used in this work was kindly donated to us by the NVIDIA Corporation. We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
Finally, we thank the anonymous reviewers for their insightful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nomoto-matsumoto-1998-discourse","url":"https:\/\/aclanthology.org\/W98-1125.pdf","title":"Discourse Parsing: A Decision Tree Approach","abstract":"The paper presents a new statistical method for parsing discourse. A parse of discourse is defined as a set of semantic dependencies among sentences that make up the discourse. A collection of news articles from a Japanese economics daily is manually marked for dependency and used as a training\/testing corpus. We use a C4.5 decision tree method to develop a model of sentential dependencies. However, rather than use class decisions made by C4.5, we exploit information on class distributions to rank possible dependencies among sentences according to their probabilistic strength and take a parse to be a set of highest-ranking dependencies. We also study the effects of features such as clue words, distance and similarity on the performance of the discourse parser. Experiments have found that the method performs reasonably well on diverse text types, scoring an accuracy rate of over 60%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eragani-etal-2014-hindi","url":"https:\/\/aclanthology.org\/W14-5146.pdf","title":"Hindi Word Sketches","abstract":"Word sketches are one-page automatic, corpus-based summaries of a word's grammatical and collocational behaviour. These are widely used for studying a language and in lexicography. Sketch Engine is a leading corpus tool which takes as input a corpus and generates word sketches for the words of that language. It also generates a thesaurus and 'sketch differences', which specify similarities and differences between near-synonyms. In this paper, we present the functionalities of Sketch Engine for Hindi. We collected HindiWaC, a web-crawled corpus for Hindi with 240 million words. We lemmatized and POS-tagged the corpus and then loaded it into Sketch Engine.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"muti-barron-cedeno-2022-checkpoint","url":"https:\/\/aclanthology.org\/2022.acl-srw.37.pdf","title":"A Checkpoint on Multilingual Misogyny Identification","abstract":"We address the problem of identifying misogyny in tweets in mono- and multilingual settings in three languages: English, Italian and Spanish. We explore model variations considering single and multiple languages both in the pre-training of the transformer and in the training of the downstream task to explore the feasibility of detecting misogyny through a transfer learning approach across multiple languages. That is, we train monolingual transformers with monolingual data and multilingual transformers with both monolingual and multilingual data. Our models reach state-of-the-art performance on all three languages. The single-language BERT models perform the best, closely followed by different configurations of multilingual BERT models.
The performance drops in zero-shot classification across languages. Our error analysis shows that multilingual and monolingual models tend to make the same mistakes.","label_nlp4sg":1,"task":["Misogyny Identification"],"method":["transfer learning","monolingual transformers","multilingual transformers","BERT","multilingual BERT"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-etal-2021-collaborative","url":"https:\/\/aclanthology.org\/2021.nlp4convai-1.11.pdf","title":"Collaborative Data Relabeling for Robust and Diverse Voice Apps Recommendation in Intelligent Personal Assistants","abstract":"Intelligent personal assistants (IPAs) such as Amazon Alexa, Google Assistant and Apple Siri extend their built-in capabilities by supporting voice apps developed by third-party developers. Sometimes the smart assistant is not able to successfully respond to user voice commands (aka utterances). There are many reasons including automatic speech recognition (ASR) error, natural language understanding (NLU) error, routing utterances to an irrelevant voice app or simply that the user is asking for a capability that is not supported yet. The failure to handle a voice command leads to customer frustration. In this paper, we introduce a fallback skill recommendation system to suggest a voice app to a customer for an unhandled voice command. One of the prominent challenges of developing a skill recommender system for IPAs is partial observation. To solve the partial observation problem, we propose the collaborative data relabeling (CDR) method. In addition, CDR also improves the diversity of the recommended skills. We evaluate the proposed method both offline and online. The offline evaluation results show that the proposed system outperforms the baselines. The online A\/B testing results show significant gains in customer experience metrics.","label_nlp4sg":1,"task":["Voice Apps Recommendation"],"method":["Collaborative Data Relabeling"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiao-etal-2020-tinybert","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.372.pdf","title":"TinyBERT: Distilling BERT for Natural Language Understanding","abstract":"Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the abundant knowledge encoded in a large \"teacher\" BERT can be effectively transferred to a small \"student\" TinyBERT. Then, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pretraining and task-specific learning stages.
This framework ensures that TinyBERT can capture the general-domain as well as the task-specific knowledge in BERT. TinyBERT 4 with 4 layers is empirically effective and achieves more than 96.8% of the performance of its teacher BERT BASE on the GLUE benchmark, while being 7.5x smaller and 9.4x faster on inference. TinyBERT 4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only \u223c28% of the parameters and \u223c31% of the inference time. Moreover, TinyBERT 6 with 6 layers performs on-par with its teacher BERT BASE.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by NSFC NO.61832020, No.61821003, 61772216, National Science and Technology Major Project No.2017ZX01032-101, Fundamental Research Funds for the Central Universities.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arnold-etal-1988-relaxed","url":"https:\/\/aclanthology.org\/1988.tmi-1.6.pdf","title":"`Relaxed' compositionality in machine translation","abstract":"An approach to translation is described that embodies certain principles about translation, in particular, the principle of 'compositionality', with the capacity for dealing with problematic\/exceptional and apparently 'non-compositional' phenomena in such a way that the treatment of both 'regular' phenomena, and other problem cases, is not affected. The discussion focusses on the translation between a class of adverbs in Dutch (e.g. 'graag') and the corresponding complex sentential structures in English ('like to'). A detailed discussion of the phenomenon is included, including aspects that are not adequately treated here.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"czarnowska-etal-2019-dont","url":"https:\/\/aclanthology.org\/D19-1090.pdf","title":"Don't Forget the Long Tail! A Comprehensive Analysis of Morphological Generalization in Bilingual Lexicon Induction","abstract":"Human translators routinely have to translate rare inflections of words, due to the Zipfian distribution of words in a language. When translating from Spanish, a good translator would have no problem identifying the proper translation of a statistically rare inflection such as habl\u00e1ramos. Note the lexeme itself, hablar, is relatively common. In this work, we investigate whether state-of-the-art bilingual lexicon inducers are capable of learning this kind of generalization. We introduce 40 morphologically complete dictionaries in 10 languages and evaluate three of the state-of-the-art models on the task of translation of less frequent morphological forms.
We demonstrate that the performance of state-of-the-art models drops considerably when evaluated on infrequent morphological inflections and then show that adding a simple morphological constraint at training time improves the performance, proving that the bilingual lexicon inducers can benefit from better encoding of morphology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lagoutte-etal-2012-composing","url":"https:\/\/aclanthology.org\/E12-1082.pdf","title":"Composing extended top-down tree transducers","abstract":"A composition procedure for linear and nondeleting extended top-down tree transducers is presented. It is demonstrated that the new procedure is more widely applicable than the existing methods. In general, the result of the composition is an extended top-down tree transducer that is no longer linear or nondeleting, but in a number of cases these properties can easily be recovered by a post-processing step.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yamashita-etal-2020-cross","url":"https:\/\/aclanthology.org\/2020.coling-main.415.pdf","title":"Cross-lingual Transfer Learning for Grammatical Error Correction","abstract":"In this study, we explore cross-lingual transfer learning in grammatical error correction (GEC) tasks. Many languages lack the resources required to train GEC models. Cross-lingual transfer learning from high-resource languages (the source models) is effective for training models of low-resource languages (the target models) for various tasks. However, in GEC tasks, the possibility of transferring grammatical knowledge (e.g., grammatical functions) across languages is not evident. Therefore, we investigate cross-lingual transfer learning methods for GEC. Our results demonstrate that transfer learning from other languages can improve the accuracy of GEC. We also demonstrate that proximity to source languages has a significant impact on the accuracy of correcting certain types of errors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully thank Yangyang Xi and Lang-8 contributors for sharing their data. This work has been partly supported by the programs of the Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI) Grant Numbers 19K12099 and 19KK0286.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vig-etal-2021-summvis","url":"https:\/\/aclanthology.org\/2021.acl-demo.18.pdf","title":"SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization","abstract":"Novel neural architectures, training strategies, and the availability of large-scale corpora have been the driving force behind recent progress in abstractive text summarization.
However, due to the black-box nature of neural models, uninformative evaluation metrics, and scarce tooling for model and data analysis, the true performance and failure modes of summarization models remain largely unknown. To address this limitation, we introduce SUMMVIS, an open-source tool for visualizing abstractive summaries that enables fine-grained analysis of the models, data, and evaluation metrics associated with text summarization. Through its lexical and semantic visualizations, the tool offers an easy entry point for in-depth model prediction exploration across important dimensions such as factual consistency or abstractiveness. The tool together with several pre-computed model outputs is available at https:\/\/summvis.com.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Michael Correll for his insightful feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"padhi-etal-2020-learning","url":"https:\/\/aclanthology.org\/2020.acl-main.354.pdf","title":"Learning Implicit Text Generation via Feature Matching","abstract":"Generative feature matching network (GFMN) is an approach for training implicit generative models for images by performing moment matching on features from pre-trained neural networks. In this paper, we present new GFMN formulations that are effective for sequential data. Our experimental results show the effectiveness of the proposed method, SeqGFMN, for three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer. SeqGFMN is stable to train and outperforms various adversarial approaches for text generation and text style transfer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hughes-etal-2004-management","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/274.pdf","title":"Management of Metadata in Linguistic Fieldwork: Experience from the ACLA Project","abstract":"Many linguistic research projects collect large amounts of multimodal data in digital formats. Despite the plethora of data collection applications available, it is often difficult for researchers to identify and integrate applications which enable the management of collections of multimodal data in addition to facilitating the actual collection process itself. In research projects that involve substantial data analysis, data management becomes a critical issue. Whilst best practice recommendations in regard to data formats themselves are propagated through projects such as EMELD, HRELP and DOBES, there is little corresponding information available regarding best practice for field metadata management beyond the provision of standards by entities such as OLAC and IMDI. These general problems are further exacerbated in the context of multiple researchers in geographically-disparate or connectivity-challenged locations. We describe the design of a solution for a group of researchers collecting data on child language acquisition in Australian indigenous communities.
We describe the context, identify pertinent issues, outline the mechanics of a solution, and finally report the implementation. In doing so, we provide an alternative model and an open source software application suite which aims to be sufficiently general that other research groups may consider adopting some or all of the infrastructure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper has been supported by the Australian Research Council Discovery Project Grant DP0343189. The authors wish to acknowledge the contributions of Felicity Meakins and Samantha Disbray (U.Melbourne), and Karin Moses (Latrobe U.), who form the initial user group for this application.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hiraoka-etal-2020-optimizing","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.120.pdf","title":"Optimizing Word Segmentation for Downstream Task","abstract":"In traditional NLP, we tokenize a given sentence as a preprocessing step, and thus the tokenization is unrelated to a target downstream task. To address this issue, we propose a novel method to explore a tokenization which is appropriate for the downstream task. Our proposed method, optimizing tokenization (OpTok), is trained to assign a high probability to such appropriate tokenization based on the downstream task loss. OpTok can be used for any downstream task which uses a vector representation of a sentence, such as text classification. Experimental results demonstrate that OpTok improves the performance of sentiment analysis and textual entailment. In addition, we introduce OpTok into BERT, the state-of-the-art contextualized embeddings, and report a positive effect.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"These research results were obtained from the commissioned research by National Institute of Information and Communications Technology (NICT), Japan.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sanguinetti-etal-2018-postwita","url":"https:\/\/aclanthology.org\/L18-1279.pdf","title":"PoSTWITA-UD: an Italian Twitter Treebank in Universal Dependencies","abstract":"Due to the spread of social media-based applications and the challenges posed by the treatment of social media texts in NLP tools, tailored approaches and ad hoc resources are required to provide the proper coverage of specific linguistic phenomena. Various attempts to produce this kind of specialized resources and tools are described in the literature. However, most of these attempts mainly focus on PoS-tagged corpora and only a few of them deal with syntactic annotation. This is particularly true for the Italian language, for which such a resource is currently missing. We thus propose the development of PoSTWITA-UD, a collection of tweets annotated according to a well-known dependency-based annotation format: the Universal Dependencies. The goal of this work is manifold, and it mainly consists in creating a resource that, especially for Italian, can be exploited for the training of NLP systems so as to enhance their performance on social media texts.
In this paper we focus on the current state of the resource.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work of Cristina Bosco and Manuela Sanguinetti has been partially funded by Fondazione CRT (Hate Speech and Social Media, project n. 2016.0688) and by Progetto di Ateneo\/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, project S1618 L2 BOSC 01).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"purver-etal-2001-means","url":"https:\/\/aclanthology.org\/W01-1616.pdf","title":"On the Means for Clarification in Dialogue","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kramer-liu-2021-data","url":"https:\/\/aclanthology.org\/2021.scil-1.31.pdf","title":"A Data-driven Approach to Crosslinguistic Structural Biases","abstract":"Introduction. Ueno and Polinsky (2009) propose two structural biases that may facilitate processing efficiency: a pro-drop bias, which states that both SOV and SVO languages will use more pro-drop with transitive structures than with intransitive structures, and an intransitive bias, which states that SOV languages will use more intransitive structures than SVO languages. Corpus data comparing English and Spanish (SVO) to Japanese and Turkish (SOV) supported their predictions. Here, we expand upon these results by using naturalistic corpora and computational tools to investigate whether and to what extent subject drop (as opposed to pro-drop; see below) and intransitive biases are present at a larger cross-linguistic scale.\nHypotheses and predictions. Our hypotheses differ slightly from those of Ueno and Polinsky (2009) due to a key difference in method. We use a data-driven approach to determine presence of subject drop and transitivity: if a verb appears in an OV, VO, or V structure, the subject has been dropped (regardless of the particular grammatical or discourse reasons), and if a verb appears in an SV or VS structure, it is intransitive. In contrast, in Ueno and Polinsky (2009), the transitivity of each verb was annotated manually. This is a potential issue because there is not a clear cross-linguistic distinction between object drop and intransitivity, indicating in turn that the presence or absence of object drop in their study was decided in a more subjective manner. An additional caveat of the method employed by Ueno and Polinsky (2009) is their treatment of word order. In this study, word order was coded categorically as either SOV or SVO. However, the use of categorical typological variables can lead to data reduction and, consequently, statistical bias, for example in the form of bimodal distributions (W\u00e4lchli, 2009). Computational analysis using gradient measures of word order can reduce bias and allow for testing more fine-grained predictions (Levshina, 2019).
We thus examine how both categorical (dominant word order) and continuous measures (headedness) predict these two biases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sadoun-2016-semi","url":"https:\/\/aclanthology.org\/W16-4713.pdf","title":"A semi automatic annotation approach for ontological and terminological knowledge acquisition","abstract":"We propose a semi-automatic method for the acquisition of specialised ontological and terminological knowledge. An ontology and a terminology are automatically built from domain experts' annotations. The ontology formalizes the common and shared conceptual vocabulary of those experts. Its associated terminology defines a glossary linking annotated terms to their semantic categories. These two resources evolve incrementally and are used for an automatic annotation of a new corpus at each iteration. The annotated corpus concerns the evaluation of French higher education and science institutions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"u-etal-2008-statistical","url":"https:\/\/aclanthology.org\/I08-1068.pdf","title":"Statistical Machine Translation Models for Personalized Search","abstract":"Web search personalization has been well studied in the past few years. Relevance feedback has been used in various ways to improve the relevance of search results. In this paper, we propose a novel usage of relevance feedback to effectively model the process of query formulation and better characterize how a user relates his query to the document that he intends to retrieve using a noisy channel model. We model a user profile as the probabilities of translation of query to document in this noisy channel using the relevance feedback obtained from the user. The user profile thus learnt is applied in a re-ranking phase to rescore the search results retrieved using an underlying search engine. We evaluate our approach by conducting experiments using relevance feedback data collected from users using a popular search engine. The results have shown improvement over the baseline, proving that our approach can be applied to personalization of web search. The experiments have also resulted in some valuable observations that learning these user profiles using snippets surrounding the results for a query gives better performance than learning from the entire document collection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"berment-boitet-2012-heloise-ariane","url":"https:\/\/aclanthology.org\/C12-3002.pdf","title":"Heloise --- An Ariane-G5 Compatible Environment for Developing Expert MT Systems Online","abstract":"Heloise is a reengineering of the specialised languages for linguistic programming (SLLPs) of Ariane-G5 running both Linux and Windows. Heloise makes the core of Ariane-G5 available to anyone willing to develop \"expert\" (i.e.
relying on linguistic expertise) operational machine translation (MT) systems in that framework, used with success since the 80s to build many prototypes and a few systems of the \"multilevel transfer\" and \"interlingua\" architecture. This initiative is part of the movement to reduce the digital divide by providing easily understandable tools that allow the development of lingware for poorly-resourced languages (\u03c0-languages). This demo article presents Heloise and provides some information about ongoing development using it.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aggarwal-etal-2019-ltl","url":"https:\/\/aclanthology.org\/S19-2121.pdf","title":"LTL-UDE at SemEval-2019 Task 6: BERT and Two-Vote Classification for Categorizing Offensiveness","abstract":"This paper describes LTL-UDE's systems for the SemEval 2019 Shared Task 6. We present results for Subtasks A and C. In Subtask A, we experiment with an embedding representation of postings and use a Multi-Layer Perceptron and BERT to categorize postings. Our best result reaches 10th place (out of 103) using BERT. In Subtask C, we applied a two-vote classification approach with minority fallback, which ranks 19th (out of 65).","label_nlp4sg":1,"task":["Categorizing Offensiveness"],"method":["BERT","Multi - Layer Perceptron","BERT"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"rehm-etal-2019-developing","url":"https:\/\/aclanthology.org\/W19-2207.pdf","title":"Developing and Orchestrating a Portfolio of Natural Legal Language Processing and Document Curation Services","abstract":"We present a portfolio of natural legal language processing and document curation services currently under development in a collaborative European project. First, we give an overview of the project and the different use cases, while, in the main part of the article, we focus upon the 13 different processing services that are being deployed in different prototype applications using a flexible and scalable microservices architecture. Their orchestration is operationalised using a content and document curation workflow manager.","label_nlp4sg":1,"task":["Developing and Orchestrating a Portfolio of Natural Legal Language Processing"],"method":["processing services"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by the project LYNX, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 780602. For more information please see http:\/\/www.lynx-project.eu.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"bell-etal-2014-uedin","url":"https:\/\/aclanthology.org\/2014.iwslt-evaluation.3.pdf","title":"The UEDIN ASR systems for the IWSLT 2014 evaluation","abstract":"This paper describes the University of Edinburgh (UEDIN) ASR systems for the 2014 IWSLT Evaluation.
Notable features of the English system include deep neural network acoustic models in both tandem and hybrid configurations with the use of multi-level adaptive networks, LHUC adaptation and Maxout units. The German system includes lightly supervised training and a new method for dictionary generation. Our voice activity detection system now uses a semi-Markov model to incorporate a prior on utterance lengths. There are improvements of up to 30% relative WER on the tst2013 English test set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We have described our ASR systems for the English and German 2014 IWSLT evaluation. Improvements to our English system, most particularly the use of AMI data, and the deployment of hybrid DNNs with LHUC and sequence training, result in a relative WER reduction of around 30% on the challenging tst2013 evaluation set compared to our 2013 system. We intend to carry over these benefits to our German system, where a lack of suitable training data remains a challenge. In the future, we plan to further investigate methods for robust DNN training and adaptation when the training data is limited or poorly-transcribed, something which should enable us to develop systems in new languages more rapidly. We also plan to work on removing the dependence on a dictionary completely, perhaps by adapting grapheme-based models. We also aim to re-incorporate RNN language models in our most competitive English system.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dernoncourt-etal-2017-neural","url":"https:\/\/aclanthology.org\/E17-2110.pdf","title":"Neural Networks for Joint Sentence Classification in Medical Paper Abstracts","abstract":"Existing models based on artificial neural networks (ANNs) for sentence classification often do not incorporate the context in which sentences appear, and classify sentences individually. However, traditional sentence classification approaches have been shown to greatly benefit from jointly classifying subsequent sentences, such as with conditional random fields. In this work, we present an ANN architecture that combines the effectiveness of typical ANN models to classify sentences in isolation, with the strength of structured prediction. Our model outperforms the state-of-the-art results on two different datasets for sequential sentence classification in medical abstracts.","label_nlp4sg":1,"task":["Joint Sentence Classification"],"method":["Neural Networks"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors thank the anonymous reviewers for their insightful comments. The project was supported by Philips Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of Philips Research.","year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dimov-etal-2020-nopropaganda","url":"https:\/\/aclanthology.org\/2020.semeval-1.194.pdf","title":"NoPropaganda at SemEval-2020 Task 11: A Borrowed Approach to Sequence Tagging and Text Classification","abstract":"This paper describes our contribution to SemEval-2020 Task 11: Detection Of Propaganda Techniques In News Articles.
We start with simple LSTM baselines and move to an autoregressive transformer decoder to predict long continuous propaganda spans for the first subtask. We also adopt an approach from relation extraction by enveloping spans mentioned above with special tokens for the second subtask of propaganda technique classification. Our models report an F-score of 44.6% and a micro-averaged F-score of 58.2% for those tasks, respectively.","label_nlp4sg":1,"task":["Detection Of Propaganda"],"method":["LSTM","autoregressive transformer"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"liang-etal-2022-bisyn","url":"https:\/\/aclanthology.org\/2022.findings-acl.144.pdf","title":"BiSyn-GAT+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis","abstract":"Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to align aspects and corresponding sentiments for aspect-specific sentiment polarity inference. It is challenging because a sentence may contain multiple aspects or complicated (e.g., conditional, coordinating, or adversative) relations. Recently, exploiting dependency syntax information with graph neural networks has been the most popular trend. Despite its success, methods that heavily rely on the dependency tree pose challenges in accurately modeling the alignment of the aspects and their words indicative of sentiment, since the dependency tree may provide noisy signals of unrelated associations (e.g., the \"conj\" relation between \"great\" and \"dreadful\" in Figure 2). In this paper, to alleviate this problem, we propose a Bi-Syntax aware Graph Attention Network (BiSyn-GAT+). Specifically, BiSyn-GAT+ fully exploits the syntax information (e.g., phrase segmentation and hierarchical structure) of the constituent tree of a sentence to model the sentiment-aware context of every single aspect (called intra-context) and the sentiment relations across aspects (called inter-context) for learning. Experiments on four benchmark datasets demonstrate that BiSyn-GAT+ outperforms the state-of-the-art methods consistently.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Natural Science Foundation of China under Grant No.61602197, Grant No.L1924068, Grant No.61772076, in part by CCF-AFSG Research Fund under Grant No.RF20210005, and in part by the fund of Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL). The authors would also like to thank the anonymous reviewers for their comments on improving the quality of this paper.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jurczyk-choi-2017-cross","url":"https:\/\/aclanthology.org\/W17-5407.pdf","title":"Cross-genre Document Retrieval: Matching between Conversational and Formal Writings","abstract":"This paper challenges a cross-genre document retrieval task, where the queries are in formal writing and the target documents are in conversational writing.
In this task, a query is a sentence extracted from either a summary or a plot of an episode in a TV show, and the target document consists of transcripts from the corresponding episode. To establish a strong baseline, we employ the current state-of-the-art search engine to perform document retrieval on the dataset collected for this work. We then introduce a structure reranking approach to improve the initial ranking by utilizing syntactic and semantic structures generated by NLP tools. Our evaluation shows an improvement of more than 4% when the structure reranking is applied, which is very promising.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-etal-2021-interpretable","url":"https:\/\/aclanthology.org\/2021.ranlp-1.179.pdf","title":"Interpretable Propaganda Detection in News Articles","abstract":"Online users today are exposed to misleading and propagandistic news articles and media posts on a daily basis. To counter this, a number of approaches have been designed aiming to achieve a healthier and safer online news and media consumption. Automatic systems are able to support humans in detecting such content; yet, a major impediment to their broad adoption is that besides being accurate, the decisions of such systems need also to be interpretable in order to be trusted and widely adopted by users. Since misleading and propagandistic content influences readers through the use of a number of deception techniques, we propose to detect and to show the use of such techniques as a way to offer interpretability. In particular, we define qualitatively descriptive features and we analyze their suitability for detecting deception techniques. We further show that our interpretable features can be easily combined with pre-trained language models, yielding state-of-the-art results.","label_nlp4sg":1,"task":["Propaganda Detection"],"method":["interpretability"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"This research is part of the Tanbih mega-project, which aims to limit the impact of \"fake news\", propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. It is developed in collaboration between the Qatar Computing Research Institute, HBKU and the MIT Computer Science and Artificial Intelligence Laboratory.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"seo-etal-2022-debiasing","url":"https:\/\/aclanthology.org\/2022.findings-acl.65.pdf","title":"Debiasing Event Understanding for Visual Commonsense Tasks","abstract":"We study event understanding as a critical step towards visual commonsense tasks. Meanwhile, we argue that current object-based event understanding is purely likelihood-based, leading to incorrect event prediction, due to biased correlation between events and objects. We propose to mitigate such biases with do-calculus, proposed in causality research, but overcoming its limited robustness, by an optimized aggregation with association-based prediction.
We show the effectiveness of our approach, intrinsically by comparing our generated events with ground-truth event annotation, and extrinsically by downstream commonsense tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by Microsoft Research Asia, SNU-NAVER Hyperscale AI Center, and IITP grants funded by the Korea government (MSIT) [2021-0-02068 SNU AIHub, IITP-2022-2020-0-01789].","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ovrelid-etal-2018-lia","url":"https:\/\/aclanthology.org\/L18-1710.pdf","title":"The LIA Treebank of Spoken Norwegian Dialects","abstract":"This article presents the LIA treebank of transcribed spoken Norwegian dialects. It consists of dialect recordings made in the period between 1950-1990, which have been digitised, transcribed, and subsequently annotated with morphological and dependency-style syntactic analysis as part of the LIA (Language Infrastructure made Accessible) project at the University of Oslo. In this article, we describe the LIA material of dialect recordings and its transcription, transliteration and further morphosyntactic annotation. We focus in particular on the extension of the native NDT annotation scheme to spoken language phenomena, such as pauses and various types of disfluencies, and present the subsequent conversion of the treebank to the Universal Dependencies scheme. The treebank currently consists of 13,608 tokens, distributed over 1396 segments taken from three different dialects of spoken Norwegian. The LIA treebank annotation is an ongoing effort and future releases will extend on the current data set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hall-klein-2012-training","url":"https:\/\/aclanthology.org\/D12-1105.pdf","title":"Training Factored PCFGs with Expectation Propagation","abstract":"PCFGs can grow exponentially as additional annotations are added to an initially simple base grammar. We present an approach where multiple annotations coexist, but in a factored manner that avoids this combinatorial explosion. Our method works with linguistically-motivated annotations, induced latent structure, lexicalization, or any mix of the three. We use a structured expectation propagation algorithm that makes use of the factored structure in two ways. First, by partitioning the factors, it speeds up parsing exponentially over the unfactored approach. Second, it minimizes the redundancy of the factors during training, improving accuracy over an independent approach. Using purely latent variable annotations, we can efficiently train and parse with up to 8 latent bits per symbol, achieving F1 scores up to 88.4 on the Penn Treebank while using two orders of magnitude fewer parameters compared to the na\u00efve approach.
Combining latent, lexicalized, and unlexicalized annotations, our best parser gets 89.4 F1 on all sentences from section 23 of the Penn Treebank.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Slav Petrov, David Burkett, Adam Pauls, Greg Durrett and the anonymous reviewers for helpful comments. We would also like to thank Daphne Koller for originally suggesting the assumed density filtering approach. This work was partially supported by BBN under DARPA contract HR0011-12-C-0014, and by an NSF fellowship to the first author.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2021-unifying","url":"https:\/\/aclanthology.org\/2021.naacl-main.432.pdf","title":"On Unifying Misinformation Detection","abstract":"In this paper, we introduce UNIFIEDM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news and verifying rumors. By grouping these tasks together, UNIFIEDM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UNIFIEDM2's learned representation is helpful for few-shot learning of unseen misinformation tasks\/datasets and the model's generalizability to unseen events.","label_nlp4sg":1,"task":["Misinformation Detection"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"chen-qian-2019-transfer","url":"https:\/\/aclanthology.org\/P19-1052.pdf","title":"Transfer Capsule Network for Aspect Level Sentiment Classification","abstract":"Aspect-level sentiment classification aims to determine the sentiment polarity of a sentence towards an aspect. Due to the high cost in annotation, the lack of aspect-level labeled data becomes a major obstacle in this area. On the other hand, document-level labeled data like reviews are easily accessible from online websites. These reviews encode sentiment knowledge in abundant contexts. In this paper, we propose a Transfer Capsule Network (TransCap) model for transferring document-level knowledge to aspect-level sentiment classification. To this end, we first develop an aspect routing approach to encapsulate the sentence-level semantic representations into semantic capsules from both aspect-level and document-level data. We then extend the dynamic routing approach to adaptively couple the semantic capsules with the class capsules under the transfer learning framework.
Experiments on SemEval datasets demonstrate the effectiveness of TransCap.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper is supported by the NSFC projects (61572376, 91646206), and the 111 project (B07037).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kanayama-etal-2014-learning","url":"https:\/\/aclanthology.org\/W14-4202.pdf","title":"Learning from a Neighbor: Adapting a Japanese Parser for Korean Through Feature Transfer Learning","abstract":"We present a new dependency parsing method for Korean applying cross-lingual transfer learning and domain adaptation techniques. Unlike existing transfer learning methods relying on aligned corpora or bilingual lexicons, we propose a feature transfer learning method with minimal supervision, which adapts an existing parser to the target language by transferring the features for the source language to the target language. Specifically, we utilize the Triplet\/Quadruplet Model, a hybrid parsing algorithm for Japanese, and apply a delexicalized feature transfer for Korean. Experiments with Penn Korean Treebank show that even using only the transferred features from Japanese achieves a high accuracy (81.6%) for Korean dependency parsing. Further improvements were obtained when a small annotated Korean corpus was combined with the Japanese training corpus, confirming that efficient cross-lingual transfer learning can be achieved without expensive linguistic resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2008-tree","url":"https:\/\/aclanthology.org\/P08-1064.pdf","title":"A Tree Sequence Alignment-based Tree-to-Tree Translation Model","abstract":"This paper presents a translation model that is based on tree sequence alignment, where a tree sequence refers to a single sequence of subtrees that covers a phrase. The model leverages the strengths of both phrase-based and linguistically syntax-based methods. It automatically learns aligned tree sequence pairs with mapping probabilities from word-aligned biparsed parallel texts. Compared with previous models, it not only captures non-syntactic phrases and discontinuous phrases with linguistically structured features, but also supports multi-level structure reordering of tree typology with larger span. This gives our model stronger expressive power than other reported models. Experimental results on the NIST MT-2005 Chinese-English translation task show that our method statistically significantly outperforms the baseline systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"crocker-1991-multiple","url":"https:\/\/aclanthology.org\/E91-1032.pdf","title":"Multiple Interpreters in a Principle-Based Model of Sentence Processing","abstract":"This paper describes a computational model of human sentence processing based on the principles and parameters paradigm of current linguistic theory.
The syntactic processing model posits four modules, recovering phrase structure, long-distance dependencies, coreference, and thematic structure. These four modules are implemented as meta-interpreters over their relevant components of the grammar, permitting variation in the deductive strategies employed by each module. These four interpreters are also 'coroutined' via the freeze directive of constraint logic programming to achieve incremental interpretation across the modules.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Elisabet Engdahl and Robin Cooper for their comments on various aspects of this work. This research was conducted under the support of an ORS award, an Edinburgh University Studentship and the Human Communication Research Centre.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"amiri-2019-neural","url":"https:\/\/aclanthology.org\/N19-1003.pdf","title":"Neural Self-Training through Spaced Repetition","abstract":"Self-training is a semi-supervised learning approach for utilizing unlabeled data to create better learners. The efficacy of self-training algorithms depends on their data sampling techniques. The majority of current sampling techniques are based on predetermined policies which may not effectively explore the data space or improve model generalizability. In this work, we tackle the above challenges by introducing a new data sampling technique based on spaced repetition that dynamically samples informative and diverse unlabeled instances with respect to individual learner and instance characteristics. The proposed model is specifically effective in the context of neural models which can suffer from overfitting and high-variance gradients when trained with small amount of labeled data. Our model outperforms current semi-supervised learning approaches developed for neural networks on publicly-available datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I sincerely thank Mitra Mohtarami and anonymous reviewers for their insightful comments and constructive feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2019-aspect","url":"https:\/\/aclanthology.org\/D19-1464.pdf","title":"Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks","abstract":"Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanism and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on it, a novel aspect-specific sentiment classification framework is raised.
Experiments on three benchmarking collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-poon-2018-deep","url":"https:\/\/aclanthology.org\/D18-1215.pdf","title":"Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision","abstract":"Deep learning has emerged as a versatile tool for a wide range of NLP tasks, due to its superior capacity in representation learning. But its applicability is limited by the reliance on annotated examples, which are difficult to produce at scale. Indirect supervision has emerged as a promising direction to address this bottleneck, either by introducing labeling functions to automatically generate noisy examples from unlabeled text, or by imposing constraints over interdependent label decisions. A plethora of methods have been proposed, each with respective strengths and limitations. Probabilistic logic offers a unifying language to represent indirect supervision, but end-to-end modeling with probabilistic logic is often infeasible due to intractable inference and learning. In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning. DPL models label decisions as latent variables, represents prior knowledge on their relations using weighted first-order logical formulas, and alternates between learning a deep neural network for the end task and refining uncertain formula weights for indirect supervision, using variational EM. This framework subsumes prior indirect supervision methods as special cases, and enables novel combination via infusion of rich domain and linguistic knowledge. Experiments on biomedical machine reading demonstrate the promise of this approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank David McAllester, Chris Quirk, and Scott Yih for useful discussions, and the three anonymous reviewers for helpful comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-su-1997-level","url":"https:\/\/aclanthology.org\/O97-1007.pdf","title":"A Level-synchronous Approach to Ill-formed Sentence Parsing","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lombardo-etal-2004-competence","url":"https:\/\/aclanthology.org\/W04-0301.pdf","title":"Competence and Performance Grammar in Incremental Processing","abstract":"The goal of this paper is to explore some consequences of the dichotomy between competence and performance from the point of view of incrementality. We introduce a TAG-based formalism that encodes a strong notion of incrementality directly into the operations of the formal system.
A left-associative operation is used to build a lexicon of extended elementary trees. Extended elementary trees allow derivations in which a single fully connected structure is maintained through the course of a left-to-right word-by-word derivation. In the paper, we describe the consequences of this view for semantic interpretation, and we also evaluate some of the computational consequences of enlarging the lexicon in this way.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"celikyilmaz-hakkani-tur-2012-joint","url":"https:\/\/aclanthology.org\/P12-1035.pdf","title":"A Joint Model for Discovery of Aspects in Utterances","abstract":"We describe a joint model for understanding user actions in natural language utterances. Our multi-layer generative approach uses both labeled and unlabeled utterances to jointly learn aspects regarding utterance's target domain (e.g. movies), intention (e.g., finding a movie) along with other semantic units (e.g., movie name). We inject information extracted from unstructured web search query logs as prior information to enhance the generative process of the natural language utterance understanding model. Using utterances from five domains, our approach shows up to 4.5% improvement on domain and dialog act performance over cascaded approach in which each semantic component is learned sequentially and a supervised joint learning model (which requires fully labeled data).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-etal-2020-named","url":"https:\/\/aclanthology.org\/2020.acl-main.577.pdf","title":"Named Entity Recognition as Dependency Parsing","abstract":"Named Entity Recognition (NER) is a fundamental task in Natural Language Processing, concerned with identifying spans of text expressing references to entities. NER research is often focused on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009). In this paper, we use ideas from graph-based dependency parsing to provide our model a global view on the input via a biaffine model (Dozat and Manning, 2017). The biaffine model scores pairs of start and end tokens in a sentence which we use to explore all spans, so that the model is able to predict named entities accurately.
We show that the model works well for both nested and flat NER through evaluation on 8 corpora, achieving SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the DALI project, ERC Grant 695662.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shinzato-torisawa-2004-acquiring","url":"https:\/\/aclanthology.org\/N04-1010.pdf","title":"Acquiring Hyponymy Relations from Web Documents","abstract":"This paper describes an automatic method for acquiring hyponymy relations from HTML documents on the WWW. Hyponymy relations can play a crucial role in various natural language processing systems. Most existing acquisition methods for hyponymy relations rely on particular linguistic patterns, such as \"NP such as NP\". Our method, however, does not use such linguistic patterns, and we expect that our procedure can be applied to a wide range of expressions for which existing methods cannot be used. Our acquisition algorithm uses clues such as itemization or listing in HTML documents and statistical measures such as document frequencies and verb-noun co-occurrences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dahl-etal-1990-training","url":"https:\/\/aclanthology.org\/H90-1044.pdf","title":"Training and Evaluation of a Spoken Language Understanding System","abstract":"This paper describes our results on a spoken language application for finding directions. The spoken language system consists of the MIT SUMMIT speech recognition system ([20]) loosely coupled to the UNISYS PUNDIT language understanding system ([9]) with SUMMIT providing the top N candidates (based on acoustic score) to the PUNDIT system. The direction finding capability is provided by an expert system which is also part of the MIT VOYAGER system ([18]). One major goal in this research has been to understand issues of training vs. coverage in porting a language understanding system to a new domain. Specifically, we wished to determine how much data it takes to train a spoken language system to a given level of performance for a new domain. We can use the answer to this question in the process of designing data collection tasks to decide how much data to collect. We address a related question, that is, how to quantify the growth of a system as a function of training, in [12].\nTo explore the relationship of training to coverage, we have developed a methodology to measure coverage of unseen material as a function of training material. Using successive batches of new material, we assessed coverage on a batch of unseen material, then trained on this material until we reached a certain level of coverage, then repeated the experiment on a new batch of material.
The system coverage seemed to level off at about 70% coverage of unseen data after 1000 sentences of training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wong-etal-2002-using","url":"https:\/\/aclanthology.org\/W02-1813.pdf","title":"Using the Segmentation Corpus to Define an Inventory of Concatenative Units for Cantonese Speech Synthesis","abstract":"The problem of word segmentation affects all aspects of Chinese language processing, including the development of text-to-speech synthesis systems. In synthesizing a Hong Kong Cantonese text, for example, words must be identified in order to model fusion of coda [p] with initial [h], and other similar effects that differentiate word-internal syllable boundaries from syllable edges that begin or end words. Accurate segmentation is necessary also for developing any list of words large enough to identify the word-internal cross-syllable sequences that must be recorded to model such effects using concatenated synthesis units. This paper describes our use of the Segmentation Corpus to constrain such units.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by a grant from the University Grants Committee of Hong Kong to Y. S. Cheung and an SBC\/Ameritech Faculty Research Award to C. Brew and M. Beckman.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fan-etal-2021-flexible","url":"https:\/\/aclanthology.org\/2021.rocling-1.5.pdf","title":"A Flexible and Extensible Framework for Multiple Answer Modes Question Answering","abstract":"This paper presents a framework to answer the questions that require various kinds of inference mechanisms (such as Extraction, Entailment-Judgement, and Summarization). Most of the previous approaches adopt a rigid framework which handles only one inference mechanism. Only a few of them adopt several answer generation modules for providing different mechanisms; however, they either lack an aggregation mechanism to merge the answers from various modules, or are too complicated to be implemented with neural networks. To alleviate the problems mentioned above, we propose a divide-and-conquer framework, which consists of a set of various answer generation modules, a dispatch module, and an aggregation module. The answer generation modules are designed to provide different inference mechanisms, the dispatch module is used to select a few appropriate answer generation modules to generate answer candidates, and the aggregation module is employed to select the final answer. We test our framework on the 2020 Formosa Grand Challenge Contest dataset.
Experiments show that the proposed framework outperforms the state-of-the-art RoBERTa-large model by about 11.4% (https:\/\/fgc.stpi.narl.org.tw\/activity\/techai2018).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pilehvar-etal-2013-align","url":"https:\/\/aclanthology.org\/P13-1132.pdf","title":"Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity","abstract":"Semantic similarity is an essential component of many Natural Language Processing applications. However, prior methods for computing semantic similarity often operate at different levels, e.g., single words or entire documents, which requires adapting the method for each data type. We present a unified approach to semantic similarity that operates at multiple levels, all the way from comparing word senses to comparing text documents. Our method leverages a common probabilistic representation over word senses in order to compare different types of linguistic data. This unified representation shows state-of-the-art performance on three tasks: semantic textual similarity, word similarity, and word sense coarsening.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234.We would like to thank Sameer S. Pradhan for providing us with an earlier version of the OntoNotes dataset.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kaplan-etal-1989-translation","url":"https:\/\/aclanthology.org\/E89-1037.pdf","title":"Translation by Structural Correspondences","abstract":"We sketch and illustrate an approach to machine translation that exploits the potential of simultaneous correspondences between separate levels of linguistic representation, as formalized in the LFG notion of codescriptions. The approach is illustrated with examples from English, German and French where the source and the target language sentence show noteworthy differences in linguistic analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"egan-2000-foreign","url":"https:\/\/aclanthology.org\/2000.amta-workshop.1.pdf","title":"The foreign language challenge in the USG and machine translation.","abstract":"The internet is no longer English only. The data is voluminous and the number of proficient linguists cannot match the day-to-day needs of several government agencies. Handling foreign languages is not limited to translating documents but goes beyond the journalistic written formats. Military, diplomatic and official interactions in the US and abroad require more than one or two foreign language skills.
The CHALLENGE is both managing the user's expectations and stimulating new areas for MT research and development.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arefyev-etal-2021-nb","url":"https:\/\/aclanthology.org\/2021.emnlp-main.717.pdf","title":"NB-MLM: Efficient Domain Adaptation of Masked Language Models for Sentiment Analysis","abstract":"While Masked Language Models (MLM) are pre-trained on massive datasets, the additional training with the MLM objective on domain or task-specific data before fine-tuning for the final task is known to improve the final performance. This is usually referred to as the domain or task adaptation step. However, unlike the initial pre-training, this step is performed for each domain or task individually and is still rather slow, requiring several GPU days compared to several GPU hours required for the final task fine-tuning. We argue that the standard MLM objective leads to inefficiency when it is used for the adaptation step because it mostly learns to predict the most frequent words, which are not necessarily related to a final task. We propose a technique for more efficient adaptation that focuses on predicting words with large weights of the Naive Bayes classifier trained for the task at hand, which are likely more relevant than the most frequent words. The proposed method provides faster adaptation and better final performance for sentiment analysis compared to the standard approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are very grateful to our anonymous reviewers for insightful comments. The contribution of Nikolay Arefyev to the paper was partially made within the framework of the HSE University Basic Research Program. This research was supported in part through computational resources of HPC facilities at HSE University (Kostenetskiy et al., 2021).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xie-etal-2021-pali","url":"https:\/\/aclanthology.org\/2021.semeval-1.93.pdf","title":"PALI at SemEval-2021 Task 2: Fine-Tune XLM-RoBERTa for Word in Context Disambiguation","abstract":"This paper presents the PALI team's winning system for SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation. We fine-tune the XLM-RoBERTa model to solve the task of word in context disambiguation, i.e., to determine whether the target word in the two contexts contains the same meaning or not. In implementation, we first specifically design an input tag to emphasize the target word in the contexts. Second, we construct a new vector on the fine-tuned embeddings from XLM-RoBERTa and feed it to a fully-connected network to output the probability of whether the target word in the context has the same meaning or not. The new vector is attained by concatenating the embedding of the [CLS] token and the embeddings of the target word in the contexts. In training, we explore several tricks, such as the Ranger optimizer, data augmentation, and adversarial training, to improve the model prediction.
Consequently, we attain the first place in all four cross-lingual tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"burstein-etal-2014-finding","url":"https:\/\/aclanthology.org\/W14-4906.pdf","title":"Finding your ``Inner-Annotator'': An Experiment in Annotator Independence for Rating Discourse Coherence Quality in Essays","abstract":"An experimental annotation method is described, showing promise for a subjective labeling task: discourse coherence quality of essays. Annotators developed personal protocols, reducing front-end resources: protocol development and annotator training. Substantial inter-annotator agreement was achieved for a 4-point scale. Correlational analyses revealed how unique linguistic phenomena were considered in annotation. Systems trained with the annotator data demonstrated the utility of the data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-kit-2011-improving","url":"https:\/\/aclanthology.org\/I11-1141.pdf","title":"Improving Part-of-speech Tagging for Context-free Parsing","abstract":"In this paper, we propose a factored parsing model consisting of a lexical and a constituent model. The discriminative lexical model allows the parser to utilize rich contextual features beyond those encoded in the context-free grammar (CFG) in use. Experiment results reveal that our parser achieves statistically significant improvement in both parsing and tagging accuracy on both English and Chinese.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper was partially supported by the Research Grants Council (RGC) of HKSAR, China, through the GRF grant 9041597.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"quasthoff-etal-2006-corpus","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/641_pdf.pdf","title":"Corpus Portal for Search in Monolingual Corpora","abstract":"A simple and flexible schema for storing and presenting monolingual language resources is proposed. In this format, data for 18 different languages is already available in various sizes. The data is provided free of charge for online use and download.
The main target is to ease the application of algorithms for monolingual and interlingual studies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barbu-mititelu-2005-case","url":"https:\/\/aclanthology.org\/I05-7012.pdf","title":"A Case Study in Automatic Building of Wordnets","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2019-vocabulary","url":"https:\/\/aclanthology.org\/P19-1367.pdf","title":"Vocabulary Pyramid Network: Multi-Pass Encoding and Decoding with Multi-Level Vocabularies for Response Generation","abstract":"We study the task of response generation. Conventional methods employ a fixed vocabulary and one-pass decoding, which not only make them prone to safe and general responses but also lack further refinement of the first generated raw sequence. To tackle the above two problems, we present a Vocabulary Pyramid Network (VPN) which is able to incorporate multi-pass encoding and decoding with multi-level vocabularies into response generation. Specifically, the dialogue input and output are represented by multi-level vocabularies which are obtained from hierarchical clustering of raw words. Then, multi-pass encoding and decoding are conducted on the multi-level vocabularies. Since VPN is able to leverage rich encoding and decoding information with multi-level vocabularies, it has the potential to generate better responses. Experiments on English Twitter and Chinese Weibo datasets demonstrate that VPN remarkably outperforms strong baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2010-improving","url":"https:\/\/aclanthology.org\/N10-1042.pdf","title":"Improving Blog Polarity Classification via Topic Analysis and Adaptive Methods","abstract":"In this paper we examine different linguistic features for sentimental polarity classification, and perform a comparative study on this task between blog and review data. We found that results on blog are much worse than reviews and investigated two methods to improve the performance on blogs. First we explored information retrieval based topic analysis to extract relevant sentences to the given topics for polarity classification. Second, we adopted an adaptive method where we train classifiers from review data and incorporate their hypothesis as features.
Both methods yielded performance gains for polarity classification on blog data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank the three anonymous reviewers for their suggestions.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rinsche-2005-computer","url":"https:\/\/aclanthology.org\/2005.mtsummit-posters.20.pdf","title":"Computer-Assisted Multingual E-communication in a Variety of Application Areas","abstract":"The paper describes the architecture and functionality of LTC Communicator, a software product from the Language Technology Centre Ltd, which offers an innovative and cost-effective response to the growing need for multilingual web based communication in various user contexts. LTC Communicator was originally developed to support software vendors operating in international markets facing the need to offer web based multilingual support to diverse customers in a variety of countries, where end users may not speak the same language as the helpdesk. This is followed by a short description of several additional application areas of this software for which LTC has received EU funding: The AMBIENT project carries out a market validation for multilingual and multimodal eLearning for business and innovation management, the EUCAM project tests multilingual eLearning in the automotive industry, including a major car manufacturer and the German and European Metal Workers Associations, and the ALADDIN project provides a mobile multilingual environment for tour guides, interacting between tour operators and tourists, with the objective of optimising their travel experience. Finally, a case study of multilingual email exchange in conjunction with web based product sales is described.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2008-exploring","url":"https:\/\/aclanthology.org\/O08-3001.pdf","title":"Exploring Shallow Answer Ranking Features in Cross-Lingual and Monolingual Factoid Question Answering","abstract":"Answer ranking is critical to a QA (Question Answering) system because it determines the final system performance. In this paper, we explore the behavior of shallow ranking features under different conditions. The features are easy to implement and are also suitable when complex NLP techniques or resources are not available for monolingual or cross-lingual tasks. We analyze six shallow ranking features, namely, SCO-QAT, keyword overlap, density, IR score, mutual information score, and answer frequency. SCO-QAT (Sum of Co-occurrence of Question and Answer Terms) is a new feature proposed by us that performed well in NTCIR CLQA. It is a co-occurrence based feature that does not need extra knowledge, word-ignoring heuristic rules, or special tools. Instead, for the whole corpus, SCO-QAT calculates co-occurrence scores based solely on the passage retrieval results. Our experiments show that there is no perfect shallow ranking feature for every condition. SCO-QAT performs the best in CC (Chinese-Chinese) QA, but it is not a good choice in E-C (English-Chinese) QA.
Overall, Frequency is the best choice for E-C QA, but its performance is impaired when translation noise is present. We also found that passage depth has little impact on shallow ranking features, and that a proper answer filter with fine-grained answer types is important for E-C QA. We measured the performance of answer ranking in terms of a newly proposed metric EAA (Expected Answer Accuracy) to cope with cases of answers that have the same score after ranking.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the National Science Council of Taiwan under Center of Excellence Grant NSC 95-2752-E-001-001-PAE, the Research Center for Humanities and Social Sciences, Academia Sinica, and Thematic program of Academia Sinica under Grant AS 95ASIA02. We would like to thank the Chinese Knowledge and Information Processing Group (CKIP) in Academia Sinica for providing us with AutoTag for Chinese word segmentation.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kiritchenko-cherry-2011-lexically","url":"https:\/\/aclanthology.org\/P11-1075.pdf","title":"Lexically-Triggered Hidden Markov Models for Clinical Document Coding","abstract":"The automatic coding of clinical documents is an important task for today's healthcare providers. Though it can be viewed as multi-label document classification, the coding problem has the interesting property that most code assignments can be supported by a single phrase found in the input document. We propose a Lexically-Triggered Hidden Markov Model (LT-HMM) that leverages these phrases to improve coding accuracy. The LT-HMM works in two stages: first, a lexical match is performed against a term dictionary to collect a set of candidate codes for a document. Next, a discriminative HMM selects the best subset of codes to assign to the document by tagging candidates as present or absent. By confirming codes proposed by a dictionary, the LT-HMM can share features across codes, enabling strong performance even on rare codes. In fact, we are able to recover codes that do not occur in the training set at all. Our approach achieves the best ever performance on the 2007 Medical NLP Challenge test set, with an F-measure of 89.84.","label_nlp4sg":1,"task":["Document Coding"],"method":["Hidden Markov Models"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"Many thanks to Berry de Bruijn, Joel Martin, and the ACL-HLT reviewers for their helpful comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aimetti-2009-modelling","url":"https:\/\/aclanthology.org\/E09-3001.pdf","title":"Modelling Early Language Acquisition Skills: Towards a General Statistical Learning Mechanism","abstract":"This paper reports the ongoing research of a thesis project investigating a computational model of early language acquisition. The model discovers word-like units from crossmodal input data and builds continuously evolving internal representations within a cognitive model of memory. Current cognitive theories suggest that young infants employ general statistical mechanisms that exploit the statistical regularities within their environment to acquire language skills.
The discovery of lexical units is modelled on this behaviour as the system detects repeating patterns from the speech signal and associates them with discrete abstract semantic tags. In its current state, the algorithm is a novel approach for segmenting speech directly from the acoustic signal in an unsupervised manner, therefore liberating it from a pre-defined lexicon. By the end of the project, it is planned to have an architecture that is capable of acquiring language and communicative skills in an online manner, and carrying out robust speech recognition. Preliminary results already show that this method is capable of segmenting and building accurate internal representations of important lexical units as 'emergent' properties from crossmodal data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"klein-manning-2002-generative","url":"https:\/\/aclanthology.org\/P02-1017.pdf","title":"A Generative Constituent-Context Model for Improved Grammar Induction","abstract":"We present a generative distributional model for the unsupervised induction of natural language syntax which explicitly models constituent yields and contexts. Parameter search with EM produces higher quality analyses than previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher F1 of 71% on nontrivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic model. We discuss errors made by the system, compare the system to previous models, and discuss upper bounds, lower bounds, and stability for this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":" 9 The data likelihood is not shown exactly, but rather we show the linear transformation of it calculated by the system.
10 Pereira and Schabes (1992) find otherwise for PCFGs.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ws-1996-international","url":"https:\/\/aclanthology.org\/W96-0400.pdf","title":"Eighth International Natural Language Generation Workshop","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pusateri-glass-2004-modeling","url":"https:\/\/aclanthology.org\/W04-3011.pdf","title":"Modeling Prosodic Consistency for Automatic Speech Recognition: Preliminary Investigations","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"golan-etal-1988-active","url":"https:\/\/aclanthology.org\/C88-1042.pdf","title":"An Active Bilingual Lexicon for Machine Translation","abstract":"An approach to the Transfer phase of a Machine Translation system is presented, where the bilingual lexicon plays an active role, guiding Transfer by means of executable descriptions of word senses. The means for lexical sense specification are, however, general enough and can in principle apply to other system architectures, e.g. in the Generation phase if Transfer is intentionally kept minimal. The active lexicon is the one and only system component which is exposed to users and can serve to linguistically control Transfer effects. A unified approach to lexicon creation and maintenance is proposed, which contains means to gradually refine sense specification and tailor the definitions to specific text domains. The underlying linguistic principles, the nature of sense distinction required for translation, and the formal structure of the lexicon are discussed. 1. Introduction While methods of monolingual Analysis and Generation are also treated in other contexts, bilingual Transfer problems are hardly investigated outside the context of Machine Translation. Research in Machine Translation can, in this case, make a specific contribution to Computational Linguistics. The general issue here is the formal representation of phrase structures and lexical units and the methodology for specifying transformations between these representations in two (or more) languages.
The role of the bilingual lexicon in the Transfer activity, and its power to assist in the resolution of mapping problems, is a key element.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dabre-etal-2021-studying","url":"https:\/\/aclanthology.org\/2021.mtsummit-research.17.pdf","title":"Studying The Impact Of Document-level Context On Simultaneous Neural Machine Translation","abstract":"In a real-time simultaneous translation setting, neural machine translation (NMT) models start generating target language tokens from incomplete source language sentences, making them harder to translate, leading to poor translation quality. Previous research has shown that document-level NMT, comprising of sentence and context encoders and a decoder, leverages context from neighbouring sentences and helps improve translation quality. In simultaneous translation settings, the context from previous sentences should be even more critical. To this end, in this paper, we propose wait-k simultaneous document-level NMT where we keep the context encoder as it is and replace the source sentence encoder and target language decoder with their wait-k equivalents. We experiment with low and high resource settings using the Asian Language Treebank (ALT) and OpenSubtitles2018 corpora, where we observe minor improvements in translation quality. We then perform an analysis of the translations obtained using our models by focusing on sentences that should benefit from the context where we found out that the model does, in fact, benefit from context but is unable to effectively leverage it, especially in a low-resource setting. This shows that there is a need for further innovation in the way useful context is identified and leveraged.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"volkova-etal-2010-emotional","url":"https:\/\/aclanthology.org\/W10-0212.pdf","title":"Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text","abstract":"Emotion analysis (EA) is a rapidly developing area in computational linguistics. An EA system can be extremely useful in fields such as information retrieval and emotion-driven computer animation. For most EA systems, the number of emotion classes is very limited and the text units the classes are assigned to are discrete and predefined. The question we address in this paper is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be more flexibly defined. We present an experiment showing how an annotation task can be set up so that untrained participants can perform emotion analysis with high agreement even when not restricted to a predetermined annotation unit and using a rich set of emotion categories.
As such, it sets the stage for the development of more complex EA systems which are closer to the actual human emotional perception of text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"al-mannai-etal-2014-unsupervised","url":"https:\/\/aclanthology.org\/W14-3628.pdf","title":"Unsupervised Word Segmentation Improves Dialectal Arabic to English Machine Translation","abstract":"We demonstrate the feasibility of using unsupervised morphological segmentation for dialects of Arabic, which are poor in linguistic resources. Our experiments using a Qatari Arabic to English machine translation system show that unsupervised segmentation helps to improve the translation quality as compared to using no segmentation or to using ATB segmentation, which was especially designed for Modern Standard Arabic (MSA). We use MSA and other dialects to improve Qatari Arabic to English machine translation, and we show that a uniform segmentation scheme across them yields an improvement of 1.5 BLEU points over using no segmentation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qiu-etal-2019-training","url":"https:\/\/aclanthology.org\/P19-1372.pdf","title":"Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References","abstract":"Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but sometimes suffers from generic responses. Previous models are generally trained based on 1-to-1 mapping from an input query to its response, which actually ignores the nature of 1-to-n mapping in dialogue that there may exist multiple valid responses corresponding to the same query. In this paper, we propose to utilize the multiple references by considering the correlation of different valid responses and modeling the 1-to-n mapping with a novel two-step generation architecture. The first generation phase extracts the common features of different responses which, combined with distinctive features obtained in the second phase, can generate multiple diverse and appropriate responses. Experimental results show that our proposed model can effectively improve the quality of response and outperform existing neural dialogue models on both automatic and human evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC No. 61672058; NSFC No. 61876196).
Rui Yan was sponsored by CCF-Tencent Open Research Fund and Alibaba Innovative Research (AIR) Fund.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heylen-etal-2008-modelling","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/818_paper.pdf","title":"Modelling Word Similarity: an Evaluation of Automatic Synonymy Extraction Algorithms.","abstract":"Vector-based models of lexical semantics retrieve semantically related words automatically from large corpora by exploiting the property that words with a similar meaning tend to occur in similar contexts. Despite their increasing popularity, it is unclear which kind of semantic similarity they actually capture and for which kind of words. In this paper, we use three vector-based models to retrieve semantically related words for a set of Dutch nouns and we analyse whether three linguistic properties of the nouns influence the results. In particular, we compare results from a dependency-based model with those from a 1st and 2nd order bag-of-words model and we examine the effect of the nouns' frequency, semantic specificity and semantic class. We find that all three models find more synonyms for high-frequency nouns and those belonging to abstract semantic classes. Semantic specificity does not have a clear influence.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sennhauser-berwick-2018-evaluating","url":"https:\/\/aclanthology.org\/W18-5414.pdf","title":"Evaluating the Ability of LSTMs to Learn Context-Free Grammars","abstract":"While long short-term memory (LSTM) neural net architectures are designed to capture sequence information, human language is generally composed of hierarchical structures. This raises the question as to whether LSTMs can learn hierarchical structures. We explore this question with a well-formed bracket prediction task using two types of brackets modeled by an LSTM. Demonstrating that such a system is learnable by an LSTM is the first step in demonstrating that the entire class of CFLs is also learnable. We observe that the model requires exponential memory in terms of the number of characters and embedded depth, where a sub-linear memory should suffice. Still, the model does more than memorize the training input. It learns how to distinguish between relevant and irrelevant information. On the other hand, we also observe that the model does not generalize well. We conclude that LSTMs do not learn the relevant underlying context-free rules, suggesting the good overall performance is attained rather by an efficient way of evaluating nuisance variables. LSTMs are a way to quickly reach good results for many natural language tasks, but to understand and generate natural language one has to investigate other concepts that can make more direct use of natural language's structural nature.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank Beracah Yankama for his help, valuable discussions and the machines to run the experiments on. We thank Prof. Thomas Hofmann for the fast and easy administrative process at ETH Zurich and also for granting access to high-computing clusters.
Additionally we are very grateful for the financial support provided by the Zeno Karl Schindler Foundation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"watanabe-etal-1999-translation","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.90.pdf","title":"Translation camera","abstract":"In this paper, we propose a camera system which translates Japanese texts in a scene. The system is portable and consists of four components: digital camera, character image extraction process, character recognition process, and translation process. The system extracts character strings from a region which a user specifies, and translates them into English.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sankar-ravi-2019-deep","url":"https:\/\/aclanthology.org\/W19-5901.pdf","title":"Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes","abstract":"Open domain dialog systems face the challenge of being repetitive and producing generic responses. In this paper, we demonstrate that by conditioning the response generation on interpretable discrete dialog attributes and composed attributes, it helps improve the model perplexity and results in diverse and interesting non-redundant responses. We propose to formulate the dialog attribute prediction as a reinforcement learning (RL) problem and use policy gradients methods to optimize utterance generation using long-term rewards. Unlike existing RL approaches which formulate the token prediction as a policy, our method reduces the complexity of the policy optimization by limiting the action space to dialog attributes, thereby making the policy optimization more practical and sample efficient. We demonstrate this with experimental and human evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kohn-menzel-2013-incremental","url":"https:\/\/aclanthology.org\/R13-1048.pdf","title":"Incremental and Predictive Dependency Parsing under Real-Time Conditions","abstract":"We present an incremental dependency parser which derives predictions about the upcoming structure in a parse-as-you-type mode. Drawing on the inherent strong anytime property of the underlying transformation-based approach, an existing system, jwcdg, has been modified to make it truly interruptible. A speed-up was achieved by means of parallel processing. In addition, MaltParser is used to bootstrap the search which increases accuracy under tight time constraints.
With these changes, jwcdg can effectively utilize the full time span until the next word becomes available which results in an optimal quality-time trade-off.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2011-automatic","url":"https:\/\/aclanthology.org\/P11-2028.pdf","title":"Automatic Evaluation of Chinese Translation Output: Word-Level or Character-Level?","abstract":"Word is usually adopted as the smallest unit in most tasks of Chinese language processing. However, for automatic evaluation of the quality of Chinese translation output when translating from other languages, either a word-level approach or a character-level approach is possible. So far, there has been no detailed study to compare the correlations of these two approaches with human assessment. In this paper, we compare word-level metrics with character-level metrics on the submitted output of English-to-Chinese translation systems in the IWSLT'08 CT-EC and NIST'08 EC tasks. Our experimental results reveal that character-level metrics correlate with human assessment better than word-level metrics. Our analysis suggests several key reasons behind this finding.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was done for CSIDM Project No. CSIDM-200804 partially funded by a grant from the National Research Foundation (NRF) administered by the Media Development Authority (MDA) of Singapore. This research has also been funded by the Natural Science Foundation of China under Grant No. 60975053, 61003160, and 60736014, and also supported by the External Cooperation Program of the Chinese Academy of Sciences. We thank Kun Wang, Daniel Dahlmeier, Matthew Snover, and Michael Denkowski for their kind assistance.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zeng-etal-2019-faceted","url":"https:\/\/aclanthology.org\/D19-5317.pdf","title":"Faceted Hierarchy: A New Graph Type to Organize Scientific Concepts and a Construction Method","abstract":"On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods heavily rely on hypernym detection, however, the faceted relations are parent-to-child links but the hypernym relation is a multi-hop, i.e., ancestor-to-descendent link with a specific facet \"type-of\". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendent relations from a data science corpus. And we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships.
It resolves conflicts by maintaining the acyclic structure of a hierarchy.","label_nlp4sg":1,"task":["Organize Scientific Concepts"],"method":["hierarchy growth algorithm"],"goal1":"Industry, Innovation and Infrastructure","goal2":"Quality Education","goal3":null,"acknowledgments":"This work was supported by Natural Science Foundation Grant CCF-1901059. ","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coutinho-etal-2016-assessing","url":"https:\/\/aclanthology.org\/L16-1211.pdf","title":"Assessing the Prosody of Non-Native Speakers of English: Measures and Feature Sets","abstract":"In this paper, we describe a new database with audio recordings of non-native (L2) speakers of English, and the perceptual evaluation experiment conducted with native English speakers for assessing the prosody of each recording. These annotations are then used to compute the gold standard using different methods, and a series of regression experiments is conducted to evaluate their impact on the performance of a regression model predicting the degree of naturalness of L2 speech. Further, we compare the relevance of different feature groups modelling prosody in general (without speech tempo), speech rate and pauses modelling speech tempo (fluency), voice quality, and a variety of spectral features. We also discuss the impact of various fusion strategies on performance. Overall, our results demonstrate that the prosody of non-native speakers of English as L2 can be reliably assessed using supra-segmental audio features; prosodic features seem to be the most important ones.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jacobs-etal-1992-ge","url":"https:\/\/aclanthology.org\/M92-1026.pdf","title":"GE-CMU: Description of the TIPSTER\/SHOGUN System as Used for MUC-4","abstract":"The GE-CMU team is developing the TIPSTER\/SHOGUN system under the government-sponsored TIPSTER program, which aims to advance coverage, accuracy, and portability in text interpretation. The system will soon be tested on Japanese and English news stories in two new domains. MUC-4 served as the first substantial test of the combined system. Because the SHOGUN system takes advantage of most of the components of the GE NLTOOLSET except for the parser, this paper supplements the NLTOOLSET system description by explaining the relationship between the two systems and comparing their performance on the examples from MUC-4.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fowlie-2013-order","url":"https:\/\/aclanthology.org\/W13-3002.pdf","title":"Order and Optionality: Minimalist Grammars with Adjunction","abstract":"Adjuncts are characteristically optional, but many, such as adverbs and adjectives, are strictly ordered. In Minimalist Grammars (MGs), it is straightforward to account for optionality or ordering, but not both.
I present an extension of MGs, MGs with Adjunction, which accounts for optionality and ordering simply by keeping track of two pieces of information at once: the original category of the adjoined-to phrase, and the category of the adjunct most recently adjoined. By imposing a partial order on the categories, the Adjoin operation can require that higher adjuncts precede lower adjuncts, but not vice versa, deriving order.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"somers-etal-1997-multilingual","url":"https:\/\/aclanthology.org\/A97-1040.pdf","title":"Multilingual Generation and Summarization of Job Adverts: the TREE Project","abstract":"A multilingual Internet-based employment advertisement system is described. Job ads are submitted as e-mail texts, analysed by an example-based pattern matcher and stored in language-independent schemas in an object-oriented database. Users can search the database in their own language and get customized summaries of the job ads. The query engine uses symbolic case-based reasoning techniques, while the generation module integrates canned text, templates, and grammar rules to produce texts and hypertexts in a simple way.","label_nlp4sg":1,"task":["Summarization","Multilingual Generation"],"method":["query engine","pattern matcher"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"graf-2020-curbing","url":"https:\/\/aclanthology.org\/2020.scil-1.27.pdf","title":"Curbing Feature Coding: Strictly Local Feature Assignment","abstract":"Graf (2017) warns that every syntactic formalism faces a severe overgeneration problem because of the hidden power of subcategorization. Any constraint definable in monadic second-order logic can be compiled into the category system so that it is indirectly enforced as part of subcategorization. Not only does this kind of feature coding deprive syntactic proposals of their empirical bite, it also undermines computational efforts to limit syntactic formalisms via subregular complexity. This paper presents a subregular solution to feature coding. Instead of features being a cheap resource that comes for free, features must be assigned by a transduction. In particular, category features must be assigned by an input strictly local (ISL) tree-to-tree transduction, defined here for the first time. The restriction to ISL transductions correctly rules out various deviant category systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper was supported by the National Science Foundation under Grant No. BCS-1845344. This paper benefited greatly from the feedback of Jeffrey Heinz, Dakotah Lambert, and three anonymous reviewers. 
I am indebted to the participants of the University of Troms\u00f8's workshop Thirty Million Theories of Syntactic Features, which lit the initial spark that grew into the ideas reported here.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maxwelll-smith-etal-2020-applications","url":"https:\/\/aclanthology.org\/2020.bea-1.12.pdf","title":"Applications of Natural Language Processing in Bilingual Language Teaching: An Indonesian-English Case Study","abstract":"Multilingual corpora are difficult to compile and a classroom setting adds pedagogy to the mix of factors which make this data so rich and problematic to classify. In this paper, we set out methodological considerations of using automated speech recognition to build a corpus of teacher speech in an Indonesian language classroom. Our preliminary results (64% word error rate) suggest these tools have the potential to speed data collection in this context. We provide practical examples of our data structure, details of our piloted computer-assisted processes, and fine-grained error analysis. Our study is informed and directed by genuine research questions and discussion in both the education and computational linguistics fields. We highlight some of the benefits and risks of using these emerging technologies to analyze the complex work of language teachers and in education more generally.","label_nlp4sg":1,"task":["Bilingual Language Teaching"],"method":["automated speech recognition","corpus"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the contribution of anonymous reviewers, colleagues, and study participant to this paper.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cerbah-2000-exogeneous","url":"https:\/\/aclanthology.org\/C00-1022.pdf","title":"Exogeneous and Endogeneous Approaches to Semantic Categorization of Unknown Technical Terms","abstract":"Acquiring and updating terminological resources are difficult and tedious tasks, especially when semantic information should be provided. This paper deals with Term Semantic Categorization. The goal of this process is to assign semantic categories to unknown technical terms. We propose two approaches to the problem that rely on different knowledge sources. The exogeneous approach exploits contextual information extracted from corpora. The endogeneous approach relies on a lexical analysis of the technical terms. After describing the two implemented methods, we present the experiments that we conducted on significant test sets. The results demonstrate that term categorization can provide a reliable help in the terminology acquisition processes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kazantseva-szpakowicz-2010-summarizing","url":"https:\/\/aclanthology.org\/J10-1003.pdf","title":"Summarizing Short Stories","abstract":"We present an approach to the automatic creation of extractive summaries of literary short stories.
The summaries are produced with a specific objective in mind: to help a reader decide whether she would be interested in reading the complete story. To this end, the summaries give the user relevant information about the setting of the story without revealing its plot. The system relies on assorted surface indicators about clauses in the short story, the most important of which are those related to the aspectual type of a clause and to the main entities in a story. Fifteen judges evaluated the summaries on a number of extrinsic and intrinsic measures. The outcome of this evaluation suggests that the summaries are helpful in achieving the original objective.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Connexor Oy and especially to Atro Voutilainen for permission to use the Connexor Machinese Syntax parser free of charge for research purposes. We thank John Conroy and Judith Schlesinger for running CLASSY on our test set, and Andrew Hickl for doing it with GISTexter. Ana Arregui helped us recruit students for the evaluation. Many thanks to the annotators, summary writers, and raters, who helped evaluate our summarizer. A special thank-you goes to the anonymous reviewers for Computational Linguistics for all their incisive, insightful, and immensely helpful comments. Support for this work comes from the Natural Sciences and Engineering Research Council of Canada.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sun-etal-2021-medai","url":"https:\/\/aclanthology.org\/2021.semeval-1.183.pdf","title":"MedAI at SemEval-2021 Task 10: Negation-aware Pre-training for Source-free Negation Detection Domain Adaptation","abstract":"Due to the increasing concerns for data privacy, source-free unsupervised domain adaptation attracts more and more research attention, where only a trained source model is assumed to be available, while the labeled source data remains private. To get promising adaptation results, we need to find effective ways to transfer knowledge learned in source domain and leverage useful domain specific information from target domain at the same time. This paper describes our winning contribution to SemEval 2021 Task 10: Source-Free Domain Adaptation for Semantic Processing. Our key idea is to leverage the model trained on source domain data to generate pseudo labels for target domain samples. Besides, we propose Negation-aware Pre-training (NAP) to incorporate negation knowledge into the model.
Our method wins the 1st place with an F1-score of 0.822 on the official blind test set of the Negation Detection Track.","label_nlp4sg":1,"task":["Negation Detection Domain Adaptation"],"method":["Negation-aware Pre-training"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"ichimura-etal-2000-kana","url":"https:\/\/aclanthology.org\/C00-1050.pdf","title":"Kana-Kanji Conversion System with Input Support Based on Prediction","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-etal-2015-context","url":"https:\/\/aclanthology.org\/P15-1023.pdf","title":"A Context-Aware Topic Model for Statistical Machine Translation","abstract":"Lexical selection is crucial for statistical machine translation. Previous studies separately exploit sentence-level contexts and document-level topics for lexical selection, neglecting their correlations. In this paper, we propose a context-aware topic model for lexical selection, which not only models local contexts and global topics but also captures their correlations. The model uses target-side translations as hidden variables to connect document topics and source-side local contextual words. In order to learn hidden variables and distributions from data, we introduce a Gibbs sampling algorithm for statistical estimation and inference. A new translation probability based on distributions learned by the model is integrated into a translation system for lexical selection. Experiment results on NIST Chinese-English test sets demonstrate that 1) our model significantly outperforms previous lexical selection methods and 2) modeling correlations between local words and global topics can further improve translation quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors were supported by National Natural Science Foundation of China (Grant Nos. 61303082 and 1301021018). We also thank the anonymous reviewers for their insightful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kahane-mazziotta-2015-dependency","url":"https:\/\/aclanthology.org\/W15-2121.pdf","title":"Dependency-based analyses for function words -- Introducing the polygraphic approach","abstract":"This paper scrutinizes various dependency-based representations of the syntax of function words, such as prepositions. The focus is on the underlying formal object used to encode the linguistic analyses and its relation to the corresponding linguistic theory. The polygraph structure is introduced: it consists of a generalization of the concept of graph that allows edges to be vertices of other edges.
Such a structure is used to encode dependency-based analyses that are founded on two kinds of morphosyntactic criteria: presence constraints and distributional constraints.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Brigitte Antoine, Marie Steffens and Elizabeth Rowley-Jolivet for proofreading and Timothy Osborne for contents corrections and suggestions.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stymne-etal-2013-feature","url":"https:\/\/aclanthology.org\/W13-3308.pdf","title":"Feature Weight Optimization for Discourse-Level SMT","abstract":"We present an approach to feature weight optimization for document-level decoding. This is an essential task for enabling future development of discourse-level statistical machine translation, as it allows easy integration of discourse features in the decoding process. We extend the framework of sentence-level feature weight optimization to the document-level. We show experimentally that we can get competitive and relatively stable results when using a standard set of features, and that this framework also allows us to optimize document-level features, which can be used to model discourse phenomena.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Swedish strategic research programme eSSENCE.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lawley-schubert-2022-mining","url":"https:\/\/aclanthology.org\/2022.acl-srw.25.pdf","title":"Mining Logical Event Schemas From Pre-Trained Language Models","abstract":"We present NESL (the Neuro-Episodic Schema Learner), an event schema learning system that combines large language models, FrameNet parsing, a powerful logical representation of language, and a set of simple behavioral schemas meant to bootstrap the learning process. In lieu of a pre-made corpus of stories, our dataset is a continuous feed of \"situation samples\" from a pre-trained language model, which are then parsed into FrameNet frames, mapped into simple behavioral schemas, and combined and generalized into complex, hierarchical schemas for a variety of everyday scenarios. We show that careful sampling from the language model can help emphasize stereotypical properties of situations and de-emphasize irrelevant details, and that the resulting schemas specify situations more comprehensively than those learned by other systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chytil-1982-computational","url":"https:\/\/aclanthology.org\/C82-2016.pdf","title":"Computational Linguistics and Its Role in Mechanized or Man-Machine Cognitive Problem Solving","abstract":"In the present paper cognitive science will be conceived as a discipline which theoretically supports the following constructing of various cognitive problem solving systems running in a CA mode or in a man-machine mode or in the form of cognitive robots.
The role of computational linguistics in this context will be demonstrated and justified.\nI. Cognitive problem solving. Hence \"cognition\" is not considered here as an object of psychological analysis but rather in the sense of a man-machine cognitive process, i.e. of a purpose built point of view. Therefore, an analogy with industrial mass-production will be emphasized and the necessary theoretical questions for projecting and setting up such efficient mechanized or computer assisted cognitive systems will be studied. Such systems can be employed especially in scientific research since it presents a systematic form of activity in the field of general cognition. Following our analogy it is to say that such \"factories on cognition\" should not be identified with usual computing centers. The considerations will be focused on a so-called cognitive problem. It is a question raised for inquiry, investigation or discovery, which needs to be solved and where the final solution will present new knowledge. Non-cognitive problems are designated as technical problems. Their final solution consists of a desirable change in a material system. In the following we restrict our account to factual extramathematical cognitive problems the solving of which necessarily satisfies the two conditions: I. a computer is used, at least, partly in the solving; II. mathematical means are included, at least, in a part of the process of reasoning (as an algorithm or inferring in a suitable calculus).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cano-bojar-2019-efficiency","url":"https:\/\/aclanthology.org\/W19-8630.pdf","title":"Efficiency Metrics for Data-Driven Models: A Text Summarization Case Study","abstract":"Using data-driven models for solving text summarization or similar tasks has become very common in the last years. Yet most of the studies report basic accuracy scores only, and nothing is known about the ability of the proposed models to improve when trained on more data. In this paper, we define and propose three data efficiency metrics: data score efficiency, data time deficiency and overall data efficiency. We also propose a simple scheme that uses those metrics and apply it for a more comprehensive evaluation of popular methods on text summarization and title generation tasks. For the latter task, we process and release a huge collection of 35 million abstract-title pairs from scientific articles. Our results reveal that among the tested models, the Transformer is the most efficient on both tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work was supported by the project No. CZ.02.2.69\/0.0\/0.0\/16 027\/0008495 (International Mobility of Researchers at Charles University) of the Operational Programme Research, Development and Education, the project no.
19-26934X (NEUREM3) of the Czech Science Foundation and ELITR (H2020-ICT-2018-2-825460) of the EU.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ke-etal-2021-pre","url":"https:\/\/aclanthology.org\/2021.naacl-main.436.pdf","title":"Pre-training with Meta Learning for Chinese Word Segmentation","abstract":"Recent researches show that pre-trained models (PTMs) are beneficial to Chinese Word Segmentation (CWS). However, PTMs used in previous works usually adopt language modeling as pre-training tasks, lacking task-specific prior segmentation knowledge and ignoring the discrepancy between pre-training tasks and downstream CWS tasks. In this paper, we propose a CWS-specific pre-trained model METASEG, which employs a unified architecture and incorporates meta learning algorithm into a multi-criteria pre-training task. Empirical results show that METASEG could utilize common prior segmentation knowledge from different existing criteria and alleviate the discrepancy between pre-trained models and downstream CWS tasks. Besides, METASEG can achieve new state-of-the-art performance on twelve widely-used CWS datasets and significantly improve model performance in low-resource settings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2020-hyperbolic","url":"https:\/\/aclanthology.org\/2020.acl-main.283.pdf","title":"Hyperbolic Capsule Networks for Multi-Label Classification","abstract":"Although deep neural networks are effective at extracting high-level features, classification methods usually encode an input into a vector representation via simple feature aggregation operations (e.g. pooling). Such operations limit the performance. For instance, a multi-label document may contain several concepts. In this case, one vector can not sufficiently capture its salient and discriminative content. Thus, we propose Hyperbolic Capsule Networks (HYPERCAPS) for Multi-Label Classification (MLC), which have two merits. First, hyperbolic capsules are designed to capture fine-grained document information for each label, which has the ability to characterize complicated structures among labels and documents. Second, Hyperbolic Dynamic Routing (HDR) is introduced to aggregate hyperbolic capsules in a label-aware manner, so that the label-level discriminative information can be preserved along the depth of neural networks. To efficiently handle large-scale MLC datasets, we additionally present a new routing method to adaptively adjust the capsule number during routing. Extensive experiments are conducted on four benchmark datasets. Compared with the state-of-the-art methods, HYPERCAPS significantly improves the performance of MLC especially on tail labels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Natural Science Foundation of China under Grant 61822601, 61773050, and 61632004; the Beijing Natural Science Foundation under Grant Z180006; National Key Research and Development Program (2017YFC1703506); the Fundamental Research Funds for the Central Universities (2019JBZ110).
We thank the anonymous reviewers for their valuable feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hartmann-etal-2014-large","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/413_Paper.pdf","title":"A Large Corpus of Product Reviews in Portuguese: Tackling Out-Of-Vocabulary Words","abstract":"Web 2.0 has allowed a never imagined communication boom. With the widespread use of computational and mobile devices, anyone, in practically any language, may post comments in the web. As such, formal language is not necessarily used. In fact, in these communicative situations, language is marked by the absence of more complex syntactic structures and the presence of internet slang, with missing diacritics, repetitions of vowels, and the use of chat-speak style abbreviations, emoticons and colloquial expressions. Such language use poses severe new challenges for Natural Language Processing (NLP) tools and applications, which, so far, have focused on well-written texts. In this work, we report the construction of a large web corpus of product reviews in Brazilian Portuguese and the analysis of its lexical phenomena, which support the development of a lexical normalization tool for, in future work, subsidizing the use of standard NLP products for web opinion mining and summarization purposes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper results from an academic agreement between University of S\u00e3o Paulo and Samsung Eletr\u00f4nica da Amaz\u00f4nia Ltda, to whom the authors are grateful. The authors are also grateful to FAPESP and CNPq for supporting this work.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bies-etal-2016-comparison","url":"https:\/\/aclanthology.org\/W16-1004.pdf","title":"A Comparison of Event Representations in DEFT","abstract":"This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets. 
Subtype | Modality\/Realis | Arguments | Trigger
Light ERE: 8 types, 33 subtypes | Actual | Labelled, must include at least one | Minimal span
Rich ERE: 9 types, 38 subtypes | Actual, Generic, Other | Labelled, but events with no arguments are possible | Minimal span
Event Argument 2014-2015: 9 types, 31 subtypes | Actual, Generic, Other | At least one | Event mentions are not tagged
Event Nugget 2014: 8 types, 33 subtypes | Actual, Generic, Other | No | Maximal semantic unit
Event Nugget 2015: 9 types, 38 subtypes | Actual, Generic, Other | No | Minimal span
Event-event relation: 8 types, 33 subtypes | Actual, Generic, Other | No | Minimal span
RED: Untyped, all predicating events","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based on research sponsored by Air Force Research Laboratory and Defense Advanced Research Projects Agency under agreement number FA8750-13-2-0045. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory and Defense Advanced Research Projects Agency or the U.S. Government.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rabinovich-etal-2018-native","url":"https:\/\/aclanthology.org\/Q18-1024.pdf","title":"Native Language Cognate Effects on Second Language Lexical Choice","abstract":"We present a computational analysis of cognate effects on the spontaneous linguistic productions of advanced non-native speakers. Introducing a large corpus of highly competent non-native English speakers, and using a set of carefully selected lexical items, we show that the lexical choices of non-natives are affected by cognates in their native language. This effect is so powerful that we are able to reconstruct the phylogenetic language tree of the Indo-European language family solely from the frequencies of specific lexical items in the English of authors with various native languages. We quantitatively analyze non-native lexical choice, highlighting cognate facilitation as one of the important phenomena shaping the language of non-native speakers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the National Science Foundation through award IIS-1526745. We would like to thank Anat Prior and Steffen Eger for valuable suggestions. We are also grateful to Sivan Rabinovich for much advice and helpful comments. Finally, we are thankful to our action editor, Ivan Titov, and three anonymous reviewers for their constructive feedback.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-1993-principle","url":"https:\/\/aclanthology.org\/P93-1016.pdf","title":"Principle-Based Parsing Without Overgeneration","abstract":"Overgeneration is the main source of computational complexity in previous principle-based parsers. This paper presents a message passing algorithm for principle-based parsing that avoids the overgeneration problem.
This algorithm has been implemented in C++ and successfully tested with example sentences from (van Riemsdijk and Williams, 1986).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abdul-mageed-etal-2021-nadi","url":"https:\/\/aclanthology.org\/2021.wanlp-1.28.pdf","title":"NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task","abstract":"We present the findings and results of the Second Nuanced Arabic Dialect Identification Shared Task (NADI 2021). This Shared Task includes four subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask 1.1), country-level dialect identification (Subtask 1.2), province-level MSA identification (Subtask 2.1), and province-level sub-dialect identification (Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21 Arab countries, collected from the Twitter domain. A total of 53 teams from 23 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 16 submissions for Subtask 1.1 from five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for Subtask 2.1 from four teams, and 13 Submissions for subtask 2.2 from four teams.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada, the Social Sciences Research Council of Canada, Compute Canada, and UBC Sockeye.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"degaetano-ortlieb-etal-2012-feature","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/268_Paper.pdf","title":"Feature Discovery for Diachronic Register Analysis: a Semi-Automatic Approach","abstract":"In this paper, we present corpus-based procedures to semi-automatically discover features relevant for the study of recent language change in scientific registers. First, linguistic features potentially adherent to recent language change are extracted from the SciTex Corpus. Second, features are assessed for their relevance for the study of recent language change in scientific registers by means of correspondence analysis. The discovered features will serve for further investigations of the linguistic evolution of newly emerged scientific registers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The project Register im Kontakt: Zur Genese spezialisierter wissenschaftlicher Diskurse (Registers in contact: linguistic evolution of specialized scientific registers) is supported by a grant from Deutsche Forschungsgemeinschaft (DFG). We are especially grateful to Hannah Kermes for providing the necessary corpus processing pipeline. Also, we wish to thank the anonymous reviewers for their suggestions for improving our paper. All remaining errors remain ours. 
","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"costa-branco-2013-temporal","url":"https:\/\/aclanthology.org\/W13-0106.pdf","title":"Temporal Relation Classification Based on Temporal Reasoning","abstract":"The area of temporal information extraction has recently focused on temporal relation classification. This task is about classifying the temporal relation (precedence, overlap, etc.) holding between two given entities (events, dates or times) mentioned in a text. This interest has largely been driven by the two recent TempEval competitions. Even though logical constraints on the structure of possible sets of temporal relations are obvious, this sort of information deserves more exploration in the context of temporal relation classification. In this paper, we show that logical inference can be used to improve-sometimes dramaticallyexisting machine learned classifiers for the problem of temporal relation classification. 02\/06\/1998 22:19:00<\/TIMEX3> WASHINGTON The economy created<\/EVENT> jobs at a surprisingly robust pace in January<\/TIMEX3>, the government reported<\/EVENT> on Friday<\/TIMEX3>, evidence that America's economic stamina has withstood<\/EVENT> any disruptions<\/EVENT> caused<\/EVENT> so far by the financial tumult<\/EVENT> in Asia.<\/s> ","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcinnes-2008-unsupervised","url":"https:\/\/aclanthology.org\/P08-3009.pdf","title":"An Unsupervised Vector Approach to Biomedical Term Disambiguation: Integrating UMLS and Medline","abstract":"This paper introduces an unsupervised vector approach to disambiguate words in biomedical text that can be applied to all-word disambiguation. We explore using contextual information from the Unified Medical Language System (UMLS) to describe the possible senses of a word. We experiment with automatically creating individualized stoplists to help reduce the noise in our dataset. We compare our results to SenseClusters and Humphrey et al. (2006) using the NLM-WSD dataset and with SenseClusters using conflated data from the 2005 Medline Baseline.","label_nlp4sg":1,"task":["Biomedical Term Disambiguation"],"method":["Unsupervised Vector Approach"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The author thanks Ted Pedersen, John Carlis and Siddharth Patwardhan for their comments.Our experiments were conducted using CuiTools v0.15, which is freely available from http:\/\/cuitools.sourceforge.net.","year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mori-nakagawa-1996-zero","url":"https:\/\/aclanthology.org\/C96-2132.pdf","title":"Zero Pronouns and Conditionals in Japanese Instruction Manuals","abstract":"This paper proposes a method of the zero pronoun resohition, which is one of the essential processes in understanding systems for Japanese manual sentences. It is based on pragmatic properties of Japanese conditionals. We examined a uumber of sentences appearing in Japanese manuals according to the classillcation based on the types of agent and the types of verb phrase. 
As a result, we obtained the following pattern of usage in matrix clauses: 1) The connective particles TO and REBA have the same distribution of usage. TARA and NARA have the same distribution of usage. 2) The distribution of usage of TO and REBA, and that of TARA and NARA are complementary to each other. We show that these distributions of usage can be used for resolution of zero subjects.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hwa-2000-sample","url":"https:\/\/aclanthology.org\/W00-1306.pdf","title":"Sample Selection for Statistical Grammar Induction","abstract":"Corpus-based grammar induction relies on using many hand-parsed sentences as training examples. However, the construction of a training corpus with detailed syntactic analysis for every sentence is a labor-intensive task. We propose to use sample selection methods to minimize the amount of annotation needed in the training data, thereby reducing the workload of the human annotators. This paper shows that the amount of annotated training data can be reduced by 36% without degrading the quality of the induced grammars. * This material is based upon work supported by the National Science Foundation under Grant No. IRI 9712068. We thank Wheeler Ruml for his plotting tool; and Stuart Shieber, Lillian Lee, Ric Crabbe, and the anonymous reviewers for their comments on the paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cercel-etal-2017-oiqa","url":"https:\/\/doi.org\/10.26615\/978-954-452-038-0_002.pdf","title":"oIQa: An Opinion Influence Oriented Question Answering Framework with Applications to Marketing Domain","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stajner-etal-2017-sentence","url":"https:\/\/aclanthology.org\/P17-2016.pdf","title":"Sentence Alignment Methods for Improving Text Simplification Systems","abstract":"We provide several methods for sentence-alignment of texts with different complexity levels. Using the best of them, we sentence-align the Newsela corpora, thus providing large training materials for automatic text simplification (ATS) systems.
We show that using this dataset, even the standard phrase-based statistical machine translation models for ATS can outperform the state-of-the-art ATS systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the SFB 884 on the Political Economy of Reforms at the University of Mannheim (project C4), funded by the German Research Foundation (DFG), and also by the SomEMBED TIN2015-71147-C2-1-P MINECO research project.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weller-etal-2014-using","url":"https:\/\/aclanthology.org\/2014.amta-researchers.21.pdf","title":"Using noun class information to model selectional preferences for translating prepositions in SMT","abstract":"Translating prepositions is a difficult and under-studied problem in SMT. We present a novel method to improve the translation of prepositions by using noun classes to model their selectional preferences. We compare three variants of noun class information: (i) classes induced from the lexical resource GermaNet or obtained from clusterings based on either (ii) window information or (iii) syntactic features. Furthermore, we experiment with PP rule generalization. While we do not significantly improve over the baseline, our results demonstrate that (i) integrating selectional preferences as rigid class annotation in the parse tree is sub-optimal, and that (ii) clusterings based on window co-occurrence are more robust than syntax-based clusters or GermaNet classes for the task of modeling selectional preferences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the DFG Research Projects Distributional Approaches to Semantic Relatedness and Models of Morphosyntax for Statistical Machine Translation -Phase 2 and the DFG Heisenberg Fellowship SCHU-2580\/1-1. We would like to thank several anonymous reviewers for their helpful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"racz-etal-2014-rules","url":"https:\/\/aclanthology.org\/W14-2807.pdf","title":"Rules, Analogy, and Social Factors Codetermine Past-tense Formation Patterns in English","abstract":"We investigate past-tense formation preferences for five irregular English verb classes. We gathered data on a large scale using a nonce probe study implemented on Amazon Mechanical Turk. We compare a Minimal Generalization Learner (which infers stochastic rules) with a Generalized Context Model (which evaluates new items via analogy with existing items) as models of participant choices. Overall, the GCM is a better predictor, but the MGL provides some additional predictive power. Because variation across speakers is greater than variation across items, we also explore individual-level factors as predictors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project was made possible through a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. Hay and Beckner were also supported by a Rutherford Discovery Fellowship awarded to Hay.
The authors would like to thank Adam Albright, Patrick LaShell, Chun Liang Chan, and Lisa Garnard Dawdy-Hesterberg. All faults remain ours.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zou-etal-2007-emotional","url":"https:\/\/aclanthology.org\/O07-3006.pdf","title":"Emotional Recognition Using a Compensation Transformation in Speech Signal","abstract":"An effective method based on GMM is proposed in this paper for speech emotional recognition; a compensation transformation is introduced in the recognition stage to reduce the influence of variations in speech characteristics and noise. The extraction of emotional features includes the global feature, time series structure feature, LPCC, MFCC and PLP. Five human emotions (happiness, anger, surprise, sadness and neutral) are investigated. The result shows that it can increase the recognition ratio more than normal GMM; the method in this paper is effective and robust.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nn-1993-development","url":"https:\/\/aclanthology.org\/1993.tc-1.11.pdf","title":"Development of management package for translators in translation management Foreign languages in WordPerfect","abstract":"This paper examines some of the problems of day-to-day management and control of work passing through a busy translation office, aspects which are common to both translation companies and internal translation departments, such as maintenance of \"client\" and \"supplier\" databases, production of printed papers, statistics and control of \"work in progress\". It then passes on to consider some of the solutions available. One specific solution, developed by Peter Barber over a number of years in close collaboration with Bruce Carroll, a computer consultant, is the Electronic Translations Manager (ETM). ETM is a standalone or network computer program written by specialists for specialists, with the aim of minimising the repetition and routine drudgery of job and data handling. In addition to \"job\" management, ETM manages the \"client\" and \"supplier\" databases, and merges data from all areas to provide instant information on the current and historical status of work. These data, when suitably merged, also provide a wealth of statistical reports.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dogruoz-etal-2021-survey","url":"https:\/\/aclanthology.org\/2021.acl-long.131.pdf","title":"A Survey of Code-switching: Linguistic and Social Perspectives for Language Technologies","abstract":"The analysis of data in which multiple languages are represented has gained popularity among computational linguists in recent years. So far, much of this research focuses mainly on the improvement of computational methods and largely ignores linguistic and social aspects of C-S discussed across a wide range of languages within the long-established literature in linguistics.
To fill this gap, we offer a survey of code-switching (C-S) covering the literature in linguistics with a reflection on the key issues in language technologies. From the linguistic perspective, we provide an overview of structural and functional patterns of C-S focusing on the literature from European and Indian contexts as highly multilingual areas. From the language technologies perspective, we discuss how massive language models fail to represent diverse C-S types due to lack of appropriate training data, lack of robust evaluation benchmarks for C-S (across multilingual situations and types of C-S) and lack of end-to-end systems that cover sociolinguistic aspects of C-S as well. Our survey will be a step towards an outcome of mutual benefit for computational scientists and linguists with a shared interest in multilingualism and C-S.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jacobs-hoste-2020-extracting","url":"https:\/\/aclanthology.org\/2020.fnp-1.36.pdf","title":"Extracting Fine-Grained Economic Events from Business News","abstract":"Based on a recently developed fine-grained event extraction dataset for the economic domain, we present a pilot study for supervised economic event extraction. We investigate how a state-of-the-art model for event extraction performs on the trigger and argument identification and classification. While F1-scores of above 50% are obtained on the task of trigger identification, we observe a large gap in performance compared to results on the benchmark ACE05 dataset. We show that single-token triggers do not provide sufficient discriminative information for a fine-grained event detection setup in a closed domain such as economics, since many classes have a large degree of lexico-semantic and contextual overlap.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Research Foundation Flanders (FWO) under a Ph.D fellowship grant for the SENTiVENT project. We would like to thank anonymous reviewers for their helpful suggestions, as well as David Wadden for publishing and maintaining their model source code and answering questions about the model.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rijhwani-preotiuc-pietro-2020-temporally","url":"https:\/\/aclanthology.org\/2020.acl-main.680.pdf","title":"Temporally-Informed Analysis of Named Entity Recognition","abstract":"Natural language processing models often have to make predictions on text data that evolves over time as a result of changes in language use or the information described in the text. However, evaluation results on existing data sets are seldom reported by taking the timestamp of the document into account. We analyze and propose methods that make better use of temporally-diverse training data, with a focus on the task of named entity recognition. To support these experiments, we introduce a novel data set of English tweets annotated with named entities.
We empirically demonstrate the effect of temporal drift on performance, and how the temporal information of documents can be used to obtain better models compared to those that disregard temporal information. Our analysis gives insights into why this information is useful, in the hope of informing potential avenues of improvement for named entity recognition as well as other NLP tasks under similar experimental setups.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Leslie Barrett, Liang-Kang Huang, Prabhanjan Kambadur, Mayank Kulkarni, Amanda Stent, Umut Topkara, Jing Wang, Chuck-Hou Yee and the other members of the Bloomberg AI group. They provided invaluable feedback on the experiments and the paper. We also thank the anonymous reviewers for their valuable suggestions. Shruti Rijhwani is supported by a Bloomberg Data Science Ph.D. Fellowship.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"daxenberger-gurevych-2013-automatically","url":"https:\/\/aclanthology.org\/D13-1055.pdf","title":"Automatically Classifying Edit Categories in Wikipedia Revisions","abstract":"In this paper, we analyze a novel set of features for the task of automatic edit category classification. Edit category classification assigns categories such as spelling error correction, paraphrase or vandalism to edits in a document. Our features are based on differences between two versions of a document including meta data, textual and language properties and markup. In a supervised machine learning experiment, we achieve a micro-averaged F1 score of .62 on a corpus of edits from the English Wikipedia. In this corpus, each edit has been multi-labeled according to a 21-category taxonomy. A model trained on the same data achieves state-of-the-art performance on the related task of fluency edit classification. We apply pattern mining to automatically labeled edits in the revision histories of different Wikipedia articles. Our results suggest that high-quality articles show a higher degree of homogeneity with respect to their collaboration patterns as compared to random articles.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I\/82806, and by the Hessian research excellence program \"Landes-Offensive zur Entwicklung Wissenschaftlich-\u00f6konomischer Exzellenz\" (LOEWE) as part of the research center \"Digital Humanities\". We thank the anonymous reviewers for their valuable feedback.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"feng-etal-2021-target","url":"https:\/\/aclanthology.org\/2021.naacl-main.145.pdf","title":"Target-specified Sequence Labeling with Multi-head Self-attention for Target-oriented Opinion Words Extraction","abstract":"Opinion target extraction and opinion term extraction are two fundamental tasks in Aspect Based Sentiment Analysis (ABSA). Many recent works on ABSA focus on Target-oriented Opinion Words (or Terms) Extraction (TOWE), which aims at extracting the corresponding opinion words for a given opinion target.
TOWE can be further applied to Aspect-Opinion Pair Extraction (AOPE), which aims at extracting aspects (i.e., opinion targets) and opinion terms in pairs. In this paper, we propose Target-Specified sequence labeling with Multi-head Self-Attention (TSMSA) for TOWE, in which any pre-trained language model with multi-head self-attention can be integrated conveniently. As a case study, we also develop a Multi-Task structure named MT-TSMSA for AOPE by combining our TSMSA with an aspect and opinion term extraction module. Experimental results indicate that TSMSA outperforms the benchmark methods on TOWE significantly; meanwhile, the performance of MT-TSMSA is similar to or even better than that of state-of-the-art AOPE baseline models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the reviewers for their constructive comments and suggestions on this study. This work has been supported by the National Natural Science Foundation of China (61972426) and Guangdong Basic and Applied Basic Research Foundation (2020A1515010536).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"v-hahn-vertan-2002-architectures","url":"https:\/\/aclanthology.org\/2002.eamt-1.8.pdf","title":"Architectures of ``toy'' systems for teaching machine translation","abstract":"This paper addresses the advantages of practical academic teaching of machine translation through implementations of \"toy\" systems. This is the result of experience from several semesters with different types of courses and different categories of students. In addition to describing two possible architectures for such educational toy systems, we will also discuss how to overcome misconceptions about MT and the evaluation of both the achieved systems and the learning success.","label_nlp4sg":1,"task":["teaching machine translation"],"method":["architectures"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sah-2013-development","url":"https:\/\/aclanthology.org\/Y13-1015.pdf","title":"The Development of Coherence in Narratives: Causal Relations","abstract":"This study explored Mandarin-speaking children's ability to maintain narrative coherence. Thirty Mandarin-speaking five-year-olds, 30 nine-year-olds and 30 adults participated. The narrative data were elicited using Frog, where are you? Narrative coherence was assessed in terms of causal networks. The results displayed children's development in achieving narrative coherence by establishing causal relations between narrative events. Results were considered in relation to capacities for working memory and theory of mind.
Narrators' differences in communicative competence and cognitive preferences were also discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nagata-1992-empirical","url":"https:\/\/aclanthology.org\/C92-1030.pdf","title":"An Empirical Study on Rule Granularity and Unification Interleaving Toward an Efficient Unification-Based Parsing System","abstract":"This paper describes an empirical study on the optimal granularity of the phrase structure rules and the optimal strategy for interleaving CFG parsing with unification in order to implement an efficient unification-based parsing system. We claim that using \"medium-grained\" CFG phrase structure rules, which balance the computational cost of CFG parsing and unification, is a cost-effective solution for making unification-based grammar both efficient and easy to maintain. We also claim that \"late unification\", which delays unification until a complete CFG parse is found, saves unnecessary copies of DAGs for irrelevant subparses and improves performance significantly. The effectiveness of these methods was proved in an extensive experiment. The results show that, on average, the proposed system parses 3.5 times faster than our previous one. The grammar and the parser described in this paper are fully implemented and used as the Japanese analysis module in SL-TRANS, the speech-to-speech translation system of ATR.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank Dr. Kurematsu and all the members of ATR Interpreting Telephony Research Labs. for their constant help and fruitful discussions.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bonial-palmer-2016-comprehensive","url":"https:\/\/aclanthology.org\/L16-1628.pdf","title":"Comprehensive and Consistent PropBank Light Verb Annotation","abstract":"Recent efforts have focused on expanding the annotation coverage of PropBank from verb relations to adjective and noun relations, as well as light verb constructions (e.g., make an offer, take a bath). While each new relation type has presented unique annotation challenges, ensuring consistent and comprehensive annotation of light verb constructions has proved particularly challenging, given that light verb constructions are semi-productive, difficult to define, and there are often borderline cases. This research describes the iterative process of developing PropBank annotation guidelines for light verb constructions, the current guidelines, and a comparison to related resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the National Science Foundation Grants NSF: 0910992 IIS:RI: Large: Language Processing, and the support of DARPA BOLT - HR0011-11-C-0145 and DEFT - FA-8750-13-2-0045 via a subcontract from LDC.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, DARPA or the US government.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rinaldi-etal-2008-dependency","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/728_paper.pdf","title":"Dependency-Based Relation Mining for Biomedical Literature","abstract":"We describe techniques for the automatic detection of relationships among domain entities (e.g. genes, proteins, diseases) mentioned in the biomedical literature. Our approach is based on the adaptive selection of candidate interaction sentences, which are then parsed using our own dependency parser. Specific syntax-based filters are used to limit the number of possible candidate interacting pairs. The approach has been implemented as a demonstrator over a corpus of 2000 richly annotated MedLine abstracts, and later tested by participation in a text mining competition. In both cases, the results obtained have proved the adequacy of the proposed approach to the task of interaction detection.","label_nlp4sg":1,"task":["Relation Mining"],"method":["dependency parser"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by the Swiss National Science Foundation (grant 100014-118396\/1).","year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2012-computational","url":"https:\/\/aclanthology.org\/W12-2501.pdf","title":"Computational Analysis of Referring Expressions in Narratives of Picture Books","abstract":"This paper discusses successes and failures of computational linguistics techniques in the study of how inter-event time intervals in a story affect the narrator's use of different types of referring expressions. The success story shows that a conditional frequency distribution analysis of proper nouns and pronouns yields results that are consistent with our previous results (based on manual coding) that the narrator's choice of referring expression depends on the amount of time that elapsed between events in a story. Unfortunately, the less successful story indicates that state-of-the-art coreference resolution systems fail to achieve high accuracy for this genre of discourse. Fine-grained analyses of these failures provide insight into the limitations of current coreference resolution systems, and ways of improving them.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hoang-anh-etal-2008-basic","url":"https:\/\/aclanthology.org\/I08-7019.pdf","title":"A Basic Framework to Build a Test Collection for the Vietnamese Text Catergorization","abstract":"The aim of this paper is to present a basic framework to build a test collection for Vietnamese text categorization.
The presented content includes our evaluations of some popular text categorization test collections, our research on the requirements, the proposed model and the techniques to build the BKTexts test collection for Vietnamese text categorization. The XML specification of both text and metadata of Vietnamese documents in the BKTexts is also presented. Our BKTexts test collection is built with the XML specification and currently has more than 17,100 Vietnamese text documents collected from e-newspapers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meng-etal-2018-automatic","url":"https:\/\/aclanthology.org\/L18-1639.pdf","title":"Automatic Labeling of Problem-Solving Dialogues for Computational Microgenetic Learning Analytics","abstract":"This paper presents a recurrent neural network model to automate the analysis of students' computational thinking in problem-solving dialogue. We have collected and annotated dialogue transcripts from middle school students solving a robotics challenge, and each dialogue turn is assigned a code. We use sentence embeddings and speaker identities as features, and experiment with linear chain CRFs and RNNs with a CRF layer (LSTM-CRF). Both the linear chain CRF model and the LSTM-CRF model outperform the na\u00efve baselines by a large margin, and of the two, LSTM-CRF has the edge. To our knowledge, this is the first study on dialogue segment annotation using neural network models. This study is also a stepping-stone to automating the microgenetic analysis of cognitive interactions between students.","label_nlp4sg":1,"task":["Automatic Labeling"],"method":["recurrent neural network","sentence embeddings","RNNs","LSTM","CRF"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pouran-ben-veyseh-2016-cross","url":"https:\/\/aclanthology.org\/W16-1403.pdf","title":"Cross-Lingual Question Answering Using Common Semantic Space","abstract":"With the advent of the Big Data concept, a lot of attention has been paid to structuring this data and giving it semantics. Knowledge bases like DBPedia play an important role in achieving this goal. Question answering systems are a common approach to addressing the expressivity and usability of information extraction from knowledge bases. Recent research has focused only on monolingual QA systems, while the cross-lingual setting still faces many barriers. In this paper we introduce a new cross-lingual approach using a unified semantic space among languages. After keyword extraction, entity linking and answer type detection, we use cross-lingual semantic similarity to extract the answer from the knowledge base via relation selection and type matching. We have evaluated our approach on Persian and Spanish, which are typologically different languages. Our experiments are on DBPedia.
The results are promising for both languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ward-levow-2021-prosody","url":"https:\/\/aclanthology.org\/2021.acl-tutorials.5.pdf","title":"Prosody: Models, Methods, and Applications","abstract":"Prosody is essential in human interaction, enabling people to show interest, establish rapport, efficiently convey nuances of attitude or intent, and so on.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"geng-etal-2021-dependency","url":"https:\/\/aclanthology.org\/2021.paclic-1.18.pdf","title":"Dependency Enhanced Contextual Representations for Japanese Temporal Relation Classification","abstract":"Recently, quite a few studies have made progress on temporal relation extraction, an important task used in several natural language processing applications. However, less attention has been paid to corpora of Asian languages. In this work, we explored the feasibility of applying neural networks to temporal relation identification in non-English corpora, especially the Japanese corpus BCCWJ-TimeBank. We explored the strength of combining contextual word representations (CWR) such as BERT (Devlin et al., 2019) and shortest dependency paths (SDP) for Japanese temporal relation extraction. We carefully designed a set of experiments to gradually reveal the improvements contributed by CWR and SDP. The empirical results suggested the following conclusions: 1) SDP offers richer information than using only source and target mentions. 2) CWR significantly outperforms fastText. 3) In most cases, the model applying CWR + SDP + fine-tuning achieves the best performance overall.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JSPS KAKENHI Grant Numbers 18H05521 and 21H00308.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pershina-etal-2015-personalized","url":"https:\/\/aclanthology.org\/N15-1026.pdf","title":"Personalized Page Rank for Named Entity Disambiguation","abstract":"The task of Named Entity Disambiguation is to map entity mentions in the document to their correct entries in some knowledge base. We present a novel graph-based disambiguation approach based on Personalized PageRank (PPR) that combines local and global evidence for disambiguation and effectively filters out noise introduced by incorrect candidates.
Experiments show that our method outperforms state-of-the-art approaches by achieving 91.7% in micro- and 89.9% in macro-accuracy on a dataset of 27.8K named entity mentions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tanaka-etal-2014-linguistic","url":"https:\/\/aclanthology.org\/W14-3211.pdf","title":"Linguistic and Acoustic Features for Automatic Identification of Autism Spectrum Disorders in Children's Narrative","abstract":"Autism spectrum disorders are developmental disorders characterised by deficits in social and communication skills, and they affect both verbal and non-verbal communication. Previous work has measured differences between children with and without autism spectrum disorders in terms of linguistic and acoustic features, but has not addressed automatic identification based on an integration of these features. In this paper, we perform an exploratory study of several language and speech features of both single utterances and full narratives. We find that there are characteristic differences between children with autism spectrum disorders and typical development with respect to word categories, prosody, and voice quality, and that these differences can be used in automatic classifiers. We also examine the differences between American and Japanese children and find significant differences with regards to pauses before new turns and linguistic cues.","label_nlp4sg":1,"task":["Automatic Identification of Autism"],"method":["Linguistic and Acoustic Features","exploratory study"],"goal1":"Reduced Inequalities","goal2":"Good Health and Well-Being","goal3":null,"acknowledgments":"We would like to thank the participants, children and their parents, in this study. We also thank Dr. Hidemi Iwasaka for his advice and support as a clinician in pediatrics. A part of this study was conducted in the Signal Analysis and Interpretation Laboratory (SAIL), University of Southern California. This study is supported by JSPS KAKEN 24240032.","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"matsuyama-etal-2013-four","url":"https:\/\/aclanthology.org\/W13-4043.pdf","title":"A Four-Participant Group Facilitation Framework for Conversational Robots","abstract":"In this paper, we propose a framework for conversational robots that facilitates four-participant groups. In three-participant conversations, the minimum unit for multiparty conversations, social imbalance, in which a participant is left behind in the current conversation, sometimes occurs. In such scenarios, a conversational robot has the potential to facilitate situations as the fourth participant. Consequently, we present model procedures for obtaining conversational initiatives in incremental steps to engage such four-participant conversations. During the procedures, a facilitator must be aware of both the presence of dominant participants leading the current conversation and the status of any participant that is left behind. We model and optimize these situations and procedures as a partially observable Markov decision process.
The results of experiments conducted to evaluate the proposed procedures show evidence of their acceptability and of a feeling of groupness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Grant-in-Aid for scientific research WAKATE-B (23700239). TOSHIBA corporation provided the speech synthesizer customized for our spoken dialogue system.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-wu-2012-parsing","url":"https:\/\/aclanthology.org\/W12-6331.pdf","title":"Parsing TCT with Split Conjunction Categories","abstract":"We demonstrate that an unlexicalized PCFG with refined conjunction categories can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar and reflect idiosyncratic grammatical properties of Chinese. Indeed, its performance is the best single-model result in the 3rd Chinese Parsing Evaluation. This result shows that refining function words to represent Chinese subcategorization frames is a good method. An unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2019-neural","url":"https:\/\/aclanthology.org\/D19-1068.pdf","title":"Neural Cross-Lingual Event Detection with Minimal Parallel Resources","abstract":"The scarcity of annotated data poses a great challenge for event detection (ED). Cross-lingual ED aims to tackle this challenge by transferring knowledge between different languages to boost performance. However, previous cross-lingual methods for ED demonstrated a heavy dependency on parallel resources, which might limit their applicability. In this paper, we propose a new method for cross-lingual ED, demonstrating a minimal dependency on parallel resources. Specifically, to construct a lexical mapping between different languages, we devise a context-dependent translation method; to address the word order difference problem, we propose a shared syntactic order event detector for multilingual co-training. The effectiveness of our method is studied through extensive experiments on two standard datasets. Empirical results indicate that our method is effective in 1) performing cross-lingual transfer concerning different directions and 2) tackling the extremely annotation-poor scenario.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Key R&D Program of China under Grant 2018YFB1005100, the National Natural Science Foundation of China (No.61533018), the National Natural Science Foundation of China (No.61806201) and the independent research project of National Laboratory of Pattern Recognition. This work is also supported by a grant from Ant Financial Services Group and the CCF-Tencent Open Research Fund.
We would like to thank the anonymous reviewers for their valuable feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"haruechaiyasak-kongthon-2013-lextoplus","url":"https:\/\/aclanthology.org\/W13-4702.pdf","title":"LexToPlus: A Thai Lexeme Tokenization and Normalization Tool","abstract":"The increasing popularity of social media has a large impact on the evolution of language usage. The evolution includes the transformation of some existing terms to enhance the expression of the writer's emotion and feeling. Text processing tasks on social media texts have become much more challenging. In this paper, we propose LexToPlus, a Thai lexeme tokenizer with a term normalization process. LexToPlus is designed to handle the intentional errors caused by repeated characters at the end of words. LexToPlus is a dictionary-based parser which detects existing terms in a dictionary. Unknown tokens with repeated characters are merged and removed. We performed statistical analysis and evaluated the performance of the proposed approach by using a Twitter corpus. The experimental results show that the proposed algorithm yields an accuracy of 96.3% on a test data set. The errors are mostly caused by the out-of-vocabulary problem, which can be solved by adding newly found terms into the dictionary.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"collard-2018-finite","url":"https:\/\/aclanthology.org\/W18-4106.pdf","title":"Finite State Reasoning for Presupposition Satisfaction","abstract":"Sentences with presuppositions are often treated as uninterpretable or unvalued (neither true nor false) if their presuppositions are not satisfied. However, there is an open question as to how this satisfaction is calculated. In some cases, determining whether a presupposition is satisfied is not a trivial task (or even a decidable one), yet native speakers are able to quickly and confidently identify instances of presupposition failure. I propose that this can be accounted for with a form of possible world semantics that encapsulates some reasoning abilities, but is limited in its computational power, thus circumventing the need to solve computationally difficult problems. This can be modeled using a variant of the framework of finite state semantics proposed by Rooth (2017). A few modifications to this system are necessary, including its extension into a three-valued logic to account for presupposition. Within this framework, the logic necessary to calculate presupposition satisfaction is readily available, but there is no risk of needing exceptional computational power.
This correctly predicts that certain presuppositions will not be calculated intuitively, while others can be easily evaluated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to the LCCM reviewers, Mats Rooth, Joseph Halpern, and John Foster for their comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-soo-1989-parsing","url":"https:\/\/aclanthology.org\/O89-1008.pdf","title":"Parsing English Conjunctions And Comparatives Using The Wait-And-See Strategy","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-1998-pat-trees","url":"https:\/\/aclanthology.org\/P98-1038.pdf","title":"PAT-Trees with the Deletion Function as the Learning Device for Linguistic Patterns","abstract":"In this study, a learning device based on the PAT-tree data structure was developed. The original PAT-trees were enhanced with a deletion function to emulate human learning competence. The learning process worked as follows. The linguistic patterns from the text corpus are inserted into the PAT-tree one by one. Since the memory was limited, the important and new patterns would, ideally, be retained in the PAT-tree and the old and unimportant patterns would be released from the tree automatically. The proposed PAT-trees with the deletion function have the following advantages. 1) They are easy to construct and maintain. 2) Any prefix substring and its frequency count can be searched very quickly through the PAT-tree. 3) The space requirement for a PAT-tree is linear with respect to the size of the input text. 4) The insertion of a new element can be carried out at any time without being blocked by memory constraints, because free space is released through the deletion of unimportant elements. Experiments on learning high frequency bigrams were carried out under different memory size constraints. High recall rates were achieved. The results show that the proposed PAT-trees can be used as on-line learning devices.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qader-etal-2019-semi","url":"https:\/\/aclanthology.org\/W19-8669.pdf","title":"Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models","abstract":"In Natural Language Generation (NLG), End-to-End (E2E) systems trained through deep learning have recently gained strong interest. Such deep models need a large amount of carefully annotated data to reach satisfactory performance. However, acquiring such datasets for every new NLG application is a tedious and time-consuming task. In this paper, we propose a semi-supervised deep learning scheme that can learn from non-annotated data and annotated data when available. It uses NLG and Natural Language Understanding (NLU) sequence-to-sequence models which are learned jointly to compensate for the lack of annotation.
Experiments on two benchmark datasets show that, with a limited amount of annotated data, the method can achieve very competitive results while not using any preprocessing or re-scoring tricks. These findings open the way to the exploitation of non-annotated datasets, which is the current bottleneck for extending E2E NLG system development to new applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project was partly funded by the IDEX Universit\u00e9 Grenoble Alpes innovation grant (AI4I-2018-2019) and the R\u00e9gion Auvergne-Rh\u00f4ne-Alpes (AISUA-2018-2019).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2013-bilingually","url":"https:\/\/aclanthology.org\/P13-1105.pdf","title":"Bilingually-Guided Monolingual Dependency Grammar Induction","abstract":"This paper describes a novel strategy for automatic induction of a monolingual dependency grammar under the guidance of bilingually-projected dependency. By moderately leveraging the dependency information projected from the parsed counterpart language, and simultaneously mining the underlying syntactic structure of the language considered, it effectively integrates the advantages of bilingual projection and unsupervised induction, so as to induce a monolingual grammar much better than previous models only using bilingual projection or unsupervised induction. We induced dependency grammars for five different languages under the guidance of dependency information projected from the parsed English translation; experiments show that the bilingually-guided method achieves a significant improvement of 28.5% over the unsupervised baseline and 3.0% over the best projection baseline on average.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors were supported by the National Natural Science Foundation of China. We would like to thank the anonymous reviewers for their insightful comments and those who helped to modify the paper.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"galitsky-2016-tool","url":"https:\/\/aclanthology.org\/C16-2042.pdf","title":"A Tool for Efficient Content Compilation","abstract":"We build a tool to assist in content creation by mining the web for information relevant to a given topic. This tool imitates the process of essay writing by humans: searching for topics on the web, selecting content fragments from the found documents, and then compiling these fragments to obtain a coherent text. The process of writing starts with automated building of a table of contents by obtaining the list of key entities for the given topic extracted from web resources such as Wikipedia. Once a table of contents is formed, each item forms a seed for web mining. The tool builds a full-featured structured Word document with a table of contents, section structure, images and captions, and web references for all included text fragments. Two linguistic technologies are employed: for relevance verification, we use similarity computed as a tree similarity between the parse trees for a seed and a candidate text fragment.
For text coherence, we use a measure of agreement between a given paragraph and the consecutive one, obtained by tree kernel learning over their discourse trees.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kleve-1981-pattern","url":"https:\/\/aclanthology.org\/W81-0123.pdf","title":"``Pattern Recognition'' i papyrusforskning (Pattern Recognition in papyrus research) [In Norwegian]","abstract":"The problem is to reconstruct letters of which only traces of ink remain on the papyrus surface. Reconstruction by means of photographic methods has proved useless: where there is no ink, the writing has vanished completely, without leaving any trace. Reconstruction by means of computer-based (EDB) methods remains to be explored.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"catizone-etal-2012-lie","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/215_Paper.pdf","title":"LIE: Leadership, Influence and Expertise","abstract":"This paper describes our research into methods for inferring social and instrumental roles and relationships from document and discourse corpora. The goal is to identify the roles of initial authors and participants in internet discussions with respect to leadership, influence and expertise. Web documents, forums and blogs provide data from which the relationships between these concepts are empirically derived and compared. Using techniques from Natural Language Processing (NLP), characterizations of authority and expertise are hypothesized and then tested to see whether they pick out the same or different participants as those chosen by techniques based on social network analysis (Huffaker 2010) for any given level of these qualities (i.e. leadership, expertise and influence). Our methods could be applied, in principle, to any domain topic, but this paper will describe an initial investigation into two subject areas where a range of differing opinions are available and which differ in the nature of their appeals to authority and truth: 'genetic engineering' and a 'Muslim Forum'. The available online corpora for these topics contain discussions from a variety of users with different levels of expertise, backgrounds and personalities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by a grant from the UK Government.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"majumder-etal-2022-achieving","url":"https:\/\/aclanthology.org\/2022.acl-long.224.pdf","title":"Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection","abstract":"A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge.
One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge augmentation promotes success in achieving conversational goals in both experimental settings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank anonymous reviewers for providing valuable feedback. BPM is partly supported by a Qualcomm Innovation Fellowship, a Friends of the International Center Fellowship-UC San Diego, NSF Award #1750063, and MeetElise.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sokolov-etal-2016-learning","url":"https:\/\/aclanthology.org\/P16-1152.pdf","title":"Learning Structured Predictors from Bandit Feedback for Interactive NLP","abstract":"Structured prediction from bandit feedback describes a learning scenario where instead of having access to a gold standard structure, a learner only receives partial feedback in form of the loss value of a predicted structure. We present new learning objectives and algorithms for this interactive scenario, focusing on convergence speed and ease of elicitability of feedback. We present supervised-to-bandit simulation experiments for several NLP tasks (machine translation, sequence labeling, text classification), showing that bandit learning from relative preferences eases feedback strength and yields improved empirical convergence.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the German research foundation (DFG), and in part by a research cooperation grant with the Amazon Development Center Germany.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-etal-2019-wtmed","url":"https:\/\/aclanthology.org\/W19-5044.pdf","title":"WTMED at MEDIQA 2019: A Hybrid Approach to Biomedical Natural Language Inference","abstract":"Natural language inference (NLI) is challenging, especially when it is applied to technical domains such as biomedical settings. In this paper, we propose a hybrid approach to biomedical NLI where different types of information are exploited for this task. Our base model includes a pre-trained text encoder as the core component, and a syntax encoder and a feature encoder to capture syntactic and domain-specific information. Then we combine the output of different base models to form more powerful ensemble models. Finally, we design two conflict resolution strategies for cases where the test data contain multiple (premise, hypothesis) pairs with the same premise.
We train our models on the MedNLI dataset, yielding the best performance on the test set of MEDIQA 2019 Task 1.","label_nlp4sg":1,"task":["Natural Language Inference"],"method":["text encoder","syntax encoder","feature encoder"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rush-etal-2013-optimal","url":"https:\/\/aclanthology.org\/D13-1022.pdf","title":"Optimal Beam Search for Machine Translation","abstract":"Beam search is a fast and empirically effective method for translation decoding, but it lacks formal guarantees about search error. We develop a new decoding algorithm that combines the speed of beam search with the optimal certificate property of Lagrangian relaxation, and apply it to phrase- and syntax-based translation decoding. The new method is efficient, utilizes standard MT algorithms, and returns an exact solution on the majority of translation examples in our test data. The algorithm is 3.5 times faster than an optimized incremental constraint-based decoder for phrase-based translation and 4 times faster for syntax-based translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"castillo-etal-2004-talp","url":"https:\/\/aclanthology.org\/W04-0823.pdf","title":"The TALP systems for disambiguating WordNet glosses","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bestgen-2019-tintin","url":"https:\/\/aclanthology.org\/S19-2186.pdf","title":"Tintin at SemEval-2019 Task 4: Detecting Hyperpartisan News Article with only Simple Tokens","abstract":"Tintin, the system proposed by the CECL for the Hyperpartisan News Detection task of SemEval 2019, is exclusively based on the tokens that make up the documents and a standard supervised learning procedure. It obtained very contrasting results: poor on the main task, but much more effective at distinguishing documents published by hyperpartisan media outlets from unbiased ones, as it ranked first. An analysis of the most important features highlighted the positive aspects, but also some potential limitations of the approach.","label_nlp4sg":1,"task":["Hyperpartisan News Detection"],"method":["supervised learning"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The author is a Research Associate of the Fonds de la Recherche Scientifique - FNRS.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"fandrych-etal-2016-user","url":"https:\/\/aclanthology.org\/L16-1043.pdf","title":"User, who art thou? User Profiling for Oral Corpus Platforms","abstract":"This contribution presents the background, design and results of a study of users of three oral corpus platforms in Germany.
Roughly 5,000 registered users of the Database for Spoken German (DGD), the GeWiss corpus and the corpora of the Hamburg Centre for Language Corpora (HZSK) were asked to participate in a user survey. This quantitative approach was complemented by qualitative interviews with selected users. We briefly introduce the corpus resources involved in the study in Section 2. Section 3 describes the methods employed in the user studies. Section 4 summarizes results of the studies focusing on selected key topics. Section 5 attempts a generalization of these results to larger contexts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"choukri-etal-2016-elra","url":"https:\/\/aclanthology.org\/L16-1074.pdf","title":"ELRA Activities and Services","abstract":"After celebrating its 20th anniversary in 2015, ELRA is carrying on its strong involvement in the HLT field. To share ELRA's expertise from the past 21 years, this article begins with a presentation of ELRA's strategic Data and LR Management Plan for wide use by the language communities. Then, we further report on ELRA's activities and services provided since LREC 2014. When looking at the cataloguing and licensing activities, we can see that ELRA has been active in moving the Meta-Share repository toward new development steps, supporting Europe in obtaining accurate LRs within the Connecting Europe Facility programme, promoting the use of LR citation, and creating the ELRA License Wizard web portal. The article further elaborates on the recent LR production activities of various written, speech and video resources, commissioned by public and private customers. In parallel, ELDA has also worked on several EU-funded projects centred on strategic issues related to the European Digital Single Market. The last part gives an overview of the latest dissemination activities, with a special focus on the celebration of its 20th anniversary organised in Dubrovnik (Croatia) and the follow-up of LREC, as well as the launch of the new ELRA portal.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"clark-etal-2019-bert","url":"https:\/\/aclanthology.org\/W19-4828.pdf","title":"What Does BERT Look at? An Analysis of BERT's Attention","abstract":"Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference.
For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their thoughtful comments and suggestions. Kevin is supported by a Google PhD Fellowship.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fjeldvig-1981-utvikling","url":"https:\/\/aclanthology.org\/W81-0122.pdf","title":"Utvikling av enkle metoder for tekskts\\oking med s\\okeargumenter i naturlig spr\\aak (Development of simple methods for text search with search arguments in natural language) [In Norwegian]","abstract":"b) Some search arguments are best formulated in natural language, e.g. when a lawyer in a given case is presented with a judgment and wants to check whether there are other judgments that concern the same question. In such a case, the search argument could consist of, for example, a summary of the judgment combined with \"FINN DOKUMENTER SOM LIGNER\" (\"FIND SIMILAR DOCUMENTS\"). An inexperienced user can, however, also use today's text retrieval systems without too much guidance, but then only at the very simplest level. Using the system effectively and achieving good results requires long experience and good knowledge of how to exploit the system's finer features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gaanoun-benelallam-2021-sarcasm","url":"https:\/\/aclanthology.org\/2021.wanlp-1.45.pdf","title":"Sarcasm and Sentiment Detection in Arabic language A Hybrid Approach Combining Embeddings and Rule-based Features","abstract":"This paper presents the ArabicProcessors team's system designed for the sarcasm (subtask 1) and sentiment (subtask 2) detection shared task. We created a hybrid system by combining rule-based features and both static and dynamic embeddings using transformers and deep learning. The system's architecture is an ensemble of Gaussian Naive Bayes, MarBERT and Mazajak embedding. This system achieved an F1-sarcastic score of 51% on sarcasm and an F1-PN of 71% on sentiment detection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"calzolari-etal-2012-lre","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/769_Paper.pdf","title":"The LRE Map. Harmonising Community Descriptions of Resources","abstract":"Accurate and reliable documentation of Language Resources is an indisputable need: documentation is the gateway to discovery of Language Resources, a necessary step towards promoting the data economy.
Language resources that are not documented virtually do not exist: for this reason every initiative able to collect and harmonise metadata about resources represents a valuable opportunity for the NLP community. In this paper we describe the LRE Map, reporting statistics on resources associated with LREC2012 papers and providing comparisons with LREC2010 data. The LRE Map, jointly launched by FLaReNet and ELRA in conjunction with the LREC 2010 conference, is an instrument for enhancing the availability of information about resources, either new or already existing ones, reinforcing and facilitating the use of standards in the community. The LRE Map web interface provides the possibility of searching according to a fixed set of metadata and of viewing the details of extracted resources. The LRE Map is continuing to collect bottom-up input about resources from authors of other conferences through a standard submission process. This will help broaden the notion of \"language resources\" and attract to the field neighboring disciplines that so far have been only marginally involved with the standard notion of language resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank all the LRE Map contributors that provided accurate descriptions of existing and newly created resources and tools. Without their contribution the picture of resource usage would be much poorer. We thank the META-NET project (FP7-ICT-4 249119: T4ME-NET) for supporting this work. The LRE Map started as an initiative within FLaReNet - Fostering Language Resources Network.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"max-2009-sub","url":"https:\/\/aclanthology.org\/W09-2503.pdf","title":"Sub-sentencial Paraphrasing by Contextual Pivot Translation","abstract":"The ability to generate or to recognize paraphrases is key to the vast majority of NLP applications. As correctly exploiting context during translation has been shown to be successful, using context information for paraphrasing could also lead to improved performance. In this article, we adopt the pivot approach based on parallel multilingual corpora proposed by (Bannard and Callison-Burch, 2005), which finds short paraphrases by finding appropriate pivot phrases in one or several auxiliary languages and back-translating these pivot phrases into the original language. We show how context can be exploited both when attempting to find pivot phrases, and when looking for the most appropriate paraphrase in the original sub-sentential \"envelope\". This framework allows the use of paraphrasing units ranging from words to large sub-sentential fragments for which context information from the sentence can be successfully exploited.
We report experiments on a text revision task, and show that in these experiments our contextual sub-sentential paraphrasing system outperforms a strong baseline system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by a grant from LIMSI.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kokkinakis-kokkinakis-1999-cascaded","url":"https:\/\/aclanthology.org\/E99-1035.pdf","title":"A Cascaded Finite-State Parser for Syntactic Analysis of Swedish","abstract":"This report describes the development of a parsing system for written Swedish and is focused on a grammar, the main component of the system, semi-automatically extracted from corpora. A cascaded, finite-state algorithm is applied to the grammar, in which the input contains coarse-grained semantic class information, and the output produced reflects not only the syntactic structure of the input, but grammatical functions as well. The grammar has been tested on a variety of random samples of different text genres, achieving precision and recall of 94.62% and 91.92% respectively, and an average crossing rate of 0.04, when evaluated against manually disambiguated, annotated texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tomar-etal-2017-neural","url":"https:\/\/aclanthology.org\/W17-4121.pdf","title":"Neural Paraphrase Identification of Questions with Noisy Pretraining","abstract":"We present a solution to the problem of paraphrase identification of questions. We focus on a recent dataset of question pairs annotated with binary paraphrase labels and show that a variant of the decomposable attention model (Parikh et al., 2016) results in accurate performance on this task, while being far simpler than many competing neural architectures. Furthermore, when the model is pretrained on a noisy dataset of automatically collected question paraphrases, it obtains the best reported performance on the dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"russo-etal-2012-improving","url":"https:\/\/aclanthology.org\/E12-3010.pdf","title":"Improving machine translation of null subjects in Italian and Spanish","abstract":"Null subjects are non-overtly expressed subject pronouns found in pro-drop languages such as Italian and Spanish. In this study we quantify and compare the occurrence of this phenomenon in these two languages. Next, we evaluate the translation of null subjects into French, a \"non-pro-drop\" language. We use the Europarl corpus to evaluate two MT systems on their performance regarding null subject translation: Its-2, a rule-based system developed at LATL, and a statistical system built using the Moses toolkit. Then we add a rule-based preprocessor and a statistical post-editor to the Its-2 translation pipeline.
A second evaluation of the improved Its-2 system shows an average increase of 15.46% in correct pro-drop translations for Italian-French and 12.80% for Spanish-French.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported in part by the Swiss National Science Foundation (grant No 100015-130634). ","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sahin-etal-2020-linspector","url":"https:\/\/aclanthology.org\/2020.cl-2.4.pdf","title":"LINSPECTOR: Multilingual Probing Tasks for Word Representations","abstract":"probing tasks such as case marking, possession, word length, morphological tag count, and pseudoword identification for 24 languages. We present a reusable methodology for creation and evaluation of such tests in a multilingual setting, which is challenging because of a lack of resources, lower quality of tools, and differences among languages. We then present experiments on several diverse multilingual word embedding models, in which we relate the probing task performance for a diverse set of languages to a range of five classic NLP tasks: POS-tagging, dependency parsing, semantic role labeling, named entity recognition, and natural language inference. We find that a number of probing tests have significantly high positive correlation to the downstream tasks, especially for morphologically rich languages. We show that our tests can be used to explore word embeddings or black-box neural models for linguistic cues in a multilingual setting. We release the probing data sets and the evaluation suite LINSPECTOR with https:\/\/github.com\/UKPLab\/linspector.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We first thank the anonymous reviewers who helped us improve the paper. We would like to thank Marvin Kaster for his help on contextualizing the probing tasks and to Max Eichler for his contribution on acquiring experimental results for additional languages in Appendix C. We are sincerely","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pedersen-2016-duluth","url":"https:\/\/aclanthology.org\/S16-1207.pdf","title":"Duluth at SemEval 2016 Task 14: Extending Gloss Overlaps to Enrich Semantic Taxonomies","abstract":"This paper describes the Duluth systems that participated in Task 14 of SemEval 2016, Semantic Taxonomy Enrichment. There were three related systems in the formal evaluation which are discussed here, along with numerous post-evaluation runs. All of these systems identified synonyms between WordNet and other dictionaries by measuring the gloss overlaps between them.
These systems perform better than the random baseline and one post-evaluation variation was within a respectable margin of the median result attained by all participating systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2013-exploiting","url":"https:\/\/aclanthology.org\/D13-1172.pdf","title":"Exploiting Domain Knowledge in Aspect Extraction","abstract":"Aspect extraction is one of the key tasks in sentiment analysis. In recent years, statistical models have been used for the task. However, such models without any domain knowledge often produce aspects that are not interpretable in applications. To tackle the issue, some knowledge-based topic models have been proposed, which allow the user to input some prior domain knowledge to generate coherent aspects. However, existing knowledge-based topic models have several major shortcomings, e.g., little work has been done to incorporate the cannot-link type of knowledge or to automatically adjust the number of topics based on domain knowledge. This paper proposes a more advanced topic model, called MC-LDA (LDA with m-set and c-set), to address these problems, which is based on an Extended generalized P\u00f3lya urn (E-GPU) model (which is also proposed in this paper). Experiments on real-life product reviews from a variety of domains show that MC-LDA outperforms the existing state-of-the-art models markedly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by a grant from National Science Foundation (NSF) under grant no. IIS-1111092, and a grant from HP Labs Innovation Research Program.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"prost-etal-2019-debiasing","url":"https:\/\/aclanthology.org\/W19-3810.pdf","title":"Debiasing Embeddings for Reduced Gender Bias in Text Classification","abstract":"Bolukbasi et al. (2016) demonstrated that pretrained word embeddings can inherit gender bias from the data they were trained on. We investigate how this bias affects downstream classification tasks, using the case study of occupation classification (De-Arteaga et al., 2019). We show that traditional techniques for debiasing embeddings can actually worsen the bias of the downstream classifier by providing a less noisy channel for communicating gender information. With a relatively minor adjustment, however, we show how these same techniques can be used to simultaneously reduce bias and maintain high classification accuracy.","label_nlp4sg":1,"task":["occupation classification","reduce bias"],"method":["embeddings"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jain-etal-2020-temporal","url":"https:\/\/aclanthology.org\/2020.emnlp-main.305.pdf","title":"Temporal Knowledge Base Completion: New Algorithms and Evaluation Protocols","abstract":"Research on temporal knowledge bases, which associate a relational fact (s, r, o) with a validity time period (or time instant), is in its early days.
Our work considers predicting missing entities (link prediction) and missing time intervals (time prediction) as joint Temporal Knowledge Base Completion (TKBC) tasks, and presents TIMEPLEX, a novel TKBC method, in which entities, relations, and time are all embedded in a uniform, compatible space. TIMEPLEX exploits the recurrent nature of some facts\/events and temporal interactions between pairs of relations, yielding state-of-the-art results on both prediction tasks. We also find that existing TKBC models heavily overestimate link prediction performance due to imperfect evaluation mechanisms. In response, we propose improved TKBC evaluation protocols for both link and time prediction tasks, dealing with subtle issues that arise from the partial overlap of time intervals in gold instances and system predictions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partly supported by IBM AI Horizons Network grants. IIT Delhi authors are supported by an IBM SUR award, grants by Google, Bloomberg and 1MG, Jai Gupta Chair professorship and a Visvesvaraya faculty award by the Govt. of India. The fourth author is supported by a Jagadish Bose Fellowship. We thank IIT Delhi HPC facility for compute resources. We thank Sankalan, Vaibhav and Siddhant for their helpful comments on an early draft of the paper.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beck-2014-bayesian","url":"https:\/\/aclanthology.org\/P14-3001.pdf","title":"Bayesian Kernel Methods for Natural Language Processing","abstract":"Kernel methods are heavily used in Natural Language Processing (NLP). Frequentist approaches like Support Vector Machines are the state-of-the-art in many tasks. However, these approaches lack efficient procedures for model selection, which hinders the usage of more advanced kernels. In this work, we propose the use of a Bayesian approach for kernel methods, Gaussian Processes, which allow easy model fitting even for complex kernel combinations. Our goal is to employ this approach to improve results in a number of regression and classification tasks in NLP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by funding from CNPq\/Brazil (No. 237999\/2012-9) and from the EU FP7-ICT QTLaunchPad project (No. 296347). The author would also like to thank Yahoo for the financial support and the anonymous reviewers for their excellent comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ghosh-etal-2014-analyzing","url":"https:\/\/aclanthology.org\/W14-2106.pdf","title":"Analyzing Argumentative Discourse Units in Online Interactions","abstract":"Argument mining of online interactions is in its infancy. One reason is the lack of annotated corpora in this genre. To make progress, we need to develop a principled and scalable way of determining which portions of texts are argumentative and what is the nature of argumentation.
We propose a two-tiered approach to achieve this goal and report on several initial studies to assess its potential.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Part of this paper is based on work supported by the DARPA-DEFT program for the first two authors. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bates-1976-syntax","url":"https:\/\/aclanthology.org\/J76-2004.pdf","title":"Syntax in Automatic Speech Understanding","abstract":"This research was principally supported by the Advanced Research Projects Agency of the Department of Defense (ARPA Order No. 2904) and was monitored by ONR under Contract No. N00014-75-C-0533. Partial support of the author by NSF grant GS-39834 to Harvard University is gratefully acknowledged.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1976,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"v-roy-2008-acharya","url":"https:\/\/aclanthology.org\/I08-3016.pdf","title":"Acharya - A Text Editor and Framework for working with Indic Scripts","abstract":"This paper discusses an open source project which provides a framework for working with Indian language scripts using a uniform syllable based text encoding scheme. It also discusses the design and implementation of a multi-platform text editor for 9 Indian languages which was built based on this encoding scheme.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Prof. R.Kalyana Krishnan, Systems Development Lab, IIT Madras for guidance throughout this project, Mr. B.Ganesh ex-CTO of Technical Services Department of ETV for initiating this project and contributing to it, Mr. Anir-","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"antognini-faltings-2020-gamewikisum","url":"https:\/\/aclanthology.org\/2020.lrec-1.820.pdf","title":"GameWikiSum: a Novel Large Multi-Document Summarization Dataset","abstract":"Today's research progress in the field of multi-document summarization is obstructed by the small number of available datasets. Since the acquisition of reference summaries is costly, existing datasets contain only hundreds of samples at most, resulting in heavy reliance on hand-crafted features or necessitating additional, manually annotated data. The lack of large corpora therefore hinders the development of sophisticated models. Additionally, most publicly available multi-document summarization corpora are in the news domain, and no analogous dataset exists in the video game domain. In this paper, we propose GameWikiSum, a new domain-specific dataset for multi-document summarization, which is one hundred times larger than commonly used datasets, and in another domain than news. Input documents consist of long professional video game reviews as well as references of their gameplay sections in Wikipedia pages.
We analyze the proposed dataset and show that both abstractive and extractive models can be trained on it. We release GameWikiSum for further research: https:\/\/github.com\/Diego999\/GameWikiSum.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patra-etal-2020-scopeit","url":"https:\/\/aclanthology.org\/2020.coling-industry.20.pdf","title":"ScopeIt: Scoping Task Relevant Sentences in Documents","abstract":"A prominent problem faced by conversational agents working with large documents (e.g., email-based assistants) is the frequent presence of information in the document that is irrelevant to the assistant. This in turn makes it harder for the agent to accurately detect intents, extract entities relevant to those intents and perform the desired action. To address this issue, we present a neural model for scoping relevant information for the agent from a large document. We show that when used as the first step in a popularly used email-based assistant for helping users schedule meetings, our proposed model helps improve the performance of the intent detection and entity extraction tasks required by the agent for correctly scheduling meetings: across a suite of 6 downstream tasks, by using our proposed method, we observe an average gain of 35% in precision without any drop in recall. Additionally, we demonstrate that the same approach can be used for component level analysis in large documents, such as signature block identification. * Equal Contribution. We use Hedwig in lieu of the actual persona of the agent throughout this paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delannoy-1999-argumentation","url":"https:\/\/aclanthology.org\/W99-0303.pdf","title":"Argumentation Mark-Up: A Proposal","abstract":"This is a proposal for an XML markup of argumentation. The annotation can be used to help the reader (e.g. by means of selective highlighting or diagramming), and for further processing (summarization, critique, use in information retrieval). The article proposes a set of markers derived from manual corpus annotation, exemplifies their use, describes a way to assign them using surface cues and limited syntax for scoping, and suggests further directions, including an acquisition tool, the application of machine learning, and a collaborative DTD definition task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kaeshammer-westburg-2014-complex","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/390_Paper.pdf","title":"On Complex Word Alignment Configurations","abstract":"Resources of manual word alignments contain configurations that are beyond the alignment capacity of current translation models, hence the term complex alignment configuration.
They have been a matter of some debate in the machine translation community, as they call for more powerful translation models that come with further complications. In this work we investigate instances of complex alignment configurations in data sets of four different language pairs to shed more light on the nature and cause of those configurations. For the English-German alignments from Pad\u00f3 and Lapata (2006), for instance, we find that only a small fraction of the complex configurations are due to real annotation errors. While a third of the complex configurations in this data set could be simplified when annotating according to a different style guide, the remaining ones are phenomena that one would like to be able to generate during translation. Those instances are mainly caused by the different word order of English and German. Our findings thus motivate further research in the area of translation beyond phrase-based and context-free translation modeling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the German Research Foundation as part of the project Grammar Formalisms beyond Context-Free Grammars and their use for Machine Learning Tasks.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"satoh-1996-disambiguation","url":"https:\/\/aclanthology.org\/C96-2152.pdf","title":"Disambiguation by Prioritized Circumscription","abstract":"This paper presents a method of resolving ambiguity by using a variant of circumscription.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"1. We examine a feasibility of prioritized circumscription for specifying the most preferable reading by considering a disambiguation task in the concrete examples and show that we can represent the task quite naturally. 2. We discuss an implementation of disambiguation within an HCLP language by showing a correspondence between a priority over preference rules in prioritized circumscription and a constraint hierarchy in HCLP. As a future research, we need the following. 1. We would like to examine a computational complexity of disambiguation by HCLP. 2. It is better to learn preferences automatically instead of specifying preferences by user. One approach for learning is to build an interactive system such that the system shows to a user a set of possible readings for given sentences and the user gives an order over possible readings. Then, the system would be able to learn preferences by generalizing the order.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-piccardi-2021-improving","url":"https:\/\/aclanthology.org\/2021.paclic-1.32.pdf","title":"Improving Adversarial Text Generation with n-Gram Matching","abstract":"In the past few years, generative adversarial networks (GANs) have become increasingly important in natural language generation. However, their performance seems to still have a significant margin for improvement. For this reason, in this paper we propose a new adversarial training method that tackles some of the limitations of GAN training in unconditioned generation tasks.
In addition to the commonly used reward signal from the discriminator, our approach leverages another reward signal which is based on the occurrence of n-gram matches between the generated sentences and the training corpus. Thanks to the inherent correlation of this reward signal with the commonly used evaluation metrics such as BLEU, our approach implicitly bridges the gap between the objectives used during training and inference. To circumvent the non-differentiability issues associated with a discrete objective, our approach leverages the reinforcement learning policy gradient theorem. Our experimental results show that the model trained with mixed rewards from both n-gram matching and the discriminator has been able to outperform other GAN-based models in terms of BLEU score and quality-diversity trade-off at a parity of computational budget.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first author is funded by the China Scholarship Council (CSC) from the Ministry of Education of P. R. China.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barak-etal-2012-modeling","url":"https:\/\/aclanthology.org\/W12-1701.pdf","title":"Modeling the Acquisition of Mental State Verbs","abstract":"Children acquire mental state verbs (MSVs) much later than other, lower-frequency, words. One factor proposed to contribute to this delay is that children must learn various semantic and syntactic cues that draw attention to the difficult-to-observe mental content of a scene. We develop a novel computational approach that enables us to explore the role of such cues, and show that our model can replicate aspects of the developmental trajectory of MSV acquisition. 1 Researchers have noted that children use MSVs in fixed phrases, in a performative use or as a pragmatic marker, well before they use them to refer to actual mental content (e.g., Diessel and Tomasello, 2001; Shatz et al., 1983). Here by \"acquisition of MSVs\", we are specifically referring to children learning usages that genuinely refer to mental content.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kongkachandra-chamnongthai-2006-semantic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/798_pdf.pdf","title":"Semantic-Based Keyword Recovery Function for Keyword Extraction System","abstract":"The goal of implementing a keyword extraction system is to bring precision and recall as close as possible to 100%. These values are affected by the number of extracted keywords. Two groups of errors occur, i.e., false-rejected and false-accepted keywords. To improve the performance of the system, false-rejected keywords should be recovered and false-accepted keywords should be reduced. In this paper, we enhance conventional keyword extraction systems by attaching a keyword recovery function. This function recovers previously false-rejected keywords by comparing their semantic information with the contents of each relevant document. The function is automated in three processes, i.e., Domain Identification, Knowledge Base Generation and Keyword Determination.
The domain identification process identifies the domain of interest by searching the domain knowledge base using the extracted keywords. The most general domains are selected and then used subsequently. To recover the false-rejected keywords, we match them against keywords in the identified domain within the domain knowledge base, relying on their semantics, in the keyword determination process. To semantically recover keywords, the definitions of false-rejected keywords and the domain knowledge base are first represented as conceptual graphs by the knowledge base generation process. To evaluate the performance of the proposed function, EXTRACTOR, KEA and our keyword-database-mapping based keyword extractor are compared. The experiments were performed in two modes, i.e., training and recovering. In training mode, we use four glossaries from the Internet and 60 articles from the summary sections of IEICE transactions. In the recovering mode, 200 texts from three resources, i.e., the summary sections of 15 chapters in a computer textbook and articles from IEICE and ACM transactions, are used. The experimental results revealed that our proposed function improves the precision and recall rates of the conventional keyword extraction systems by approximately 3-5% in precision and 6-10% in recall, respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The paper is based upon work supported by the Thailand Research Fund under the grant No RMU4880007 of TRF Research Scholar. The authors also thank Connexor Co. Ltd for their free academic license of Machinese Syntax used in our experiments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gutierrez-etal-2016-literal","url":"https:\/\/aclanthology.org\/P16-1018.pdf","title":"Literal and Metaphorical Senses in Compositional Distributional Semantic Models","abstract":"Metaphorical expressions are pervasive in natural language and pose a substantial challenge for computational semantics. The inherent compositionality of metaphor makes it an important test case for compositional distributional semantic models (CDSMs). This paper is the first to investigate whether metaphorical composition warrants a distinct treatment in the CDSM framework. We propose a method to learn metaphors as linear transformations in a vector space and find that, across a variety of semantic domains, explicitly modeling metaphor improves the resulting semantic representations. We then use these representations in a metaphor identification task, achieving a high performance of 0.82 in terms of F-score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.
Ekaterina Shutova's research is supported by the Leverhulme Trust Early Career Fellowship.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2020-ji","url":"https:\/\/aclanthology.org\/2020.ccl-1.35.pdf","title":"\u57fa\u4e8e\u9605\u8bfb\u7406\u89e3\u6846\u67b6\u7684\u4e2d\u6587\u4e8b\u4ef6\u8bba\u5143\u62bd\u53d6(Chinese Event Argument Extraction using Reading Comprehension Framework)","abstract":"Traditional event argument extraction methods formulate this task as a multi-class classification or sequence labeling task over entity mentions in the sentence. In these methods, the categories of argument roles can only be described as vectors, while their prior information is ignored. In fact, the semantics of an argument role category is closely related to the argument itself. Therefore, this paper proposes to regard argument extraction as machine reading comprehension, with the argument role described as a natural language question, and arguments extracted by answering these questions based on the context. This method can make better use of the prior information existing in argument role categories, and its effectiveness is shown in experiments on the Chinese corpus of ACE 2005.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kawahara-kurohashi-2004-improving","url":"https:\/\/aclanthology.org\/C04-1050.pdf","title":"Improving Japanese Zero Pronoun Resolution by Global Word Sense Disambiguation","abstract":"This paper proposes unsupervised word sense disambiguation based on automatically constructed case frames and its incorporation into our zero pronoun resolution system. The word sense disambiguation is applied to verbs and nouns. We consider that case frames define verb senses and semantic features in a thesaurus define noun senses, respectively, and perform sense disambiguation by selecting them based on case analysis. In addition, according to the one sense per discourse heuristic, the word sense disambiguation results are cached and applied globally to the subsequent words. We integrated this global word sense disambiguation into our zero pronoun resolution system, and conducted experiments of zero pronoun resolution on two different domain corpora. Both of the experimental results indicated the effectiveness of our approach. * In this paper, <> means a semantic feature. \u2020 In this paper, we use 'verb' instead of 'verb, adjective and noun+copula' for simplicity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We delete a semantic feature that is not similar to the other semantic features of its case slot. To sum up, the procedure for the automatic case frame construction is as follows.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hlavacova-klimova-2004-derivational","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/326.pdf","title":"Derivational Relations in Flectional Languages - Czech Case","abstract":"When a text in any language is submitted to a morphological analysis, there always remain some unrecognized words.
We can lower their number by adding new words into the dictionary used by the morphological analyzer but we can never gather the whole of the language. The system described in this paper (we call it \"derivation module\") deals with the unknown derived words. It aims not only at analyzing but also at synthesizing Czech derived words. Such a system is of particular value for automatic processing of languages where derivational morphology plays an important role in regular word formation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"perez-rosas-etal-2014-multimodal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/869_Paper.pdf","title":"A Multimodal Dataset for Deception Detection","abstract":"This paper presents the construction of a multimodal dataset for deception detection, including physiological, thermal, and visual responses of human subjects under three deceptive scenarios. We present the experimental protocol, as well as the data acquisition process. To evaluate the usefulness of the dataset for the task of deception detection, we present a statistical analysis of the physiological and thermal modalities associated with the deceptive and truthful conditions. Initial results show that physiological and thermal responses can differentiate between deceptive and truthful states.","label_nlp4sg":1,"task":["Deception Detection"],"method":["Multimodal Dataset"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This material is based in part upon work supported by National Science Foundation award #1355633. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. ","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"huang-etal-2009-accurate","url":"https:\/\/aclanthology.org\/D09-1128.pdf","title":"Accurate Semantic Class Classifier for Coreference Resolution","abstract":"There have been considerable attempts to incorporate semantic knowledge into coreference resolution systems: different knowledge sources such as WordNet and Wikipedia have been used to boost the performance. In this paper, we propose new ways to extract WordNet feature. This feature, along with other features such as named entity feature, can be used to build an accurate semantic class (SC) classifier. In addition, we analyze the SC classification errors and propose to use relaxed SC agreement features. The proposed accurate SC classifier and the relaxation of SC agreement features on ACE2 coreference evaluation can boost our baseline system by 10.4% and 9.7% using MUC score and anaphor accuracy respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank Yannick Versley for his support with BART coreference resolution system and the three anonymous reviewers for their invaluable comments. 
This research was supported by British Telecom grant CT1080028046 and BISC Program of UC Berkeley.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2022-mpii","url":"https:\/\/aclanthology.org\/2022.acl-long.488.pdf","title":"MPII: Multi-Level Mutual Promotion for Inference and Interpretation","abstract":"In order to better understand the rationale behind model behavior, recent works have exploited providing interpretation to support the inference prediction. However, existing methods tend to provide human-unfriendly interpretation, and are prone to sub-optimal performance due to one-side promotion, i.e. either inference promotion with interpretation or vice versa. In this paper, we propose a multi-level Mutual Promotion mechanism for self-evolved Inference and sentence-level Interpretation (MPII). Specifically, from the model-level, we propose a Step-wise Integration Mechanism to jointly perform and deeply integrate inference and interpretation in an autoregressive manner. From the optimizationlevel, we propose an Adversarial Fidelity Regularization to improve the fidelity between inference and interpretation with the Adversarial Mutual Information training strategy. Extensive experiments on NLI and CQA tasks reveal that the proposed MPII approach can significantly outperform baseline models for both the inference performance and the interpretation quality. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to acknowledge Chuhan Wu for the helpful discussion. We also want to thank Jiale Xu for his kindness and help.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zanzotto-etal-2020-kermit","url":"https:\/\/aclanthology.org\/2020.emnlp-main.18.pdf","title":"KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations","abstract":"Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality in downstream tasks due to the success of large-scale textual representation learners. In this paper, we propose KERMIT (Kernelinspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference. We experimented with KERMIT paired with two state-of-the-art transformerbased universal sentence encoders (BERT and XLNet) and we showed that KERMIT can indeed boost their performance by effectively embedding human-coded universal syntactic representations in neural networks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pruthi-etal-2019-combating","url":"https:\/\/aclanthology.org\/P19-1561.pdf","title":"Combating Adversarial Misspellings with Robust Word Recognition","abstract":"To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. 
Our word recognition models build upon the RNN semi-character architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%. Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to Graham Neubig, Eduard Hovy, Paul Michel, Mansi Gupta, and Antonios Anastasopoulos for suggestions and feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"callaway-lester-2002-pronominalization","url":"https:\/\/aclanthology.org\/P02-1012.pdf","title":"Pronominalization in Generated Discourse and Dialogue","abstract":"Previous approaches to pronominalization have largely been theoretical rather than applied in nature. Frequently, such methods are based on Centering Theory, which deals with the resolution of anaphoric pronouns. But it is not clear that complex theoretical mechanisms, while having satisfying explanatory power, are necessary for the actual generation of pronouns. We first illustrate examples of pronouns from various domains, describe a simple method for generating pronouns in an implemented multi-page generation system, and present an evaluation of its performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Michael Young and Renate Henschel for their helpful comments; Kathy McCoy very quickly provided the original 3 NYT articles upon request; the anonymous reviewers whose comments greatly improved this paper. Support for this work was provided by ITC-irst and the IntelliMedia Initiative of North Carolina State University.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stylianou-vlahavas-2021-corelm","url":"https:\/\/aclanthology.org\/2021.crac-1.8.pdf","title":"CoreLM: Coreference-aware Language Model Fine-Tuning","abstract":"Language Models underpin all modern Natural Language Processing (NLP) tasks. The introduction of the Transformers architecture has contributed significantly to making Language Modeling very effective across many NLP tasks, leading to significant advancements in the field. However, Transformers come with a big computational cost, which grows quadratically with respect to the input length. This presents a challenge, as understanding long texts requires a lot of context. In this paper, we propose a Fine-Tuning framework, named CoreLM, that extends the architecture of current Pretrained Language Models so that they incorporate explicit entity information.
By introducing entity representations, we make available information outside the contextual space of the model, which results in a better Language Model for a fraction of the computational cost. We implement our approach using GPT2 and compare the fine-tuned model to the original. Our proposed model achieves a lower perplexity on the GUMBY and LAMBADA datasets when compared to GPT2 and a fine-tuned version of GPT2 without any changes. We also compare the models' performance in terms of Accuracy in LAMBADA and Children's Book Test, with and without the use of model-created coreference annotations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme \"Human Resources Development, Education and Lifelong Learning\" in the context of the project \"Strengthening Human Resources Research Potential via Doctorate Research\" (MIS-5000432), implemented by the State Scholarships Foundation (IKY).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nordhoff-hammarstrom-2012-glottolog","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/733_Paper.pdf","title":"Glottolog\/Langdoc: Increasing the visibility of grey literature for low-density languages","abstract":"Language resources can be divided into structural resources treating phonology, morphosyntax, semantics etc. and resources treating the social, demographic, ethnic, political context. A third type are meta-resources, like bibliographies, which provide access to the resources of the first two kinds. This poster will present the Glottolog\/Langdoc project, a comprehensive bibliography providing web access to 180k bibliographical records to (mainly) low-visibility resources from low-density languages. The resources are annotated for macro-area, content language, and document type and are available in XHTML and RDF.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bingel-haider-2014-named","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/967_Paper.pdf","title":"Named Entity Tagging a Very Large Unbalanced Corpus: Training and Evaluating NE Classifiers","abstract":"We describe a systematic and application-oriented approach to training and evaluating named entity recognition and classification (NERC) systems, the purpose of which is to identify an optimal system and to train an optimal model for named entity tagging of DEREKO, a very large general-purpose corpus of contemporary German (Kupietz et al., 2010). DEREKO's strong dispersion wrt. genre, register and time forces us to base our decision for a specific NERC system on an evaluation performed on a representative sample of DEREKO instead of performance figures that have been reported for the individual NERC systems when evaluated on more uniform and less diverse data. We create and manually annotate such a representative sample as evaluation data for three different NERC systems, for each of which various models are learnt on multiple training data.
The proposed sampling method can be viewed as a generally applicable method for sampling evaluation data from an unbalanced target corpus for any sort of natural language processing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhuang-zuccon-2021-dealing","url":"https:\/\/aclanthology.org\/2021.emnlp-main.225.pdf","title":"Dealing with Typos for BERT-based Passage Retrieval and Ranking","abstract":"Passage retrieval and ranking is a key task in open-domain question answering and information retrieval. Current effective approaches mostly rely on pre-trained deep language model-based retrievers and rankers. These methods have been shown to effectively model the semantic matching between queries and passages, also in presence of keyword mismatch, i.e. passages that are relevant to a query but do not contain important query keywords. In this paper we consider the Dense Retriever (DR), a passage retrieval method, and the BERT re-ranker, a popular passage re-ranking method. In this context, we formally investigate how these models respond and adapt to a specific type of keyword mismatch-that caused by keyword typos occurring in queries. Through empirical investigation, we find that typos can lead to a significant drop in retrieval and ranking effectiveness. We then propose a simple typos-aware training framework for DR and BERT re-ranker to address this issue. Our experimental results on the MS MARCO passage ranking dataset show that, with our proposed typos-aware training, DR and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Dr Guido Zuccon is the recipient of an Australian Research Council DECRA Research Fellowship (DE180101579). This research is partially funded by the Grain Research and Development Corporation project AgAsk (UOQ2003-009RTX).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gurevich-2006-finite","url":"https:\/\/aclanthology.org\/N06-2012.pdf","title":"A Finite-State Model of Georgian Verbal Morphology","abstract":"Georgian is a less commonly studied language with complex, non-concatenative verbal morphology. We present a computational model for generation and recognition of Georgian verb conjugations, relying on the analysis of Georgian verb structure as a word-level template. The model combines a set of finite-state transducers with a default inheritance mechanism. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jones-martin-1997-contextual","url":"https:\/\/aclanthology.org\/A97-1025.pdf","title":"Contextual Spelling Correction Using Latent Semantic Analysis","abstract":"Contextual spelling errors are defined as the use of an incorrect, though valid, word in a particular sentence or context. 
Traditional spelling checkers flag misspelled words, but they do not typically attempt to identify words that are used incorrectly in a sentence. We explore the use of Latent Semantic Analysis for correcting these incorrectly used words and the results are compared to earlier work based on a Bayesian classifier.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first author is supported under DARPA contract SOL BAA95-10. We gratefully acknowledge the comments and suggestions of Thomas Landauer and the anonymous reviewers.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"das-etal-2015-gaussian","url":"https:\/\/aclanthology.org\/P15-1077.pdf","title":"Gaussian LDA for Topic Models with Word Embeddings","abstract":"Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA's parameterization of \"topics\" as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis-Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers and Manaal Faruqui for helpful comments and feedback.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"granger-1982-scruffy","url":"https:\/\/aclanthology.org\/P82-1035.pdf","title":"Scruffy Text Understanding: Design and Implementation of `Tolerant' Understanders","abstract":"Most large text-understanding systems have been designed under the assumption that the input text will be in reasonably \"neat\" form, e.g., newspaper stories and other edited texts. However, a great deal of natural language texts, e.g., memos, rough drafts, conversation transcripts, etc., have features that differ significantly from \"neat\" texts, posing special problems for readers, such as misspelled words, missing words, poor syntactic construction, missing periods, etc. Our solution to these problems is to make use of expectations, based both on knowledge of surface English and on world knowledge of the situation being described. These syntactic and semantic expectations can be used to figure out unknown words from context, constrain the possible word-senses of words with multiple meanings (ambiguity), fill in missing words (ellipsis), and resolve referents (anaphora).
This method of using expectations to aid the understanding of \"scruffy\" texts has been incorporated into a working computer program called NOMAD, which understands scruffy texts in the domain of Navy messages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grefenstette-sadrzadeh-2011-experimenting","url":"https:\/\/aclanthology.org\/W11-2507.pdf","title":"Experimenting with transitive verbs in a DisCoCat","abstract":"Formal and distributional semantic models offer complementary benefits in modeling meaning. The categorical compositional distributional model of meaning of Coecke et al. (2010) (abbreviated to DisCoCat in the title) combines aspects of both to provide a general framework in which meanings of words, obtained distributionally, are composed using methods from the logical setting to form sentence meaning. Concrete consequences of this general abstract setting and applications to empirical data are under active study (Grefenstette et al., 2011; Grefenstette and Sadrzadeh, 2011). In this paper, we extend this study by examining transitive verbs, represented as matrices in a DisCoCat. We discuss three ways of constructing such matrices, and evaluate each method in a disambiguation task developed by Grefenstette and Sadrzadeh (2011).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ganter-strube-2009-finding","url":"https:\/\/aclanthology.org\/P09-2044.pdf","title":"Finding Hedges by Chasing Weasels: Hedge Detection Using Wikipedia Tags and Shallow Linguistic Features","abstract":"We investigate the automatic detection of sentences containing linguistic hedges using corpus statistics and syntactic patterns. We take Wikipedia as an already annotated corpus using its tagged weasel words which mark sentences and phrases as non-factual. We evaluate the quality of Wikipedia as training data for hedge detection, as well as shallow linguistic features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by the European Union under the project Judicial Management by Digital Libraries Semantics (JUMAS FP7-214306) and by the Klaus Tschira Foundation, Heidelberg, Germany.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gaustad-2003-importance","url":"https:\/\/aclanthology.org\/U03-1015.pdf","title":"The importance of high-quality input for WSD: an application-oriented comparison of part-of-speech taggers","abstract":"In this paper, we present an application-oriented evaluation of three Part-of-Speech (PoS) taggers in a word sense disambiguation (WSD) system. Following the intuition that high quality input is likely to influence the final results of a complex system, we test whether the more accurate taggers also produce better results when integrated into the WSD system. For this purpose, a stand-alone evaluation of the PoS taggers is used to assess which tagger is the most accurate.
The results of the WSD task, computed on the training section of the Dutch Senseval-2 data, including the PoS information from all three taggers, show that the most accurate PoS tags do indeed lead to the best results, thereby verifying our hypothesis. A surprising result, however, is the fact that the performance of the complex WSD system with the different PoS tags included does not necessarily reflect the stand-alone accuracy of the PoS taggers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out within the framework of the PIONIER Project Algorithms for Linguistic Processing. This PIONIER Project is funded by NWO (Dutch Organization for Scientific Research) and the University of Groningen. We are grateful to Robbert Prins for his help with the HMM tagger as well as to Gertjan van Noord and Menno van Zaanen for comments and discussions.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2019-imitation","url":"https:\/\/aclanthology.org\/P19-1338.pdf","title":"An Imitation Learning Approach to Unsupervised Parsing","abstract":"Recently, there has been an increasing interest in unsupervised parsers that optimize semantically oriented objectives, typically using reinforcement learning. Unfortunately, the learned trees often do not match actual syntax trees well. Shen et al. (2018) propose a structured attention mechanism for language modeling (PRPN), which induces better syntactic structures but relies on ad hoc heuristics. Also, their model lacks interpretability as it is not grounded in parsing actions. In our work, we propose an imitation learning approach to unsupervised parsing, where we transfer the syntactic knowledge induced by the PRPN to a Tree-LSTM model with discrete parsing actions. Its policy is then refined by Gumbel-Softmax training towards a semantically oriented objective. We evaluate our approach on the All Natural Language Inference dataset and show that it achieves a new state of the art in terms of parsing F-score, outperforming our base models, including the PRPN.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Yikang Shen and Zhouhan Lin at MILA for fruitful discussions. FK was supported by the Leverhulme Trust through International Academic Fellowship IAF-2017-019.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"du-etal-2022-understanding-iterative","url":"https:\/\/aclanthology.org\/2022.acl-long.250.pdf","title":"Understanding Iterative Revision from Human-Written Text","abstract":"Writing is, by nature, a strategic, adaptive, and more importantly, an iterative process. A crucial part of writing is editing and revising the text. Previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentence-level edits, which differs from humans' revision cycles. This work describes ITERATER: the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text.
In particular, ITERATER is collected based on a new framework to comprehensively model the iterative text revisions that generalize to various domains of formal writing, edit intentions, revision depths, and granularities. When we incorporate our annotated edit intentions, both generative and edit-based text revision models significantly improve automatic evaluations. 1 Through our work, we better understand the text revision process, making vital connections between edit intentions and writing quality, enabling the creation of diverse corpora to support computational modeling of iterative text revisions. * This research was performed when Wanyu Du was interning at Grammarly. 1 Code and dataset are available at https:\/\/github.com\/vipulraheja\/IteraTeR.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all linguistic expert annotators at Grammarly for annotating, evaluating and providing feedback during our data annotation and evaluation process. We appreciate that Courtney Napoles and Knar Hovakimyan at Grammarly helped coordinate the annotation resources. We also thank Yangfeng Ji at University of Virginia and the anonymous reviewers for their helpful comments.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ma-etal-2016-learning","url":"https:\/\/aclanthology.org\/W16-1908.pdf","title":"Learning Phone Embeddings for Word Segmentation of Child-Directed Speech","abstract":"This paper presents a novel model that learns and exploits embeddings of phone ngrams for word segmentation in child language acquisition. Embedding-based models are evaluated on a phonemically transcribed corpus of child-directed speech, in comparison with their symbolic counterparts using the common learning framework and features. Results show that learning embeddings significantly improves performance. We make use of extensive visualization to understand what the model has learned. We show that the learned embeddings are informative for both word segmentation and phonology in general.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their helpful comments and suggestions.
The financial support for the research reported in this paper was partly provided by the German Research Foundation (DFG) via the Collaborative Research Center \"The Construction of Meaning\" (SFB 833), project A3.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zervanou-etal-2011-enrichment","url":"https:\/\/aclanthology.org\/W11-1507.pdf","title":"Enrichment and Structuring of Archival Description Metadata","abstract":"Cultural heritage institutions are making their digital content available and searchable online. Digital metadata descriptions play an important role in this endeavour. This metadata is mostly manually created and often lacks detailed annotation, consistency and, most importantly, explicit semantic content descriptors which would facilitate online browsing and exploration of available information. This paper proposes the enrichment of existing cultural heritage metadata with automatically generated semantic content descriptors. In particular, it is concerned with metadata encoding archival descriptions (EAD) and proposes to use automatic term recognition and term clustering techniques for knowledge acquisition and content-based document classification purposes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vulic-etal-2020-improving","url":"https:\/\/aclanthology.org\/2020.repl4nlp-1.7.pdf","title":"Improving Bilingual Lexicon Induction with Unsupervised Post-Processing of Monolingual Word Vector Spaces","abstract":"Work on projection-based induction of crosslingual word embedding spaces (CLWEs) predominantly focuses on the improvement of the projection (i.e., mapping) mechanisms. In this work, in contrast, we show that a simple method for post-processing monolingual embedding spaces facilitates learning of the crosslingual alignment and, in turn, substantially improves bilingual lexicon induction (BLI). The post-processing method we examine is grounded in the generalisation of first- and second-order monolingual similarities to the nth-order similarity. By post-processing monolingual spaces before the cross-lingual alignment, the method can be coupled with any projection-based method for inducing CLWE spaces. We demonstrate the effectiveness of this simple monolingual post-processing across a set of 15 typologically diverse languages (i.e., 15\u00d714 BLI setups), and in combination with two different projection methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the ERC Consolidator Grant LEXICAL (no 648909) awarded to Anna Korhonen. Goran Glava\u0161 is supported by the Eliteprogramm of the Baden-W\u00fcrttemberg Stiftung (AGREE grant).
We thank the reviewers for their insightful suggestions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"przepiorkowski-wolinski-2003-unberable","url":"https:\/\/aclanthology.org\/W03-2415.pdf","title":"The Unberable Lightness of Tagging* A Case Study in Morphosyntactic Tagging of Polish","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chang-etal-2011-inference","url":"https:\/\/aclanthology.org\/W11-1904.pdf","title":"Inference Protocols for Coreference Resolution","abstract":"This paper presents Illinois-Coref, a system for coreference resolution that participated in the CoNLL-2011 shared task. We investigate two inference methods, Best-Link and All-Link, along with their corresponding, pairwise and structured, learning protocols. Within these, we provide a flexible architecture for incorporating linguistically-motivated constraints, several of which we developed and integrated. We compare and evaluate the inference approaches and the contribution of constraints, analyze the mistakes of the system, and discuss the challenges of resolving coreference for the OntoNotes-4.0 data set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments This research is supported by the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181 and the Army Research Laboratory (ARL) under agreement W911NF-09-2-0053. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, ARL or the US government.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2001-towards","url":"https:\/\/aclanthology.org\/H01-1071.pdf","title":"Towards Automatic Sign Translation","abstract":"Signs are everywhere in our lives. They make our lives easier when we are familiar with them. But sometimes they also pose problems. For example, a tourist might not be able to understand signs in a foreign country. In this paper, we present our efforts towards automatic sign translation. We discuss methods for automatic sign detection. We describe sign translation using example based machine translation technology. We use a usercentered approach in developing an automatic sign translation system. The approach takes advantage of human intelligence in selecting an area of interest and domain for translation if needed. A user can determine which sign is to be translated if multiple signs have been detected within the image. The selected part of the image is then processed, recognized, and translated. We have developed a prototype system that can recognize Chinese signs input from a video camera which is a common gadget for a tourist, and translate them into English text or voice stream.","label_nlp4sg":1,"task":["Sign Translation"],"method":["usercentered approach"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Dr. 
Ralf Brown and Dr. Robert Frederking for providing initial EBMT software and William Kunz for developing the interface for the prototype system. We would also like to thank other members in the Interactive Systems Labs for their inspiring discussions and support. This research is partially supported by DARPA under TIDES project.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"widmoser-etal-2021-randomized","url":"https:\/\/aclanthology.org\/2021.eacl-main.100.pdf","title":"Randomized Deep Structured Prediction for Discourse-Level Processing","abstract":"Expressive text encoders such as RNNs and Transformer Networks have been at the center of NLP models in recent work. Most of the effort has focused on sentence-level tasks, capturing the dependencies between words in a single sentence, or pairs of sentences. However, certain tasks, such as argumentation mining, require accounting for longer texts and complicated structural dependencies between them. Deep structured prediction is a general framework to combine the complementary strengths of expressive neural encoders and structured inference for highly structured domains. Nevertheless, when the need arises to go beyond sentences, most work relies on combining the output scores of independently trained classifiers. One of the main reasons for this is that constrained inference comes at a high computational cost. In this paper, we explore the use of randomized inference to alleviate this concern and show that we can efficiently leverage deep structured prediction and expressive neural encoders for a set of tasks involving complicated argumentative structures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marge-etal-2019-research","url":"https:\/\/aclanthology.org\/N19-4023.pdf","title":"A Research Platform for Multi-Robot Dialogue with Humans","abstract":"This paper presents a research platform that supports spoken dialogue interaction with multiple robots. The demonstration showcases our crafted MultiBot testing scenario in which users can verbally issue search, navigate, and follow instructions to two robotic teammates: a simulated ground robot and an aerial robot. This flexible language and robotic platform takes advantage of existing tools for speech recognition and dialogue management that are compatible with new domains, and implements an inter-agent communication protocol (tactical behavior specification), where verbal instructions are encoded for tasks assigned to the appropriate robot.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the U.S. Army Research Laboratory. 
The authors thank the anonymous reviewers for their feedback, as well as Judith Klavans, Chris Kroninger, and Garrett Warnell for their contributions to this project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"desilets-etal-2008-evaluating","url":"https:\/\/aclanthology.org\/2008.iwslt-papers.3.pdf","title":"Evaluating productivity gains of hybrid ASR-MT systems for translation dictation.","abstract":"This paper is about Translation Dictation with ASR, that is, the use of Automatic Speech Recognition (ASR) by human translators, in order to dictate translations. We are particularly interested in the productivity gains that this could provide over conventional keyboard input, and ways in which such gains might be increased through a combination of ASR and Statistical Machine Translation (SMT). In this hybrid technology, the source language text is presented to both the human translator and a SMT system. The latter produces N-best translation hypotheses, which are then used to fine tune the ASR language model and vocabulary towards utterances which are probable translations of source text sentences. We conducted an ergonomic experiment with eight professional translators dictating into French, using a top of the line off-the-shelf ASR system (Dragon NaturallySpeaking 8). We found that the ASR system had an average Word Error Rate (WER) of 11.7%, and that translation using this system did not provide statistically significant productivity increases over keyboard input, when following the manufacturer recommended procedure for error correction. However, we found indications that, even in its current imperfect state, French ASR might be beneficial to translators who are already used to dictation (either with ASR or a dictaphone), but more focused experiments are needed to confirm this. We also found that dictation using an ASR with WER of 4% or less would have resulted in statistically significant (p < 0.6) productivity gains in the order of 25.1% to 44.9% Translated Words Per Minute. We also evaluated the extent to which the limited manufacturer provided Domain Adaptation features could be used to positively bias the ASR using SMT hypotheses. We found that the relative gains in WER were much lower than has been reported in the literature for tighter integration of SMT with ASR, pointing to the advantages of tight integration approaches and the need for more research in that area.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the following people for their help. From NRC: George Foster, Roland Kuhn, Samuel Larkin, Pierre Isabelle, Julie Cliffe and Norm Vinson. From the Translation Bureau of Canada: Susanne Marceau and Susanne Garceau.
Also, the eight anonymous subjects who participated in this study, as well as the two anonymous reviewers whose relevant comments greatly helped improve the paper.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zheng-etal-2021-enhancing-visual","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.158.pdf","title":"Enhancing Visual Dialog Questioner with Entity-based Strategy Learning and Augmented Guesser","abstract":"Considering the importance of building a good Visual Dialog (VD) Questioner, many researchers study the topic under a Q-Bot-A-Bot image-guessing game setting, where the Questioner needs to raise a series of questions to collect information of an undisclosed image. Although progress has been made in Supervised Learning (SL) and Reinforcement Learning (RL), issues still exist. Firstly, previous methods do not provide explicit and effective guidance for Questioner to generate visually related and informative questions. Secondly, the effect of RL is hampered by an incompetent component, i.e., the Guesser, who makes image predictions based on the generated dialogs and assigns rewards accordingly. To enhance VD Questioner: 1) we propose a Related entity enhanced Questioner (ReeQ) that generates questions under the guidance of related entities and learns entity-based questioning strategy from human dialogs; 2) we propose an Augmented Guesser (AugG) that is strong and is optimized for the VD setting especially. Experimental results on the VisDial v1.0 dataset show that our approach achieves state-of-the-art performance on both image-guessing task and question diversity. Human study further proves that our model generates more visually related, informative and coherent questions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reviewers for their suggestions and comments. The work was supported by the National Natural Science Foundation of China (NSFC62076032) and the Cooperation Project with Beijing SanKuai Technology Co., Ltd.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gao-etal-2006-approximation","url":"https:\/\/aclanthology.org\/P06-1029.pdf","title":"Approximation Lasso Methods for Language Modeling","abstract":"Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to model complexities. This paper explores the use of lasso for statistical language modeling for text input. Owing to the very large number of parameters, directly optimizing the penalized lasso loss function is impossible. Therefore, we investigate two approximation methods, the boosted lasso (BLasso) and the forward stagewise linear regression (FSLR). Both methods, when used with the exponential loss function, bear strong resemblance to the boosting algorithm which has been used as a discriminative training method for language modeling.
Evaluations on the task of Japanese text input show that BLasso is able to produce the best approximation to the lasso solution, and leads to a significant improvement, in terms of character error rate, over boosting and the traditional maximum likelihood estimation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rudinger-etal-2017-social","url":"https:\/\/aclanthology.org\/W17-1609.pdf","title":"Social Bias in Elicited Natural Language Inferences","abstract":"We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data. The human-elicitation protocol employed in the construction of the SNLI makes it prone to amplifying bias and stereotypical associations, which we demonstrate statistically (using pointwise mutual information) and with qualitative examples.","label_nlp4sg":1,"task":["investigation of bias and stereotyping in NLP data"],"method":["Statistical analysis"],"goal1":"Reduced Inequalities","goal2":"Gender Equality","goal3":null,"acknowledgments":"We are grateful to our many reviewers who offered both candid and thoughtful feedback.This material is based upon work supported by the JHU Human Language Technology Center of Excellence (HLTCOE), DARPA LORELEI, and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1232825. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, the NSF, or the U.S. Government.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-mitchell-2016-joint","url":"https:\/\/aclanthology.org\/N16-1033.pdf","title":"Joint Extraction of Events and Entities within a Document Context","abstract":"Events and entities are closely related; entities are often actors or participants in events and events without entities are uncommon. The interpretation of events and entities is highly contextually dependent. Existing work in information extraction typically models events separately from entities, and performs inference at the sentence level, ignoring the rest of the document. In this paper, we propose a novel approach that models the dependencies among variables of events, entities, and their relations, and performs joint inference of these variables across a document. The goal is to enable access to document-level contextual information and facilitate contextaware predictions. We demonstrate that our approach substantially outperforms the stateof-the-art methods for event extraction as well as a strong baseline for entity extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by NSF grant IIS-1250956, and in part by the DARPA DEFT program under contract FA87501320005. We would like to thank members of the CMU NELL group for helpful comments. 
We also thank the anonymous reviewers for insightful suggestions.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kermanidis-etal-2004-learning","url":"https:\/\/aclanthology.org\/C04-1153.pdf","title":"Learning Greek Verb Complements: Addressing the Class Imbalance","abstract":"Imbalanced training sets, where one class is heavily underrepresented compared to the others, have a bad effect on the classification of rare class instances. We apply One-sided Sampling for the first time to a lexical acquisition task (learning verb complements from Modern Greek corpora) to remove redundant and misleading training examples of verb nondependents and thereby balance our training set. We experiment with well-known learning algorithms to classify new examples. Performance improves up to 22% in recall and 15% in precision after balancing the dataset 1 .","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hjalmarsson-2010-vocal","url":"https:\/\/aclanthology.org\/W10-4340.pdf","title":"The vocal intensity of turn-initial cue phrases in dialogue","abstract":"The present study explores the vocal intensity of turn-initial cue phrases in a corpus of dialogues in Swedish. Cue phrases convey relatively little propositional content, but have several important pragmatic functions. The majority of these entities are frequently occurring monosyllabic words such as \"eh\", \"mm\", \"ja\". Prosodic analysis shows that these words are produced with higher intensity than other turn-initial words are. In light of these results, it is suggested that speakers produce these expressions with high intensity in order to claim the floor. It is further shown that the difference in intensity can be measured as a dynamic inter-speaker relation over the course of a dialogue using the end of the interlocutor's previous turn as a reference point.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out at Centre for Speech Technology, KTH. Funding was provided by Riksbankens Jubileumsfond (RJ) project P09-0064:1-E Prosody in conversation and the Graduate School for Language Technology (GSLT). Many thanks to Rolf Carlson, Jens Edlund and Joakim Gustafson for valuable comments.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"peshterliev-etal-2019-active","url":"https:\/\/aclanthology.org\/N19-2012.pdf","title":"Active Learning for New Domains in Natural Language Understanding","abstract":"We explore active learning (AL) for improving the accuracy of new domains in a natural language understanding (NLU) system. We propose an algorithm called Majority-CRF that uses an ensemble of classification models to guide the selection of relevant utterances, as well as a sequence labeling model to help prioritize informative examples. Experiments with three domains show that Majority-CRF achieves 6.6%-9% relative error rate reduction compared to random sampling with the same annotation budget, and statistically significant improvements compared to other AL approaches. 
Additionally, case studies with human-in-the-loop AL on six new domains show 4.6%-9% improvement on an existing NLU system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"copestake-etal-2004-lexicon","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/706.pdf","title":"A Lexicon Module for a Grammar Development Environment","abstract":"Past approaches to developing an effective lexicon component in a grammar development environment have suffered from a number of usability and efficiency issues. We present a lexical database module currently in use by a number of grammar development projects. The database module presented addresses issues which have caused problems in the past and the power of a database architecture provides a number of practical advantages as well as a solid framework for future extension.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coppersmith-etal-2015-adhd","url":"https:\/\/aclanthology.org\/W15-1201.pdf","title":"From ADHD to SAD: Analyzing the Language of Mental Health on Twitter through Self-Reported Diagnoses","abstract":"Many significant challenges exist for the mental health field, but one in particular is a lack of data available to guide research. Language provides a natural lens for studying mental health-much existing work and therapy have strong linguistic components, so the creation of a large, varied, language-centric dataset could provide significant grist for the field of mental health research. We examine a broad range of mental health conditions in Twitter data by identifying self-reported statements of diagnosis. We systematically explore language differences between ten conditions with respect to the general population, and to each other. Our aim is to provide guidance and a roadmap for where deeper exploration is likely to be fruitful.","label_nlp4sg":1,"task":["Analyzing the Language of Mental Health"],"method":["Self - Reported Diagnoses"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Bradley Skaggs, Matthew DiFabion, and Aleksander Yelskiy for their insights throughout this endeavor.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vivek-kalyan-etal-2021-textgraphs","url":"https:\/\/aclanthology.org\/2021.textgraphs-1.20.pdf","title":"Textgraphs-15 Shared Task System Description : Multi-Hop Inference Explanation Regeneration by Matching Expert Ratings","abstract":"Creating explanations for answers to science questions is a challenging task that requires multi-hop inference over a large set of fact sentences. This year, to refocus the Textgraphs Shared Task on the problem of gathering relevant statements (rather than solely finding a single 'correct path'), the WorldTree dataset was augmented with expert ratings of 'relevance' of statements to each overall explanation. 
Our system, which achieved second place on the Shared Task leaderboard, combines initial statement retrieval; language models trained to predict the relevance scores; and ensembling of a number of the resulting rankings. Our code implementation is made available at https:\/\/github.com\/mdda\/worldtree_corpus\/tree\/textgraphs_2021","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sachan-etal-2015-learning","url":"https:\/\/aclanthology.org\/P15-1024.pdf","title":"Learning Answer-Entailing Structures for Machine Comprehension","abstract":"Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates the system's ability to understand text through a series of question-answering tasks on short pieces of text such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, correct answer, and text. We call this the answer-entailing structure; given the structure, the correctness of the answer is evident. Since the structure is latent, it must be inferred. We present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs), and uses what it learns to answer machine comprehension questions on novel texts. We extend this framework to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension. Evaluation on a publicly available dataset shows that our framework outperforms various IR and neural-network baselines, achieving an overall accuracy of 67.8% (vs. 59.9%, the best previously-published result).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers, along with Sujay Jauhar and Snigdha Chaturvedi for their valuable comments and suggestions to improve the quality of the paper.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"davoodi-kosseim-2016-clac","url":"https:\/\/aclanthology.org\/S16-1151.pdf","title":"CLaC at SemEval-2016 Task 11: Exploring linguistic and psycho-linguistic Features for Complex Word Identification","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dahab-belz-2010-game","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/476_Paper.pdf","title":"A Game-based Approach to Transcribing Images of Text","abstract":"We present a methodology that takes as input scanned documents of typed or handwritten text, and produces transcriptions of the text as output. Instead of using OCR technology, the methodology is game-based and produces such transcriptions as a by-product. The approach is intended particularly for languages for which language technology and resources are scarce and reliable OCR technology may not exist.
It can be used in place of OCR for transcribing individual documents, or to create corpora of paired images and transcriptions required to train OCR tools. We present Minefield, a prototype implementation of the approach which is currently collecting Arabic transcriptions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dras-2015-squibs","url":"https:\/\/aclanthology.org\/J15-2005.pdf","title":"Squibs: Evaluating Human Pairwise Preference Judgments","abstract":"Human evaluation plays an important role in NLP, often in the form of preference judgments. Although there has been some use of classical non-parametric and bespoke approaches to evaluating these sorts of judgments, there is an entire body of work on this in the context of sensory discrimination testing and the human judgments that are central to it, backed by rigorous statistical theory and freely available software, that NLP can draw on. We investigate one approach, Log-Linear Bradley-Terry models, and apply it to sample NLP data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2010-distributional","url":"https:\/\/aclanthology.org\/P10-2066.pdf","title":"Distributional Similarity vs. PU Learning for Entity Set Expansion","abstract":"Distributional similarity is a classic technique for entity set expansion, where the system is given a set of seed entities of a particular class, and is asked to expand the set using a corpus to obtain more entities of the same class as represented by the seeds. This paper shows that a machine learning model called positive and unlabeled learning (PU learning) can model the set expansion problem better. Based on the test results of 10 corpora, we show that a PU learning technique outperformed distributional similarity significantly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements: Bing Liu and Lei Zhang acknowledge the support of HP Labs Innovation Research Grant 2009-1062-1-A, and would like to thank Suk Hwan Lim and Eamonn O'Brien-Strain for many helpful discussions.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stern-etal-2012-efficient","url":"https:\/\/aclanthology.org\/P12-1030.pdf","title":"Efficient Search for Transformation-based Inference","abstract":"This paper addresses the search problem in textual inference, where systems need to infer one piece of text from another. A prominent approach to this task is attempts to transform one text into the other through a sequence of inference-preserving transformations, a.k.a. a proof, while estimating the proof's validity. This raises a search challenge of finding the best possible proof.
We explore this challenge through a comprehensive investigation of prominent search algorithms and propose two novel algorithmic components specifically designed for textual inference: a gradient-style evaluation function, and a local-lookahead node expansion method. Evaluations, using the open-source system, BIUTEE, show the contribution of these ideas to search efficiency and proof quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the Israel Science Foundation grant 1112\/08, the PASCAL-2 Network of Excellence of the European Community FP7-ICT-2007-1-216886, and the European Community's Seventh Framework Programme (FP7\/2007-2013) under grant agreement no. 287923 (EXCITEMENT).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"thomson-etal-2020-sportsett","url":"https:\/\/aclanthology.org\/2020.intellang-1.4.pdf","title":"SportSett:Basketball - A robust and maintainable data-set for Natural Language Generation","abstract":"Data2Text Natural Language Generation is a complex and varied task. We investigate the data requirements for the difficult real-world problem of generating statistic-focused summaries of basketball games. This has recently been tackled using the Rotowire and Rotowire-FG datasets of paired data and text. It can, however, be difficult to filter, query, and maintain such large volumes of data. In this resource paper, we introduce the SportSett:Basketball database 1. This easy-to-use resource allows for simple scripts to be written which generate data in suitable formats for a variety of systems. Building upon the existing data, we provide more attributes, across multiple dimensions, increasing the overlap of content between data and text. We also highlight and resolve issues of training, validation and test partition contamination in these previous datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is funded by the Engineering and Physical Sciences Research Council (EPSRC), which funds Craig Thomson under a National Productivity Investment Fund Doctoral Studentship (EP\/R512412\/1). We would like to thank our reviewers, as well as the NLG (CLAN) Machine Learning reading groups at the University of Aberdeen for their helpful feedback on this work. We would also like to thank Moray Greig, who was our basketball domain expert.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chao-etal-2019-learning","url":"https:\/\/aclanthology.org\/W19-5926.pdf","title":"Learning Question-Guided Video Representation for Multi-Turn Video Question Answering","abstract":"Understanding and conversing about dynamic scenes is one of the key capabilities of AI agents that navigate the environment and convey useful information to humans. Video question answering is a specific scenario of such AI-human interaction where an agent generates a natural language response to a question regarding the video of a dynamic scene. Incorporating features from multiple modalities, which often provide supplementary information, is one of the challenging aspects of video question answering.
Furthermore, a question often concerns only a small segment of the video, hence encoding the entire video sequence using a recurrent neural network is not computationally efficient. Our proposed question-guided video representation module efficiently generates the token-level video summary guided by each word in the question. The learned representations are then fused with the question to generate the answer. Through empirical evaluation on the Audio Visual Scene-aware Dialog (AVSD) dataset (Alamri et al., 2019a), our proposed models in single-turn and multi-turn question answering achieve state-of-the-art performance on several automatic natural language generation evaluation metrics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vath-vu-2019-combine","url":"https:\/\/aclanthology.org\/W19-5908.pdf","title":"To Combine or Not To Combine? A Rainbow Deep Reinforcement Learning Agent for Dialog Policies","abstract":"We explore state-of-the-art deep reinforcement learning methods such as prioritized experience replay, double deep Q-Networks, dueling network architectures, distributional learning methods for dialog policy. Our main findings show that each individual method improves the rewards and the task success rate but combining these methods in a Rainbow agent, which performs best across tasks and environments, is a non-trivial task. We, therefore, provide insights about the influence of each method on the combination and how to combine them to form the Rainbow agent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"spiegler-flach-2010-enhanced","url":"https:\/\/aclanthology.org\/P10-1039.pdf","title":"Enhanced Word Decomposition by Calibrating the Decision Threshold of Probabilistic Models and Using a Model Ensemble","abstract":"This paper demonstrates that the use of ensemble methods and carefully calibrating the decision threshold can significantly improve the performance of machine learning methods for morphological word decomposition. We employ two algorithms which come from a family of generative probabilistic models. The models consider segment boundaries as hidden variables and include probabilities for letter transitions within segments. The advantage of this model family is that it can learn from small datasets and easily generalises to larger datasets. The first algorithm PROMODES, which participated in the Morpho Challenge 2009 (an international competition for unsupervised morphological analysis), employs a lower order model whereas the second algorithm PROMODES-H is a novel development of the first using a higher order model. We present the mathematical description for both algorithms, conduct experiments on the morphologically rich language Zulu and compare characteristics of both algorithms based on the experimental results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Narayanan Edakunni and Bruno Gol\u00e9nia for discussions concerning this paper as well as the anonymous reviewers for their comments.
The research described was sponsored by EPSRC grant EP\/E010857\/1 Learning the morphology of complex synthetic languages.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zarriess-etal-2011-underspecifying","url":"https:\/\/aclanthology.org\/P11-1101.pdf","title":"Underspecifying and Predicting Voice for Surface Realisation Ranking","abstract":"This paper addresses a data-driven surface realisation model based on a large-scale reversible grammar of German. We investigate the relationship between the surface realisation performance and the character of the input to generation, i.e. its degree of underspecification. We extend a syntactic surface realisation system, which can be trained to choose among word order variants, such that the candidate set includes active and passive variants. This allows us to study the interaction of voice and word order alternations in realistic German corpus data. We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weiner-1984-knowledge","url":"https:\/\/aclanthology.org\/J84-1001.pdf","title":"A Knowledge Representation Approach to Understanding Metaphors","abstract":"This study represents an exploration of the phenomenon of non-literal language (\"metaphors\") and an approach that lends itself to computational modeling. Ortony's theories of the way in which salience and asymmetry function in human metaphor processing are explored and expanded on the basis of numerous examples. A number of factors appear to be interacting in the metaphor comprehension process. In addition to salience and asymmetry, of major importance are incongruity, hyperbolicity, inexpressibility, prototypicality, and probable value range. Central to the model is a knowledge representation system incorporating these factors and allowing for the manner in which they interact. A version of KL-ONE (with small revisions) is used for this purpose. In sentences of this form, A is commonly referred to as the \"topic\", the B term as the \"vehicle\". That which they have in common is called the \"ground\". In a sentence like","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I'd like to thank Ralph Weischedel, Michael Freeman, and Genevieve Berry-Rogghe for reading and commenting on an earlier draft of this paper.","year":1984,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bade-shrestha-bal-2020-named","url":"https:\/\/aclanthology.org\/2020.nlptea-1.16.pdf","title":"Named-Entity Based Sentiment Analysis of Nepali News Media Texts","abstract":"Due to the general availability, relative abundance and wide diversity of opinions, news Media texts are very good sources for sentiment analysis. 
However, the major challenge with such texts is the difficulty in aligning the expressed opinions to the concerned political leaders as this entails a non-trivial task of named-entity recognition and anaphora resolution. In this work, our primary focus is on developing a Natural Language Processing (NLP) pipeline involving a robust Named-Entity Recognition followed by Anaphora Resolution and then after alignment of the recognized and resolved named-entities, in this case, political leaders to the correct class of opinions as expressed in the texts. We visualize the popularity of the politicians via the time series graph of positive and negative sentiments as an outcome of the pipeline. We have achieved the performance metrics of the individual components of the pipeline as follows: Part of speech tagging-93.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"berant-liang-2014-semantic","url":"https:\/\/aclanthology.org\/P14-1133.pdf","title":"Semantic Parsing via Paraphrasing","abstract":"A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Kai Sheng Tai for performing the error analysis. Stanford University gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. The second author is supported by a Google Faculty Research Award.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"honnet-etal-2018-machine","url":"https:\/\/aclanthology.org\/L18-1597.pdf","title":"Machine Translation of Low-Resource Spoken Dialects: Strategies for Normalizing Swiss German","abstract":"The goal of this work is to design a machine translation (MT) system for a low-resource family of dialects, collectively known as Swiss German, which are widely spoken in Switzerland but seldom written. We collected a significant number of parallel written resources to start with, up to a total of about 60k words. 
Moreover, we identified several other promising data sources for Swiss German. Then, we designed and compared three strategies for normalizing Swiss German input in order to address the regional diversity. We found that character-based neural MT was the best solution for text normalization. In combination with phrase-based statistical MT, our solution reached 36% BLEU score when translating from the Bernese dialect. This value, however, decreases as the testing data becomes more remote from the training one, geographically and topically. These resources and normalization techniques are a first step towards full MT of Swiss German dialects.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blanco-moldovan-2012-fine","url":"https:\/\/aclanthology.org\/N12-1050.pdf","title":"Fine-Grained Focus for Pinpointing Positive Implicit Meaning from Negated Statements","abstract":"Negated statements often carry positive implicit meaning. Regardless of the semantic representation one adopts, pinpointing the positive concepts within a negated statement is needed in order to encode the statement's meaning. In this paper, novel ideas to reveal positive implicit meaning using focus of negation are presented. The concept of granularity of focus is introduced and justified. New annotation and features to detect fine-grained focus are discussed and results reported.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lopez-ludena-etal-2012-upm","url":"https:\/\/aclanthology.org\/W12-3142.pdf","title":"UPM system for WMT 2012","abstract":"This paper describes the UPM system for the Spanish-English translation task at the NAACL 2012 workshop on statistical machine translation. This system is based on Moses. We have used all available free corpora, cleaning and deleting some repetitions. In this paper, we also propose a technique for selecting the sentences for tuning the system. This technique is based on the similarity with the sentences to translate. With our approach, we improve the BLEU score from 28.37% to 28.57%. And as a result of the WMT12 challenge we have obtained a 31.80% BLEU with the 2012 test set. Finally, we explain different experiments that we have carried out after the competition.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work leading to these results has received funding from the European Union under grant agreement n\u00b0 287678. 
It has also been supported by TIMPANO (TIN2011-28169-C05-03), ITALIHA (CAM-UPM), INAPRA (MICINN, DPI2010-21247-C02-02), and MA2VICMR (Comunidad Aut\u00f3noma de Madrid, S2009\/TIC-1542), Plan Avanza Consignos Exp N\u00ba: TSI-020100-2010-489 and the European FEDER fund projects.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bernardi-etal-2010-context","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/330_Paper.pdf","title":"Context Fusion: The Role of Discourse Structure and Centering Theory","abstract":"Questions are not asked in isolation. Their context, viz. the preceding interactions, might be of help to understand them and retrieve the correct answer. Previous research in Interactive Question Answering showed that context fusion has a big potential to improve the performance of answer retrieval. In this paper, we study how much context, and what elements of it, should be considered to answer Follow-Up Questions (FU Qs). Following previous research, we exploit Logistic Regression Models to learn aspects of dialogue structure relevant to answering FU Qs. We enrich existing models based on shallow features with deep features, relying on the theory of discourse structure of (Chai and Jin, 2004), and on Centering Theory, respectively. Using models trained on realistic IQA data, we show which of the various theoretically motivated features hold up against empirical evidence. We also show that, while these deep features do not outperform the shallow ones on their own, an IQA system's answer correctness increases if the shallow and deep features are combined.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-melo-weikum-2010-providing","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/312_Paper.pdf","title":"Providing Multilingual, Multimodal Answers to Lexical Database Queries","abstract":"Language users are increasingly turning to electronic resources to address their lexical information needs, due to their convenience and their ability to simultaneously capture different facets of lexical knowledge in a single interface. In this paper, we discuss techniques to respond to a user's lexical queries by providing multilingual and multimodal information, and facilitating navigating along different types of links. To this end, structured information from sources like WordNet, Wikipedia, Wiktionary, as well as Web services is linked and integrated to provide a multi-faceted yet consistent response to user queries. The meanings of words in many different languages are characterized by mapping them to appropriate WordNet sense identifiers and adding multilingual gloss descriptions as well as example sentences. Relationships are derived from WordNet and Wiktionary to allow users to discover semantically related words, etymologically related words, alternative spellings, as well as misspellings. 
Last but not least, images, audio recordings, and geographical maps extracted from Wikipedia and Wiktionary allow for a multimodal experience.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dhanwal-etal-2020-annotated","url":"https:\/\/aclanthology.org\/2020.lrec-1.149.pdf","title":"An Annotated Dataset of Discourse Modes in Hindi Stories","abstract":"In this paper, we present a new corpus consisting of sentences from Hindi short stories annotated for five different discourse modes: argumentative, narrative, descriptive, dialogic and informative. We present a detailed account of the entire data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.87 k-alpha). We analyze the data in terms of label distributions, part of speech tags, and sentence lengths. We characterize the performance of various classification algorithms on this dataset and perform ablation studies to understand the nature of the linguistic models suitable for capturing the nuances of the embedded discourse structures in the presented corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-etal-2021-adaptsum","url":"https:\/\/aclanthology.org\/2021.naacl-main.471.pdf","title":"AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization","abstract":"State-of-the-art abstractive summarization models generally rely on extensive labeled data, which lowers their generalization ability on domains where such data are not available. In this paper, we present a study of domain adaptation for the abstractive summarization task across six diverse target domains in a low-resource setting. Specifically, we investigate the second phase of pre-training on large-scale generative models under three different settings: 1) source domain pre-training; 2) domain-adaptive pre-training; and 3) task-adaptive pre-training. Experiments show that the effectiveness of pre-training is correlated with the similarity between the pre-training data and the target domain task. Moreover, we find that continuing pre-training could lead to the pre-trained model's catastrophic forgetting, and a learning method with less forgetting can alleviate this issue. Furthermore, results illustrate that a huge gap still exists between the low-resource and high-resource settings, which highlights the need for more advanced domain adaptation methods for the abstractive summarization task. 1 * * Equal contributions. Listing order is random.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank the anonymous reviewers for their constructive feedback.
This work is partially funded by ITF\/319\/16FP, ITS\/353\/19FP and MRP\/055\/18 of the Innovation Technology Commission, the Hong Kong SAR Government.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2021-pdaln","url":"https:\/\/aclanthology.org\/2021.emnlp-main.442.pdf","title":"PDALN: Progressive Domain Adaptation over a Pre-trained Model for Low-Resource Cross-Domain Named Entity Recognition","abstract":"Cross-domain Named Entity Recognition (NER) transfers the NER knowledge from high-resource domains to the low-resource target domain. Due to limited labeled resources and domain shift, cross-domain NER is a challenging task. To address these challenges, we propose a progressive domain adaptation Knowledge Distillation (KD) approach-PDALN. It achieves superior domain adaptability by employing three components: (1) Adaptive data augmentation techniques, which alleviate cross-domain gap and label sparsity simultaneously; (2) Multi-level Domain invariant features, derived from a multigrained MMD (Maximum Mean Discrepancy) approach, to enable knowledge transfer across domains; (3) Advanced KD schema, which progressively enables powerful pre-trained language models to perform domain adaptation. Extensive experiments on four benchmarks show that PDALN can effectively adapt highresource domains to low-resource target domains, even if they are diverse in terms and writing styles. Comparison with other baselines indicates the state-of-the-art performance of PDALN.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers for their valuable comments. This work is supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. ","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jayashree-srijith-2020-evaluation","url":"https:\/\/aclanthology.org\/2020.lrec-1.185.pdf","title":"Evaluation of Deep Gaussian Processes for Text Classification","abstract":"With the tremendous success of deep learning models on computer vision tasks, there are various emerging works on the Natural Language Processing (NLP) task of Text Classification using parametric models. However, it constrains the expressability limit of the function and demands enormous empirical efforts to come up with a robust model architecture. Also, the huge parameters involved in the model causes over-fitting when dealing with small datasets. Deep Gaussian Processes (DGP) offer a Bayesian non-parametric modelling framework with strong function compositionality, and helps in overcoming these limitations. In this paper, we propose DGP models for the task of Text Classification and an empirical comparison of the performance of shallow and Deep Gaussian Process models is made. 
Extensive experimentation is performed on the benchmark Text Classification datasets such as TREC (Text REtrieval Conference), SST (Stanford Sentiment Treebank), MR (Movie Reviews), R8 (Reuters-8), which demonstrates the effectiveness of DGP models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luo-etal-2012-active","url":"https:\/\/aclanthology.org\/W12-3303.pdf","title":"Active Learning with Transfer Learning","abstract":"In sentiment classification, unlabeled user reviews are often free to collect for new products, while sentiment labels are rare. In this case, active learning is often applied to build a high-quality classifier with as small an amount of labeled instances as possible. However, when the labeled instances are insufficient, the performance of active learning is limited. In this paper, we aim at enhancing active learning by employing the labeled reviews from a different but related (source) domain. We propose a framework, Active Vector Rotation (AVR), which adaptively utilizes the source domain data in the active learning procedure. Thus, AVR gets benefits from the source domain when it is helpful, and avoids the negative effects when it is harmful. Extensive experiments on toy data and review texts show our success, compared with other state-of-the-art active learning approaches, as well as approaches with domain adaptation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Fundamental Research Program of China (2010CB327903) and the Doctoral Fund of Ministry of Education of China (20110091110003). We also thank Shujian Huang, Ning Xi, Yinggong Zhao, and anonymous reviewers for their greatly helpful comments.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"du-cardie-2020-event","url":"https:\/\/aclanthology.org\/2020.emnlp-main.49.pdf","title":"Event Extraction by Answering (Almost) Natural Questions","abstract":"The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments. Existing work in event argument extraction typically relies heavily on entity recognition as a preprocessing\/concurrent step, causing the well-known problem of error propagation. To avoid this issue, we introduce a new paradigm for event extraction by formulating it as a question answering (QA) task that extracts the event arguments in an end-to-end manner. Empirical results demonstrate that our framework outperforms prior methods substantially; in addition, it is capable of extracting event arguments for roles not seen at training time (i.e., in a zero-shot learning setting).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers and Heng Ji for helpful suggestions.
This research is based on work supported in part by DARPA LwLL Grant FA8750-19-2-0039.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sokolova-lapalme-2009-classification","url":"https:\/\/aclanthology.org\/R09-1076.pdf","title":"Classification of Opinions with Non-affective Adverbs and Adjectives","abstract":"We propose domain-independent language patterns that purposefully omit the affective words for the classification of opinions. The information extracted with those patterns is then used to analyze opinions expressed in the texts. Empirical evidence shows that opinions can be discovered without the use of affective words. We ran experiments on four sets of reviews of consumer goods: books, DVD, electronics, kitchen, and house ware. Our results support the practical use of our approach and its competitiveness in comparison with other data-driven methods. This method can also be applied to analyze texts which do not explicitly disclose affects such as medical and legal documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and the Ontario Centre of Excellence.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nicolao-etal-2016-framework","url":"https:\/\/aclanthology.org\/L16-1315.pdf","title":"A Framework for Collecting Realistic Recordings of Dysarthric Speech - the homeService Corpus","abstract":"This paper introduces a new British English speech database, named the homeService corpus, which has been gathered as part of the homeService project. This project aims to help users with speech and motor disabilities to operate their home appliances using voice commands. The audio recorded during such interactions consists of realistic data of speakers with severe dysarthria. The majority of the homeService corpus is recorded in real home environments where voice control is often the normal means by which users interact with their devices. The collection of the corpus is motivated by the shortage of realistic dysarthric speech corpora available to the scientific community. Along with the details on how the data is organised and how it can be accessed, a brief description of the framework used to make the recordings is provided. Finally, the performance of the homeService automatic recogniser for dysarthric speech trained with single-speaker data from the corpus is provided as an initial baseline. Access to the homeService corpus is provided through the dedicated web page at http:\/\/mini.dcs.shef.ac.uk\/resources\/homeservice-corpus\/. This will also have the most updated description of the data. 
At the time of writing the collection process is still ongoing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goswami-etal-2021-cross","url":"https:\/\/aclanthology.org\/2021.emnlp-main.716.pdf","title":"Cross-lingual Sentence Embedding using Multi-Task Learning","abstract":"Multilingual sentence embeddings capture rich semantic information not only for measuring similarity between texts but also for catering to a broad range of downstream crosslingual NLP tasks. State-of-the-art multilingual sentence embedding models require large parallel corpora to learn efficiently, which confines the scope of these models. In this paper, we propose a novel sentence embedding framework based on an unsupervised loss function for generating effective multilingual sentence embeddings, eliminating the need for parallel corpora. We capture semantic similarity and relatedness between sentences using a multitask loss function for training a dual encoder model mapping different languages onto the same vector space. We demonstrate the efficacy of an unsupervised as well as a weakly supervised variant of our framework on STS, BUCC and Tatoeba benchmark tasks. The proposed unsupervised sentence embedding framework outperforms even supervised stateof-the-art methods for certain under-resourced languages on the Tatoeba dataset and on a monolingual benchmark. Further, we show enhanced zero-shot learning capabilities for more than 30 languages, with the model being trained on only 13 languages. Our model can be extended to a wide range of languages from any language family, as it overcomes the requirement of parallel corpora for training. * * Work started during internship at Huawei Research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-etal-2010-learning","url":"https:\/\/aclanthology.org\/P10-2003.pdf","title":"Learning Lexicalized Reordering Models from Reordering Graphs","abstract":"Lexicalized reordering models play a crucial role in phrase-based translation systems. They are usually learned from the word-aligned bilingual corpus by examining the reordering relations of adjacent phrases. Instead of just checking whether there is one phrase adjacent to a given phrase, we argue that it is important to take the number of adjacent phrases into account for better estimations of reordering models. We propose to use a structure named reordering graph, which represents all phrase segmentations of a sentence pair, to learn lexicalized reordering models efficiently. Experimental results on the NIST Chinese-English test sets show that our approach significantly outperforms the baseline method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors were supported by National Natural Science Foundation of China, Contracts 60873167 and 60903138. We thank the anonymous reviewers for their insightful comments. 
We are also grateful to Hongmei Zhao and Shu Cai for their helpful feedback.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"collobert-weston-2007-fast","url":"https:\/\/aclanthology.org\/P07-1071.pdf","title":"Fast Semantic Extraction Using a Novel Neural Network Architecture","abstract":"We describe a novel neural network architecture for the problem of semantic role labeling. Many current solutions are complicated, consist of several stages and handbuilt features, and are too slow to be applied as part of real applications that require such semantic labels, partly because of their use of a syntactic parser (Pradhan et al., 2004; Gildea and Jurafsky, 2002). Our method instead learns a direct mapping from source sentence to semantic tags for a given predicate without the aid of a parser or a chunker. Our resulting system obtains accuracies comparable to the current state-of-the-art at a fraction of the computational cost.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aralikatte-etal-2018-sanskrit","url":"https:\/\/aclanthology.org\/D18-1530.pdf","title":"Sanskrit Sandhi Splitting using seq2(seq)2","abstract":"In Sanskrit, small words (morphemes) are combined to form compound words through a process known as Sandhi. Sandhi splitting is the process of splitting a given compound word into its constituent morphemes. Although rules governing word splitting exists in the language, it is highly challenging to identify the location of the splits in a compound word. Though existing Sandhi splitting systems incorporate these pre-defined splitting rules, they have a low accuracy as the same compound word might be broken down in multiple ways to provide syntactically correct splits. In this research, we propose a novel deep learning architecture called Double Decoder RNN (DD-RNN), which (i) predicts the location of the split(s) with 95% accuracy, and (ii) predicts the constituent words (learning the Sandhi splitting rules) with 79.5% accuracy, outperforming the state-of-art by 20%. Additionally, we show the generalization capability of our deep learning model, by showing competitive results in the problem of Chinese word segmentation, as well.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"malmasi-2014-data","url":"https:\/\/aclanthology.org\/U14-1021.pdf","title":"A Data-driven Approach to Studying Given Names and their Gender and Ethnicity Associations","abstract":"Studying the structure of given names and how they associate with gender and ethnicity is an interesting research topic that has recently found practical uses in various areas. Given the paucity of annotated name data, we develop and make available a new dataset containing 14k given names. Using this dataset, we take a datadriven approach to this task and achieve up to 90% accuracy for classifying the gender of unseen names. For ethnicity identification, our system achieves 83% accuracy. 
We also experiment with a feature analysis method for exploring the most informative features for this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their insightful feedback and constructive comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"busemann-hauenschild-1988-constructive","url":"https:\/\/aclanthology.org\/C88-1017.pdf","title":"A Constructive View of GPSG or How to Make It Work","abstract":"Using the formalism of generalized phrase structure grammar (GPSG) in an NL system (e.g. for machine translation (MT)) is promising since the modular structure of the formalism is very well suited to meet some particular needs of MT. However, it seems impossible to implement GPSG in its 1985 version straightforwardly. This would involve a vast overgeneration of structures as well as processes to filter out everything but the admissible tree(s). We therefore argue for a constructive version of GPSG where information is gathered in subsequent steps to produce syntactic structures. As a result, we consider it necessary to incorporate procedural aspects into the formalism in order to use it as a linguistic basis for NL parsing and generation. The paper discusses the major implications of such a modified view of GPSG.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"thwaites-etal-2010-lips","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/326_Paper.pdf","title":"LIPS: A Tool for Predicting the Lexical Isolation Point of a Word","abstract":"We present LIPS (Lexical Isolation Point Software), a tool for accurate lexical isolation point (IP) prediction in recordings of speech. The IP is the point in time at which a word is correctly recognised given the acoustic evidence available to the hearer. The ability to accurately determine lexical IPs is of importance to work in the field of cognitive processing, since it enables the evaluation of competing models of word recognition. IPs are also of importance in the field of neurolinguistics, where the analyses of high-temporal-resolution neuroimaging data require a precise time alignment of the observed brain activity with the linguistic input. LIPS provides an attractive alternative to costly multi-participant perception experiments by automatically computing IPs for arbitrary words. On a test set of words, the LIPS system predicts IPs with a mean difference from the actual IP of within 1ms.
The differences between the predicted and actual IPs approximate a normal distribution with a standard deviation of around 80ms (depending on the model used).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by EPSRC (grant EP\/F030061\/1) and the MRC (grants U.1055.04.002.00001.01 and G0500842).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2012-fine","url":"https:\/\/aclanthology.org\/C12-2068.pdf","title":"Fine-Grained Classification of Named Entities by Fusing Multi-Features","abstract":"Due to the increase in the number of classes and the decrease in the semantic differences between classes, fine-grained classification of Named Entities is a more difficult task than classic classification of NEs. Using only simple local context features for this fine-grained task cannot yield a good classification performance. This paper proposes a method exploiting Multi-features for fine-grained classification of Named Entities. In addition to adopting the context features, we introduce three new features into our classification model: the cluster-based features, the entity-related features and the class-specific features. We experiment on them separately and also fused with prior ones on the subcategorization of person names. Results show that our method achieves a significant improvement for the fine-grained classification task when the new features are fused with others.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by NSFC Project 61075067 and National Key Technology R&D Program (No: 2011BAH10B04-03). We thank Claudio Giuliano for their input person instances and the anonymous reviewers for their insightful comments.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"polanyi-etal-2004-rule","url":"https:\/\/aclanthology.org\/W04-2322.pdf","title":"A Rule Based Approach to Discourse Parsing","abstract":"In this paper we present an overview of recent developments in discourse theory and parsing under the Linguistic Discourse Model (LDM) framework, a semantic theory of discourse structure. We give a novel approach to the problem of discourse segmentation based on discourse semantics and sketch a limited but robust approach to symbolic discourse parsing based on syntactic, semantic and lexical rules. To demonstrate the utility of the system in a real application, we briefly describe the architecture of the PALSUMM system, a symbolic summarization system being developed at FX Palo Alto Laboratory that uses discourse structures constructed using the theory outlined to summarize written English prose texts.
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-patrick-2006-extracting","url":"https:\/\/aclanthology.org\/U06-1027.pdf","title":"Extracting Patient Clinical Profiles from Case Reports","abstract":"This research aims to extract detailed clinical profiles, such as signs and symptoms, and important laboratory test results of the patient from descriptions of the diagnostic and treatment procedures in journal articles. This paper proposes a novel markup tag set to cover a wide variety of semantics in the description of clinical case studies in the clinical literature. A manually annotated corpus which consists of 75 clinical reports with 5,117 sentences has been created and a sentence classification system is reported as the preliminary attempt to exploit the fast growing online repositories of clinical case reports.","label_nlp4sg":1,"task":["Extracting Patient Clinical Profiles"],"method":["markup tag set","manually annotated corpus"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We wish to thank Prof Deborah Saltman for defining the tag categories and Joel Nothman for refining their use on texts.","year":2006,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcdonald-etal-2010-distributed","url":"https:\/\/aclanthology.org\/N10-1069.pdf","title":"Distributed Training Strategies for the Structured Perceptron","abstract":"Perceptron training is widely applied in the natural language processing community for learning complex structured models. Like all structured prediction learning frameworks, the structured perceptron can be costly to train as training complexity is proportional to inference, which is frequently non-linear in example sequence length. In this paper we investigate distributed training strategies for the structured perceptron as a means to reduce training times when computing clusters are available. We look at two strategies and provide convergence bounds for a particular mode of distributed structured perceptron training based on iterative parameter mixing (or averaging). We present experiments on two structured prediction problems-namedentity recognition and dependency parsingto highlight the efficiency of this method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements: We thank Mehryar Mohri, Fernando Periera, Mark Dredze and the three anonymous reviews for their helpful comments on this work.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"imrenyi-2013-syntax","url":"https:\/\/aclanthology.org\/W13-3714.pdf","title":"The Syntax of Hungarian Auxiliaries: A Dependency Grammar Account","abstract":"This paper addresses a hot topic of Hungarian syntactic research, viz. the treatment of \"discontinuous\" constructions involving auxiliaries. The case is made for a projective dependency grammar (DG) account built on the notions of rising and catenae (Gro\u00df and Osborne, 2009). 
Additionally, the semantic basis of the dependency created by rising is described with a view to analogy and constructional meaning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported here was supported by the Hungarian Scientific Research Fund","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"king-falkedal-1990-using","url":"https:\/\/aclanthology.org\/C90-2037.pdf","title":"Using Test Suites in Evaluation of Machine Translation Systems","abstract":"As awareness of the increasing need for translations grows, readiness to consider computerized aids to translation grows with it. Recent years have seen increased funding for research in machine aids to translation, both in the public and the private sector, and potential customers are much in evidence in conferences devoted to work in the area. Activity in the area in its turn stimulates an interest in evaluation techniques: sponsors would like to know if their money has been well spent, system developers would like to know how well they fare compared to their rivals, and potential customers need to be able to estimate the wisdom of their proposed investment. Indeed, interest in evaluation extends beyond translation aids to natural language processing as a whole, as a consequence of attempts to facilitate storage and retrieval of large amounts of information. Concrete manifestations of this interest include a workshop on evaluation in Philadelphia in late 1988, and, in the particular field of machine translation, the publication of two books, the first [4] dedicated to the topic, the second [7] containing much discussion of it.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stede-2004-potsdam","url":"https:\/\/aclanthology.org\/W04-0213.pdf","title":"The Potsdam Commentary Corpus","abstract":"A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure. The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pauls-etal-2010-top","url":"https:\/\/aclanthology.org\/P10-2037.pdf","title":"Top-Down K-Best A* Parsing","abstract":"We propose a top-down algorithm for extracting k-best lists from a parser. Our algorithm, TKA*, is a variant of the k-best A* (KA*) algorithm of Pauls and Klein (2009). In contrast to KA*, which performs an inside and outside pass before performing k-best extraction bottom up, TKA* performs only the inside pass before extracting k-best lists top down.
TKA* maintains the same optimality and efficiency guarantees of KA*, but is simpler to both specify and implement.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project is funded in part by the NSF under grant 0643742 and an NSERC Postgraduate Fellowship.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iosif-potamianos-2015-feeling","url":"https:\/\/aclanthology.org\/W15-0121.pdf","title":"Feeling is Understanding: From Affective to Semantic Spaces","abstract":"Motivated by theories of language development we investigate the contribution of affect to lexical semantics in the context of distributional semantic models (DSMs). The relationship between semantic and affective spaces is computationally modeled for the task of semantic similarity computation between words. It is shown that affective spaces contain salient information for lexical semantic tasks. We further investigate specific semantic relationships where affective information plays a prominent role. The relations between semantic similarity and opposition are studied in the framework of a binary classification problem applied for the discrimination of synonyms and antonyms. For the case of antonyms, the use of affective features results in 33% relative improvement in classification accuracy compared to the use of semantic features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by the SpeDial project supported by the EU FP7 with grant number 611396, and the BabyAffect project supported by the Greek General Secretariat for Research and Technology (GSRT) with grant number 3610.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pedersen-2017-duluth","url":"https:\/\/aclanthology.org\/S17-2070.pdf","title":"Duluth at SemEval-2017 Task 7: Puns Upon a Midnight Dreary, Lexical Semantics for the Weak and Weary","abstract":"This paper describes the Duluth systems that participated in SemEval-2017 Task 7: Detection and Interpretation of English Puns. The Duluth systems participated in all three subtasks, and relied on methods that included word sense disambiguation and measures of semantic relatedness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"steinmetz-harbusch-2020-enabling","url":"https:\/\/aclanthology.org\/2020.winlp-1.17.pdf","title":"Enabling fast and correct typing in `Leichte Sprache' (Easy Language)","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"braune-etal-2018-evaluating","url":"https:\/\/aclanthology.org\/N18-2030.pdf","title":"Evaluating bilingual word embeddings on the long tail","abstract":"Bilingual word embeddings are useful for bilingual lexicon induction, the task of mining translations of given words.
Many studies have shown that bilingual word embeddings perform well for bilingual lexicon induction but they focused on frequent words in general domains. For many applications, bilingual lexicon induction of rare and domainspecific words is of critical importance. Therefore, we design a new task to evaluate bilingual word embeddings on rare words in different domains. We show that state-of-the-art approaches fail on this task and present simple new techniques to improve bilingual word embeddings for mining rare words. We release new gold standard datasets and code to stimulate research on this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Helmut Schmid and the anonymous reviewers for their valuable input. This project has received funding from the European Unions Horizon 2020 research and innovation programme under grant agreement \u2116 644402 (HimL). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement \u2116 640550).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xie-etal-2021-humorhunter","url":"https:\/\/aclanthology.org\/2021.semeval-1.33.pdf","title":"HumorHunter at SemEval-2021 Task 7: Humor and Offense Recognition with Disentangled Attention","abstract":"In this paper, we describe our system submitted to SemEval 2021 Task 7: HaHackathon: Detecting and Rating Humor and Offense. The task aims at predicting whether the given text is humorous, the average humor rating given by the annotators, and whether the humor rating is controversial. In addition, the task also involves predicting how offensive the text is. Our approach adopts the DeBERTa architecture with disentangled attention mechanism, where the attention scores between words are calculated based on their content vectors and relative position vectors. We also took advantage of the pre-trained language models and fine-tuned the DeBERTa model on all the four subtasks. We experimented with several BERT-like structures and found that the large DeBERTa model generally performs better. During the evaluation phase, our system achieved an F-score of 0.9480 on subtask 1a, an RMSE of 0.5510 on subtask 1b, an F-score of 0.4764 on subtask 1c, and an RMSE of 0.4230 on subtask 2a (rank 3 on the leaderboard).","label_nlp4sg":1,"task":["Humor and Offense Recognition"],"method":["Disentangled Attention","DeBERTa","disentangled attention","DeBERTa","BERT","DeBERTa"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"wendlandt-etal-2018-factors","url":"https:\/\/aclanthology.org\/N18-1190.pdf","title":"Factors Influencing the Surprising Instability of Word Embeddings","abstract":"Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations. In this paper, we consider one aspect of embedding spaces, namely their stability. We show that even relatively high frequency words (100-200 occurrences) are often unstable. 
We provide empirical evidence for how various factors contribute to the stability of word embeddings, and we analyze the effects of stability on downstream tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Ben King and David Jurgens for helpful discussions about this paper, as well as our anonymous reviewers for useful feedback. This material is based in part upon work supported by the National Science Foundation (NSF #1344257) and the Michigan Institute for Data Science (MIDAS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or MI-DAS.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"he-etal-2018-adaptive","url":"https:\/\/aclanthology.org\/D18-1383.pdf","title":"Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification","abstract":"We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and to be generalized to a target domain. Our approach explicitly minimizes the distance between the source and the target instances in an embedded feature space. With the difference between source and target minimized, we then exploit additional information from the target domain by consolidating the idea of semi-supervised learning, for which, we jointly employ two regularizations-entropy minimization and self-ensemble bootstrapping-to incorporate the unlabeled target data for classifier refinement. Our experimental results demonstrate that the proposed approach can better leverage unlabeled data from the target domain and achieve substantial improvements over baseline methods in various experimental settings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koller-lascarides-2009-logic","url":"https:\/\/aclanthology.org\/E09-1052.pdf","title":"A Logic of Semantic Representations for Shallow Parsing","abstract":"One way to construct semantic representations in a robust manner is to enhance shallow language processors with semantic components. Here, we provide a model theory for a semantic formalism that is designed for this, namely Robust Minimal Recursion Semantics (RMRS). We show that RMRS supports a notion of entailment that allows it to form the basis for comparing the semantic output of different parses of varying depth.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments. We thank Ann Copestake, Dan Flickinger, and Stefan Thater for extremely fruitful discussions and the reviewers for their comments. 
The work of Alexander Koller was funded by a DFG Research Fellowship and the Cluster of Excellence \"Multimodal Computing and Interaction\".","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"petasis-karkaletsis-2016-identifying","url":"https:\/\/aclanthology.org\/W16-2811.pdf","title":"Identifying Argument Components through TextRank","abstract":"In this paper we examine the application of an unsupervised extractive summarisation algorithm, TextRank, on a different task, the identification of argumentative components. Our main motivation is to examine whether there is any potential overlap between extractive summarisation and argument mining, and whether approaches used in summarisation (which typically model a document as a whole) can have a positive effect on tasks of argument mining. Evaluation has been performed on two corpora containing user posts from an on-line debating forum and persuasive essays. Evaluation results suggest that graph-based approaches and approaches targeting extractive summarisation can have a positive effect on tasks related to argument mining.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sennrich-2014-cyk","url":"https:\/\/aclanthology.org\/W14-4011.pdf","title":"A CYK+ Variant for SCFG Decoding Without a Dot Chart","abstract":"While CYK+ and Earley-style variants are popular algorithms for decoding unbinarized SCFGs, in particular for syntaxbased Statistical Machine Translation, the algorithms rely on a so-called dot chart which suffers from a high memory consumption. We propose a recursive variant of the CYK+ algorithm that eliminates the dot chart, without incurring an increase in time complexity for SCFG decoding. In an evaluation on a string-totree SMT scenario, we empirically demonstrate substantial improvements in memory consumption and translation speed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I thank Matt Post, Philip Williams, Marcin Junczys-Dowmunt and the anonymous reviewers for their helpful suggestions and feedback. This research was funded by the Swiss National Science Foundation under grant P2ZHP1_148717.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sloos-etal-2018-boarnsterhim","url":"https:\/\/aclanthology.org\/L18-1232.pdf","title":"The Boarnsterhim Corpus: A Bilingual Frisian-Dutch Panel and Trend Study","abstract":"The Boarnsterhim Corpus consists of 250 hours of speech in both West Frisian and Dutch by the same sample of bilingual speakers. The corpus contains original recordings from 1982-1984 and a replication study recorded 35 years later. The data collection spans speech of four generations, and combines panel and trend data. 
This paper describes the Boarnsterhim Corpus halfway through the project, which started in 2016, and describes the way it was collected, the annotations, potential use, and the envisaged tools and end-user web application.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been made possible through a VENI grant (number 275-75-10) by the Netherlands Organization for Scientific Research to the first author, matched by the Fryske Akademy, which is gratefully acknowledged. The BHC1 studies were funded by Nederlandse Organisatie voor Zuiver Wetenschappelijk Onderzoek (currently Netherlands Organization for Scientific Research), Stichting Taalwetenschap Fryske Akademy, and the Frysl\u00e2n Bank.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"economou-etal-2000-lexiploigissi","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/271.pdf","title":"LEXIPLOIGISSI: An Educational Platform for the Teaching of Terminology in Greece","abstract":"This paper introduces a project, LEXIPLOIGISSI * , which involves use of language resources for educational purposes. More particularly, the aim of the project is to develop written corpora, electronic dictionaries and exercises to enhance students' reading and writing abilities in six different school subjects. It is the product of a small-scale pilot program that will be part of the school curriculum in the three grades of Upper Secondary Education in Greece. The application seeks to create exploratory learning environments in which digital sound, image, text and video are fully integrated through the educational platform and placed under the direct control of users who are able to follow individual pathways through data stores. * The Institute for Language and Speech Processing has undertaken this project as the leading contractor and Kastaniotis Publications as a subcontractor. The first partner was responsible for the design, development and implementation of the educational platform, as well as for the provision of pedagogic scenarios of use; the second partner provided the resources (texts and multimedia material). The starting date of the project was June 1999; the development of the software and the collection of material lasted nine months.","label_nlp4sg":1,"task":["Teaching"],"method":["Educational Platform"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2018-learning","url":"https:\/\/aclanthology.org\/D18-1331.pdf","title":"Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation","abstract":"Most of the Neural Machine Translation (NMT) models are based on the sequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped with the attention mechanism. However, the conventional attention mechanism treats the decoding at each time step equally with the same matrix, which is problematic since the softness of the attention for different types of words (e.g. content words and function words) should differ.
Therefore, we propose a new model with a mechanism called Self-Adaptive Control of Temperature (SACT) to control the softness of attention by means of an attention temperature. Experimental results on the Chinese-English translation and English-Vietnamese translation demonstrate that our model outperforms the baseline models, and the analysis and the case study show that our model can attend to the most relevant elements in the source-side contexts and generate the translation of high quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by National Natural Science Foundation of China (No. 61673028) and the National Thousand Young Talents Program. Qi Su is the corresponding author of this paper.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"seemann-maletti-2015-discontinuous","url":"https:\/\/aclanthology.org\/W15-3029.pdf","title":"Discontinuous Statistical Machine Translation with Target-Side Dependency Syntax","abstract":"For several languages only potentially non-projective dependency parses are readily available. Projectivizing the parses and utilizing them in syntax-based translation systems often yields particularly bad translation results indicating that those translation models cannot properly utilize such information. We demonstrate that our system based on multi bottom-up tree transducers, which can natively handle discontinuities, can avoid the large translation quality deterioration, achieve the best performance of all classical syntax-based translation systems, and close the gap to phrase-based and hierarchical systems that do not utilize syntax.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to express their gratitude to the reviewers for their helpful comments. Furthermore, we would like to thank ANDERS BJ\u00d6RKELUND and WOLFGANG SEEKER for their shared expertise on dependency parsing.The authors were financially supported by the German Research Foundation (DFG) grant MA 4959 \/ 1-1, which we gratefully acknowledge.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-xue-2014-joint","url":"https:\/\/aclanthology.org\/P14-1069.pdf","title":"Joint POS Tagging and Transition-based Constituent Parsing in Chinese with Non-local Features","abstract":"We propose three improvements to address the drawbacks of state-of-the-art transition-based constituent parsers. First, to resolve the error propagation problem of the traditional pipeline approach, we incorporate POS tagging into the syntactic parsing process. Second, to alleviate the negative influence of size differences among competing action sequences, we align parser states during beam-search decoding. Third, to enhance the power of parsing models, we enlarge the feature set with non-local features and semisupervised word cluster features. Experimental results show that these modifications improve parsing performance significantly. 
Evaluated on the Chinese Tree-Bank (CTB), our final performance reaches 86.3% (F1) when trained on CTB 5.1, and 87.1% when trained on CTB 6.0, and these results outperform all state-of-the-art parsers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank three anonymous reviewers for their cogent comments. This work is funded by the DAPRA via contract HR0011-11-C-0145 entitled \/Linguistic Resources for Multilingual Process-ing0. All opinions expressed here are those of the authors and do not necessarily reflect the views of DARPA.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tutubalina-2015-clustering","url":"https:\/\/aclanthology.org\/W15-0906.pdf","title":"Clustering-based Approach to Multiword Expression Extraction and Ranking","abstract":"We present a domain-independent clusteringbased approach for automatic extraction of multiword expressions (MWEs). The method combines statistical information from a general-purpose corpus and texts from Wikipedia articles. We incorporate association measures via dimensions of data points to cluster MWEs and then compute the ranking score for each MWE based on the closest exemplar assigned to a cluster. Evaluation results, achieved for two languages, show that a combination of association measures gives an improvement in the ranking of MWEs compared with simple counts of cooccurrence frequencies and purely statistical measures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by Russian Foundation for Basic Research (Project \u2116 13-07-00773).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moiron-tiedemann-2006-identifying","url":"https:\/\/aclanthology.org\/W06-2405.pdf","title":"Identifying idiomatic expressions using automatic word-alignment","abstract":"For NLP applications that require some sort of semantic interpretation it would be helpful to know what expressions exhibit an idiomatic meaning and what expressions exhibit a literal meaning. We investigate whether automatic word-alignment in existing parallel corpora facilitates the classification of candidate expressions along a continuum ranging from literal and transparent expressions to idiomatic and opaque expressions. Our method relies on two criteria: (i) meaning predictability that is measured as semantic entropy and (ii), the overlap between the meaning of an expression and the meaning of its component words. We approximate the mentioned overlap as the proportion of default alignments. We obtain a significant improvement over the baseline with both measures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out as part of the research programs for IMIX, financed by NWO and the IRME STEVIN project. 
We would also like to thank the three anonymous reviewers for their comments on an earlier version of this paper.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grice-savino-1997-pitch","url":"https:\/\/aclanthology.org\/W97-1205.pdf","title":"Can pitch accent type convey information status in yes-no questions?","abstract":"This paper analyses the intonation of polar questions extracted from a corpus of taskoriented dialogues in the Bari variety of Italian. They are classified using a system developed for similar dialogues in English where each question is regarded as an initiating move in a conversational game (Carletta et al 1995). It was found that there was no one-to-one correspondence between move-type and intonation pattern. An alternative classification was carried out taking into account information status, that is, whether or not the information requested by the speaker is recoverable from the previous dialogue context. It is found that the degree of confidence with which the speaker believes the information to be shared with the interlocutor is reflected in the choice of pitch accent and postfocal accentual pattern. Low confidence polar questions contain a L+H* focal pitch accent and allow for accents to follow it, whereas high confidence ones contain a H*+L focal pitch accent, followed by deaccenting or suppression of accents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Steve Isard and Jean Carletta for generously making time for discussion of the categories of move treated in this paper. Thank you also to Ralf Benzmtiller for his input to the discussion of moves in a multilingual context and to Elisabeth Maier for comments on the manuscript.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nie-etal-2021-like","url":"https:\/\/aclanthology.org\/2021.acl-long.134.pdf","title":"I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling","abstract":"To quantify how well natural language understanding models can capture consistency in a general conversation, we introduce the DialoguE COntradiction DEtection task (DE-CODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues. We show that: (i) our newly collected dataset is notably more effective at providing supervision for the dialogue contradiction detection task than existing NLI data including those aimed to cover the dialogue domain; (ii) Transformer models that explicitly hinge on utterance structures for dialogue contradiction detection are more robust and generalize well on both analysis and outof-distribution dialogues than standard (unstructured) Transformers. We also show that our best contradiction detection model correlates well with human judgments and further provide evidence for its usage in both automatically evaluating and improving the consistency of state-of-the-art generative chatbots.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers, and Jie Lei and Hao Tan for their helpful discussions. YN interned at Facebook. 
YN and MB were later sponsored by NSF-CAREER Award 1846185, DARPA MCS Grant N66001-19-2-4031, and DARPA YFA17-D17AP00022.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"canisius-van-den-bosch-2009-constraint","url":"https:\/\/aclanthology.org\/2009.eamt-1.25.pdf","title":"A Constraint Satisfaction Approach to Machine Translation","abstract":"Constraint satisfaction inference is presented as a generic, theory-neutral inference engine for machine translation. The approach enables the integration of many different solutions to aspects of the output space, including classification-based translation models that take source-side context into account, as well as stochastic components such as target language models. The approach is contrasted with a word-based SMT system using the same decoding algorithm, but optimising a different objective function. The incorporation of sourceside context models in our model filters out many irrelevant candidate translations, leading to superior translation scores.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This study was funded by the Netherlands Organisation for Scientific Research, as part of NWO IMIX and the Vici Implicit Linguistics project.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nivre-2011-invited","url":"https:\/\/aclanthology.org\/W11-4602.pdf","title":"Invited Paper: Bare-Bones Dependency Parsing -- A Case for Occam's Razor?","abstract":"If all we want from a syntactic parser is a dependency tree, what do we gain by first computing a different representation such as a phrase structure tree? The principle of parsimony suggests that a simpler model should be preferred over a more complex model, all other things being equal, and the simplest model is arguably one that maps a sentence directly to a dependency tree-a bare-bones dependency parser. In this paper, I characterize the parsing problem faced by such a system, survey the major parsing techniques currently in use, and begin to examine whether the simpler model can in fact rival the performance of more complex systems. Although the empirical evidence is still limited, I conclude that bare-bones dependency parsers fare well in terms of parsing accuracy and often excel in terms of efficiency.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heinzerling-etal-2017-trust","url":"https:\/\/aclanthology.org\/E17-1078.pdf","title":"Trust, but Verify! Better Entity Linking through Automatic Verification","abstract":"We introduce automatic verification as a post-processing step for entity linking (EL). The proposed method trusts EL system results collectively, by assuming entity mentions are mostly linked correctly, in order to create a semantic profile of the given text using geospatial and temporal information, as well as fine-grained entity types. This profile is then used to automatically verify each linked mention individually, i.e., to predict whether it has been linked correctly or not. 
Verification allows leveraging a rich set of global and pairwise features that would be prohibitively expensive for EL systems employing global inference. Evaluation shows consistent improvements across datasets and systems. In particular, when applied to state-of-the-art systems, our method yields an absolute improvement in linking performance of up to 1.7 F1 on AIDA\/CoNLL'03 and up to 2.4 F1 on the English TAC KBP 2015 TEDL dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Matthew Francis-Landau, Maria Pershina, as well as the TAC KBP 2015 organizers for providing system output, and the anonymous reviewers for providing helpful feedback. This work has been supported by the German Research Foundation as part of the Research Training Group \"Adaptive Preparation of Information from Heterogeneous Sources\" (AIPHES) under grant No. GRK 1994\/1.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gwinnup-etal-2018-afrl-ohio","url":"https:\/\/aclanthology.org\/W18-6440.pdf","title":"The AFRL-Ohio State WMT18 Multimodal System: Combining Visual with Traditional","abstract":"AFRL-Ohio State extends its usage of visual domain-driven machine translation for use as a peer with traditional machine translation systems. As a peer, it is enveloped into a system combination of neural and statistical MT systems to present a composite translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"noll-1980-natural","url":"https:\/\/aclanthology.org\/P80-1035.pdf","title":"Natural Language Interaction With Machines: A Passing Fad? Or the Way of the Future?","abstract":"People communicate primarily by two media: acoustic -- the spoken word; and visual -- the written word. It is therefore natural that people would expect their communications with machines to likewise use these two modes.
To a considerable extent, speech is probably the most natural of the natural-language modes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1980,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heidenreich-williams-2019-latent","url":"https:\/\/aclanthology.org\/D19-5523.pdf","title":"Latent semantic network induction in the context of linked example senses","abstract":"The Princeton WordNet is a powerful tool for studying language and developing natural language processing algorithms. With significant work developing it further, one line considers its extension through aligning its expert-annotated structure with other lexical resources. In contrast, this work explores a completely data-driven approach to network construction, forming a wordnet using the entirety of the open-source, noisy, user-annotated dictionary, Wiktionary. Comparing baselines to WordNet, we find compelling evidence that our network induction process constructs a network with useful semantic structure. 
With thousands of semantically-linked examples that demonstrate sense usage from basic lemmas to multiword expressions (MWEs), we believe this work motivates future research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kamalloo-etal-2021-far","url":"https:\/\/aclanthology.org\/2021.findings-acl.309.pdf","title":"Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax","abstract":"In Natural Language Processing (NLP), finding data augmentation techniques that can produce high-quality human-interpretable examples has always been challenging. Recently, leveraging kNN such that augmented examples are retrieved from large repositories of unlabelled sentences has made a step toward interpretable augmentation. Inspired by this paradigm, we introduce MiniMax-kNN, a sample efficient data augmentation strategy tailored for Knowledge Distillation (KD). We exploit a semi-supervised approach based on KD to train a model on augmented data. In contrast to existing kNN augmentation techniques that blindly incorporate all samples, our method dynamically selects a subset of augmented samples that maximizes KL-divergence between the teacher and student models. This step aims to extract the most efficient samples to ensure our augmented data covers regions in the input space with maximum loss value. We evaluated our technique on several text classification tasks and demonstrated that MiniMax-kNN consistently outperforms strong baselines. Our results show that MiniMax-kNN requires fewer augmented examples and less computation to achieve superior performance over the state-of-the-art kNN-based augmentation techniques.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank MindSpore -- a new deep learning framework -- for partially supporting this work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beltagy-etal-2019-combining","url":"https:\/\/aclanthology.org\/N19-1184.pdf","title":"Combining Distant and Direct Supervision for Neural Relation Extraction","abstract":"In relation extraction with distant supervision, noisy labels make it difficult to train quality models. Previous neural models addressed this problem using an attention mechanism that attends to sentences that are likely to express the relations. We improve such models by combining the distant supervision data with additional directly-supervised data, which we use as supervision for the attention weights. We find that joint training on both types of supervision leads to a better model because it improves the model's ability to identify noisy sentences. In addition, we find that sigmoidal attention weights with max pooling achieve better performance over the commonly used weighted average attention in this setup. Our proposed method achieves a new state-of-the-art result on the widely used FB-NYT dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"All experiments were performed on beaker.org. 
Computations on beaker.org were supported in part by credits from Google Cloud.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delisle-etal-1998-experiments-learning","url":"https:\/\/aclanthology.org\/P98-1048.pdf","title":"Experiments with Learning Parsing Heuristics","abstract":"Any large language processing software relies in its operation on heuristic decisions concerning the strategy of processing. These decisions are usually \"hard-wired\" into the software in the form of handcrafted heuristic rules, independent of the nature of the processed texts. We propose an alternative, adaptive approach in which machine learning techniques learn the rules from examples of sentences in each class. We have experimented with a variety of learning techniques on a representative instance of this problem within the realm of parsing. Our approach led to the discovery of new heuristics that perform significantly better than the current hand-crafted heuristic. We discuss the entire cycle of application of machine learning and suggest a methodology for the use of machine learning as a technique for the adaptive optimisation of language-processing software.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described here was supported by the Natural Sciences and Engineering Research Council of Canada.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roberts-2009-building","url":"https:\/\/aclanthology.org\/W09-2507.pdf","title":"Building an Annotated Textual Inference Corpus for Motion and Space","abstract":"This paper presents an approach for building a corpus for the domain of motion and spatial inference using a specific class of verbs. The approach creates a distribution of inference features that maximize the discriminatory power of a system trained on the corpus. The paper addresses the issue of using an existing textual inference system for generating the examples. This enables the corpus annotation method to assert whether more data is necessary.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2009-hybrid","url":"https:\/\/aclanthology.org\/W09-1204.pdf","title":"Hybrid Multilingual Parsing with HPSG for SRL","abstract":"In this paper we present our syntactic and semantic dependency parsing system submitted to both the closed and open challenges of the CoNLL 2009 Shared Task. The system extends the system of Zhang, Wang, & Uszkoreit (2008) in the multilingual direction, and achieves a 76.49 average macro F1 score on the closed joint task. Substantial improvements to the open SRL task have been observed that are attributed to the HPSG parses with handcrafted grammars. 
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poesio-etal-2019-crowdsourced","url":"https:\/\/aclanthology.org\/N19-1176.pdf","title":"A Crowdsourced Corpus of Multiple Judgments and Disagreement on Anaphoric Interpretation","abstract":"We present a corpus of anaphoric information (coreference) crowdsourced through a game-with-a-purpose. The corpus, containing annotations for about 108,000 markables, is one of the largest corpora for coreference for English, and one of the largest crowdsourced NLP corpora, but its main feature is the large number of judgments per markable: 20 on average, and over 2.2M in total. This characteristic makes the corpus a unique resource for the study of disagreements on anaphoric interpretation. A second distinctive feature is its rich annotation scheme, covering singletons, expletives, and split-antecedent plurals. Finally, the corpus also comes with labels inferred using a recently proposed probabilistic model of annotation for coreference. The labels are of high quality and make it possible to successfully train a state of the art coreference resolver, including training on singletons and non-referring expressions. The annotation model can also result in more than one label, or no label, being proposed for a markable, thus serving as a baseline method for automatically identifying ambiguous markables. A preliminary analysis of the results is presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the DALI project, funded by the European Research Council (ERC), Grant agreement ID: 695662.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gavankar-etal-2012-enriching","url":"https:\/\/aclanthology.org\/W12-5807.pdf","title":"Enriching An Academic knowledge base using Linked Open Data","abstract":"In this paper we present work done towards populating a domain ontology using a public knowledge base like DBpedia. Using an academic ontology as our target we identify mappings between a subset of its predicates and those in DBpedia and other linked datasets. In the semantic web context, ontology mapping allows linking of independently developed ontologies and inter-operation of heterogeneous resources. Linked open data is an initiative in this direction. We populate our ontology by querying the linked open datasets for extracting instances from these resources. We show how these along with semantic web standards and tools enable us to populate the academic ontology. 
Resulting instances could then be used as seeds in the spirit of the typical bootstrapping paradigm.","label_nlp4sg":1,"task":["Enriching An Academic knowledge base"],"method":["Linked Open Data"],"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"suster-etal-2017-short","url":"https:\/\/aclanthology.org\/W17-1610.pdf","title":"A Short Review of Ethical Challenges in Clinical Natural Language Processing","abstract":"Clinical NLP has immense potential in contributing to how clinical practice will be revolutionized by the advent of large scale processing of clinical records. However, this potential has remained largely untapped due to slow progress primarily caused by strict data access policies for researchers. In this paper, we discuss the concern for privacy and the measures it entails. We also suggest sources of less sensitive data. Finally, we draw attention to biases that can compromise the validity of empirical research and lead to socially harmful applications.","label_nlp4sg":1,"task":["Review of Ethical Challenges"],"method":["Privacy analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Madhumita and the anonymous reviewers for useful comments. Part of this research was carried out in the framework of the Accumulate IWT SBO project, funded by the government agency for Innovation by Science and Technology (IWT).","year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bodapati-etal-2019-robustness","url":"https:\/\/aclanthology.org\/D19-5531.pdf","title":"Robustness to Capitalization Errors in Named Entity Recognition","abstract":"Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise. Existing methods to improve robustness to the noise completely discard given orthographic information, which significantly degrades their performance on well-formed text. We propose a simple alternative approach based on data augmentation, which allows the model to learn to utilize or ignore orthographic information depending on its usefulness in the context. It achieves competitive robustness to capitalization errors while making negligible compromise to its performance on well-formed text and significantly improving generalization power on noisy user-generated text. 
Our experiments clearly and consistently validate our claim across different types of machine learning models, languages, and dataset sizes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sanfilippo-etal-1992-translation","url":"https:\/\/aclanthology.org\/1992.tmi-1.1.pdf","title":"Translation equivalence and lexicalization in the ACQUILEX LKB","abstract":"We propose a strongly lexicalist treatment of translation equivalence where mismatches due to diverging lexicalization patterns are dealt with by means of translation links which capture crosslinguistic generalizations across sets of semantically related lexical items. We show how this treatment can be developed within a unification-based, multilingual lexical knowledge base which is integrated with facilities for semi-automatic development of bilingual lexicons, and describe an approach to machine translation where generation difficulties arising from the lexicalist approach to complex transfer can be solved without making special assumptions about phrasal transfer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sreedhar-etal-2020-learning","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.221.pdf","title":"Learning Improvised Chatbots from Adversarial Modifications of Natural Language Feedback","abstract":"The ubiquitous nature of chatbots and their interaction with users generate an enormous amount of data. Can we improve chatbots using this data? A self-feeding chatbot improves itself by asking natural language feedback when a user is dissatisfied with its response and uses this feedback as an additional training sample. However, user feedback in most cases contains extraneous sequences hindering their usefulness as a training sample. In this work, we propose a generative adversarial model that converts noisy feedback into a plausible natural response in a conversation. The generator's goal is to convert the feedback into a response that answers the user's previous utterance and to fool the discriminator which distinguishes feedback from natural responses. We show that augmenting original training data with these modified feedback responses improves the original chatbot performance from 69.94% to 75.96% in ranking correct responses on the PERSONACHAT dataset, a large improvement given that the original model is already trained on 131k samples. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Yue Dong for her helpful discussions during the course of this project. We also thank Sandeep Subramanian for his insightful guidance at a crucial stage of this work. This research was enabled in part by computations support provided by Compute Canada (www.computecanada.ca). 
The last author is supported by the NSERC Discovery Grant on Robust conversational models for accessing the world's knowledge.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johnson-2012-conditional","url":"https:\/\/aclanthology.org\/2012.amta-papers.28.pdf","title":"Conditional Significance Pruning: Discarding More of Huge Phrase Tables","abstract":"The technique of pruning phrase tables that are used for statistical machine translation (SMT) can achieve substantial reductions in bulk and improve translation quality, especially for very large corpora such as the Giga-FrEn. This can be further improved by conditioning each significance test on other phrase pair co-occurrence counts, resulting in an additional reduction in size and increase in BLEU score. A series of experiments using Moses and the WMT11 corpora for French to English has been performed to quantify the improvement. By adhering strictly to the recommendations for the WMT11 baseline system, a strong reproducible research baseline was employed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"korkmaz-ucoluk-1997-method","url":"https:\/\/aclanthology.org\/W97-1006.pdf","title":"Method for Improving Automatic Word Categorization","abstract":"This paper presents a new approach to automatic word categorization which improves both the efficiency of the algorithm and the quality of the formed clusters. The unigram and the bigram statistics of a corpus of about two million words are used with an efficient distance function to measure the similarities of words, and a greedy algorithm to put the words into clusters. The notions of fuzzy clustering, like cluster prototypes and degrees of membership, are used to form the clusters. The algorithm is unsupervised and the number of clusters is determined at run-time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-etal-2021-sensei","url":"https:\/\/aclanthology.org\/2021.findings-acl.87.pdf","title":"Sensei: Self-Supervised Sensor Name Segmentation","abstract":"Sensor names as alphanumeric strings typically encode their key contextual information such as their function or physical location. We focus here on sensors used in smart building applications. In these applications, sensor names are curated in a building vendor-specific manner using different structures and esoteric vocabularies. Tremendous manual effort is needed to annotate sensor nodes for each building or even to just segment these sensor names into meaningful chunks for intelligent operation of buildings. We propose here a fully automated self-supervised framework, Sensei, that can learn to segment sensor names without any human annotation. We employ a neural language model to capture the underlying structure in sensor names and then induce self-supervision based on information from the language model to build the segmentation model. 
Extensive experiments on five real-world buildings comprising thousands of sensors demonstrate the superiority of Sensei over baseline methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank reviewers for the anonymous comments and suggestions to improve this work. This work was supported in part by National Science Foundation 1940291 and 2040727. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"viethen-dale-2008-generating","url":"https:\/\/aclanthology.org\/U08-1020.pdf","title":"Generating Relational References: What Makes a Difference?","abstract":"When we describe an object in order to enable a listener to identify it, we often do so by indicating the location of that object with respect to other objects in a scene. This requires the use of a relational referring expression; while these are very common, they are relatively unexplored in work on referring expression generation. In this paper, we describe an experiment in which we gathered data on how humans use relational referring expressions in simple scenes, with the aim of identifying the factors that make a difference to the ways in which humans construct referring expressions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trandabat-husarciuc-2008-romanian","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/715_paper.pdf","title":"Romanian Semantic Role Resource","abstract":"Semantic databases are a stable starting point in developing knowledge based systems. Since creating language resources demands many temporal, financial and human resources, a possible solution could be the import of a resource annotation from one language to another. This paper presents the creation of a semantic role database for Romanian, starting from the English FrameNet semantic resource. The intuition behind the importing program is that most of the frames defined in the English FN are likely to be valid cross-lingual, since semantic frames express conceptual structures, language independent at the deep structure level. The surface realization, the surface level, is realized according to each language syntactic constraints. In the paper we present the advantages of choosing to import the English FrameNet annotation, instead of annotating a new corpus. We also take into account the mismatches encountered in the validation process. The rules created to manage particular situations are used to improve the import program. 
We believe the information and argumentation in this paper could be of interest for those who wish to develop FrameNet-like systems for other languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhou-etal-2006-chinese","url":"https:\/\/aclanthology.org\/W06-0140.pdf","title":"Chinese Named Entity Recognition with a Multi-Phase Model","abstract":"Chinese named entity recognition is one of the difficult and challenging tasks of NLP. In this paper, we present a Chinese named entity recognition system using a multi-phase model. First, we segment the text with a character-level CRF model. Then we apply three word-level CRF models to label person names, location names, and organization names in the segmentation results, respectively. Our system participated in the NER tests on open and closed tracks of Microsoft Research (MSRA). The actual evaluation results show that our system performs well on both the open and closed tracks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"basu-roy-chowdhury-chaturvedi-2021-commonsense","url":"https:\/\/aclanthology.org\/2021.insights-1.2.pdf","title":"Does Commonsense help in detecting Sarcasm?","abstract":"Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums. It is a challenging task requiring a deep understanding of language, context, and world knowledge. In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection. For this, we incorporate commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input. Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model. We perform an exhaustive set of experiments to analyze where commonsense support adds value and where it hurts classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chang-etal-2016-linguistic","url":"https:\/\/aclanthology.org\/O16-2002.pdf","title":"Linguistic Template Extraction for Recognizing Reader-Emotion","abstract":"Previous studies on emotion classification mainly focus on the emotional state of the writer. By contrast, our research emphasizes emotion detection from the readers' perspective. The classification of documents into reader-emotion categories can be applied in several ways, and one of the applications is to retain only the documents that trigger desired emotions to enable users to retrieve documents that contain relevant contents and at the same time instill proper emotions. However, current information retrieval (IR) systems lack the ability to discern emotions within texts, and the detection of reader's emotion has yet to achieve a comparable performance. 
Moreover, previous machine learning-based approaches generally use statistical models that are not in a human-readable form. Thus, it is difficult to pinpoint the reasons for recognition failures and understand the types of emotions that the articles inspired in their readers. In this paper, we propose a flexible emotion template-based approach (TBA) for reader-emotion detection that simulates this process in a human-perceptive manner. TBA is a highly automated process that incorporates various knowledge sources to learn emotion templates from raw text that characterize an emotion and are comprehensible to humans. Generated templates are adopted to predict reader's emotion through an alignment-based matching algorithm that allows an emotion template to be partially matched through a statistical scoring scheme. Experimental results demonstrate that our approach can effectively detect reader's emotions by exploiting the syntactic structures and semantic associations in the context, while","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Ministry of Science and Technology of Taiwan under grant MOST 103-3111-Y-001-027.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2010-high","url":"https:\/\/aclanthology.org\/W10-4135.pdf","title":"High OOV-Recall Chinese Word Segmenter","abstract":"For the competition on Chinese word segmentation held at the first CIPS-SIGHAN joint conference, we applied a subword-based word segmenter using CRFs and extended the segmenter with OOV words recognized by Accessor Variety. Moreover, we proposed several post-processing rules to improve the performance. 
Our system achieved promising OOV recall among all the participants.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Science Foundation of China (60873091).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sevilla-2020-explaining","url":"https:\/\/aclanthology.org\/2020.nl4xai-1.8.pdf","title":"Explaining data using causal Bayesian networks","abstract":"We introduce Causal Bayesian Networks as a formalism for representing and explaining probabilistic causal relations, review the state of the art on learning Causal Bayesian Networks and suggest and illustrate a research avenue for studying pairwise identification of causal relations inspired by graphical causality criteria.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I thank my supervisors Ehud Reiter and Nava Tintarev for thorough discussion and support.I also thank the anonymous reviewers for the NL4XAI for kindly providing constructive feedback to improve the paper.This research has been supported by the NL4XAI project, which is funded under the European Union's Horizon 2020 programme, grant agreement 860621.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arumae-liu-2018-reinforced","url":"https:\/\/aclanthology.org\/P18-3015.pdf","title":"Reinforced Extractive Summarization with Question-Focused Rewards","abstract":"We investigate a new training paradigm for extractive summarization. Traditionally, human abstracts are used to derive goldstandard labels for extraction units. However, the labels are often inaccurate, because human abstracts and source documents cannot be easily aligned at the word level. In this paper we convert human abstracts to a set of Cloze-style comprehension questions. System summaries are encouraged to preserve salient source content useful for answering questions and share common words with the abstracts. We use reinforcement learning to explore the space of possible extractive summaries and introduce a question-focused reward function to promote concise, fluent, and informative summaries. Our experiments show that the proposed method is effective. It surpasses state-of-the-art systems on the standard summarization dataset. Source Document The first doses of the Ebola vaccine were on a commercial flight to West Africa and were expected to arrive on Friday, according to a spokesperson from GlaxoSmithKline (GSK) one of the companies that has created the vaccine with the National Institutes of Health. Another vaccine from Merck and NewLink will also be tested. \"Shipping the vaccine today is a major achievement and shows that we remain on track with the accelerated development of our candidate Ebola vaccine,\" Dr. Moncef Slaoui, chairman of global vaccines at GSK said in a company release. (Rest omitted.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their valuable suggestions. This work is in part supported by an unrestricted gift from Bosch Research. 
Kristjan Arumae gratefully acknowledges a travel grant provided by the National Science Foundation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ziak-kern-2013-knce2013","url":"https:\/\/aclanthology.org\/S13-1019.pdf","title":"KnCe2013-CORE:Semantic Text Similarity by use of Knowledge Bases","abstract":"In this paper we describe KnCe2013-CORE, a system to compute the semantic similarity of two short text snippets. The system computes a number of features which are gathered from different knowledge bases, namely WordNet, Wikipedia and Wiktionary. The similarity scores derived from these features are then fed into several multilayer perceptron neuronal networks. Depending on the size of the text snippets different parameters for the neural networks are used. The final output of the neural networks is compared to human judged data. In the evaluation our system performed sufficiently well for text snippets of equal length, but the performance dropped considerably once the pairs of text snippets differ in size.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The Know-Center is funded within the Austrian COMET Program -Competence Centers for Excellent Technologies -under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"duma-menzel-2017-uhh-submission","url":"https:\/\/aclanthology.org\/W17-4766.pdf","title":"UHH Submission to the WMT17 Metrics Shared Task","abstract":"In this paper the UHH submission to the WMT17 Metrics Shared Task is presented, which is based on sequence and tree kernel functions applied to the reference and candidate translations. In addition we also explore the effect of applying the kernel functions on the source sentence and a back-translation of the MT output, but also on the pair composed of the candidate translation and a pseudo-reference of the source segment. The newly proposed metric was evaluated using the data from WMT16, with the results demonstrating a high correlation with human judgments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mirza-tonelli-2016-contribution","url":"https:\/\/aclanthology.org\/C16-1265.pdf","title":"On the contribution of word embeddings to temporal relation classification","abstract":"Temporal relation classification is a challenging task, especially when there are no explicit markers to characterise the relation between temporal entities. This occurs frequently in intersentential relations, whose entities are not connected via direct syntactic relations making classification even more difficult. In these cases, resorting to features that focus on the semantic content of the event words may be very beneficial for inferring implicit relations. 
Specifically, while morpho-syntactic and context features are considered sufficient for classifying event-timex pairs, we believe that exploiting distributional semantic information about event words can benefit supervised classification of other types of pairs. In this work, we assess the impact of using word embeddings as features for event words in classifying temporal relations of event-event pairs and event-DCT (document creation time) pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research leading to this paper was partially supported by the European Union's 7th Framework Programme via the NewsReader Project (ICT-316404) and the National University of Singapore. We thank Ilija Ilievski, Min-Yen Kan and Hwee Tou Ng, who provided insight and expertise that greatly assisted the research.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"claveau-sebillot-2004-efficiency","url":"https:\/\/aclanthology.org\/C04-1038.pdf","title":"From efficiency to portability: acquisition of semantic relations by semi-supervised machine learning","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kipper-etal-2004-using","url":"https:\/\/aclanthology.org\/W04-2604.pdf","title":"Using prepositions to extend a verb lexicon","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"thayaparan-etal-2021-textgraphs","url":"https:\/\/aclanthology.org\/2021.textgraphs-1.17.pdf","title":"TextGraphs 2021 Shared Task on Multi-Hop Inference for Explanation Regeneration","abstract":"The Shared Task on Multi-Hop Inference for Explanation Regeneration asks participants to compose large multi-hop explanations to questions by assembling large chains of facts from a supporting knowledge base. While previous editions of this shared task aimed to evaluate explanatory completeness-finding a set of facts that form a complete inference chain, without gaps, to arrive from question to correct answer, this 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations. To this end, this edition of the shared task makes use of a large set of approximately 250k manual explanatory relevancy ratings that augment the 2020 shared task data. In this summary paper, we describe the details of the explanation regeneration task, the evaluation data, and the participating systems. Additionally, we perform a detailed analysis of participating systems, evaluating various aspects involved in the multi-hop inference process. 
The best performing system achieved an NDCG of 0.82 on this challenging task, substantially increasing performance over baseline methods by 32%, while also leaving significant room for future improvement.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Peter Jansen's work on the shared task was supported by National Science Foundation (NSF","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gordon-swanson-2007-generalizing","url":"https:\/\/aclanthology.org\/P07-1025.pdf","title":"Generalizing semantic role annotations across syntactically similar verbs","abstract":"Large corpora of parsed sentences with semantic role labels (e.g. PropBank) provide training data for use in the creation of high-performance automatic semantic role labeling systems. Despite the size of these corpora, individual verbs (or rolesets) often have only a handful of instances in these corpora, and only a fraction of English verbs have even a single annotation. In this paper, we describe an approach for dealing with this sparse data problem, enabling accurate semantic role labeling for novel verbs (rolesets) with only a single training example. Our approach involves the identification of syntactically similar verbs found in Prop-Bank, the alignment of arguments in their corresponding rolesets, and the use of their corresponding annotations in Prop-Bank as surrogate training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The project or effort depicted was or is sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM), and that the content or information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mosallanezhad-etal-2019-deep","url":"https:\/\/aclanthology.org\/D19-1240.pdf","title":"Deep Reinforcement Learning-based Text Anonymization against Private-Attribute Inference","abstract":"User-generated textual data is rich in content and has been used in many user behavioral modeling tasks. However, it could also leak user private-attribute information that they may not want to disclose such as age and location. User's privacy concerns mandate data publishers to protect privacy. One effective way is to anonymize the textual data. In this paper, we study the problem of textual data anonymization and propose a novel Reinforcement Learning-based Text Anonymizor, RLTA, which addresses the problem of private-attribute leakage while preserving the utility of textual data. Our approach first extracts a latent representation of the original text w.r.t. a given task, then leverages deep reinforcement learning to automatically learn an optimal strategy for manipulating text representations w.r.t. the received privacy and utility feedback. Experiments show the effectiveness of this approach in terms of preserving both privacy and utility.","label_nlp4sg":1,"task":["Text Anonymization"],"method":["Deep Reinforcement Learning"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Jundong Li for his help throughout the paper. 
This material is based upon the work supported, in part, by NSF 1614576, ARO W911NF-15-1-0328 and ONR N00014-17-1-2605.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"risch-etal-2021-toxic","url":"https:\/\/aclanthology.org\/2021.woah-1.17.pdf","title":"Data Integration for Toxic Comment Classification: Making More Than 40 Datasets Easily Accessible in One Unified Format","abstract":"With the rise of research on toxic comment classification, more and more annotated datasets have been released. The wide variety of the task (different languages, different labeling processes and schemes) has led to a large amount of heterogeneous datasets that can be used for training and testing very specific settings. Despite recent efforts to create web pages that provide an overview, most publications still use only a single dataset. They are not stored in one central database, they come in many different data formats and it is difficult to interpret their class labels and how to reuse these labels in other projects.","label_nlp4sg":1,"task":["Toxic Comment Classification"],"method":["Data Integration"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"webster-1994-building","url":"https:\/\/aclanthology.org\/C94-2111.pdf","title":"Building a Windows-Based Bilingual Functional Semantic Processor","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhan-etal-2022-mitigating","url":"https:\/\/aclanthology.org\/2022.findings-acl.175.pdf","title":"Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training","abstract":"Neural networks are widely used in various NLP tasks for their remarkable performance. However, the complexity makes them difficult to interpret, i.e., they are not guaranteed right for the right reason. Besides the complexity, we reveal that the model pathology-the inconsistency between word saliency and model confidence, further hurts the interpretability. We show that the pathological inconsistency is caused by the representation collapse issue, which means that the representation of the sentences with tokens in different saliency reduced is somehow collapsed, and thus the important words cannot be distinguished from unimportant words in terms of model confidence changing. In this paper, to mitigate the pathology and obtain more interpretable models, we propose Pathological Contrastive Training (PCT) framework, which adopts contrastive learning and saliency-based samples augmentation to calibrate the sentences representation. Combined with qualitative analysis, we also conduct extensive quantitative experiments and measure the interpretability with eight reasonable metrics. Experiments show that our method can mitigate the model pathology and generate more interpretable models while keeping the model performance. 
Ablation study also shows the effectiveness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their insightful comments. This research was supported by National Research and Development Program of China (No.2019YFB1005200).","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patrick-etal-2002-slinerc","url":"https:\/\/aclanthology.org\/W02-2022.pdf","title":"SLINERC: The Sydney Language-Independent Named Entity Recogniser and Classifier","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2011-instance","url":"https:\/\/aclanthology.org\/W11-1724.pdf","title":"Instance Level Transfer Learning for Cross Lingual Opinion Analysis","abstract":"This paper presents two instance-level transfer learning based algorithms for cross lingual opinion analysis by transferring useful translated opinion examples from other languages as the supplementary training data for improving the opinion classifier in target language. Starting from the union of small training data in target language and large translated examples in other languages, the Transfer AdaBoost algorithm is applied to iteratively reduce the influence of low quality translated examples. Alternatively, starting only from the training data in target language, the Transfer Self-training algorithm is designed to iteratively select high quality translated examples to enrich the training data set. These two algorithms are applied to sentence-and document-level cross lingual opinion analysis tasks, respectively. The evaluations show that these algorithms effectively improve the opinion analysis by exploiting small target language training data and large cross lingual training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aggarwal-etal-2015-non","url":"https:\/\/aclanthology.org\/S15-1010.pdf","title":"Non-Orthogonal Explicit Semantic Analysis","abstract":"Explicit Semantic Analysis (ESA) utilizes the Wikipedia knowledge base to represent the semantics of a word by a vector where every dimension refers to an explicitly defined concept like a Wikipedia article. ESA inherently assumes that Wikipedia concepts are orthogonal to each other, therefore, it considers that two words are related only if they co-occur in the same articles. However, two words can be related to each other even if they appear separately in related articles rather than cooccurring in the same articles. This leads to a need for extending the ESA model to consider the relatedness between the explicit concepts (i.e. Wikipedia articles in Wikipedia based implementation) for computing textual relatedness. In this paper, we present Non-Orthogonal ESA (NESA) which represents more fine grained semantics of a word as a vector of explicit concept dimensions, where every such concept dimension further constitutes a semantic vector built in another vector space. 
Thus, NESA considers the concept correlations in computing the relatedness between two words. We explore different approaches to compute the concept correlation weights, and compare these approaches with other existing methods. Furthermore, we evaluate our model NESA on several word relatedness benchmarks showing that it outperforms the state of the art methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI\/12\/RC\/2289 (INSIGHT).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-etal-2021-lv","url":"https:\/\/aclanthology.org\/2021.findings-acl.2.pdf","title":"LV-BERT: Exploiting Layer Variety for BERT","abstract":"Modern pre-trained language models are mostly built upon backbones stacking selfattention and feed-forward layers in an interleaved order. In this paper, beyond this stereotyped layer pattern, we aim to improve pre-trained models by exploiting layer variety from two aspects: the layer type set and the layer order. Specifically, besides the original self-attention and feed-forward layers, we introduce convolution into the layer type set, which is experimentally found beneficial to pre-trained models. Furthermore, beyond the original interleaved order, we explore more layer orders to discover more powerful architectures. However, the introduced layer variety leads to a large architecture space of more than billions of candidates, while training a single candidate model from scratch already requires huge computation cost, making it not affordable to search such a space by directly training large amounts of candidate models. To solve this problem, we first pre-train a supernet from which the weights of all candidate models can be inherited, and then adopt an evolutionary algorithm guided by pre-training accuracy to find the optimal architecture. Extensive experiments show that LV-BERT model obtained by our method outperforms BERT and its variants on various downstream tasks. For example, LV-BERT-small achieves 78.8 on the GLUE testing set, 1.8 higher than the strong baseline ELECTRA-small. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their insightful comments and suggestions. This research\/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-100E\/-2019-035). Jiashi Feng was partially supported by MOE2017-T2-2-151, NUS ECRA FY17 P08 and CRP20-2017-0006. The authors also thank Quanhong Fu and Jian Liang for the help to improve the technical writing aspect of this paper. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https:\/\/www.nscc.sg). 
Weihao Yu would like to thank TPU Research Cloud (TRC)","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kaushik-etal-2021-efficacy","url":"https:\/\/aclanthology.org\/2021.acl-long.517.pdf","title":"On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study","abstract":"In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions. Researchers hope that models trained on these more challenging datasets will rely less on superficial patterns, and thus be less brittle. However, despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models. In this paper, we conduct a large-scale controlled study focused on question answering, assigning workers at random to compose questions either (i) adversarially (with a model in the loop); or (ii) in the standard fashion (without a model). Across a variety of models and datasets, we find that models trained on adversarial data usually perform better on other adversarial datasets but worse on a diverse collection of out-of-domain evaluation sets. Finally, we provide a qualitative analysis of adversarial (vs standard) data, identifying key differences and offering guidance for future research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Max Bartolo, Robin Jia, Tanya Marwah, Sanket Vaibhav Mehta, Sina Fazelpour, Kundan Krishna, Shantanu Gupta, Simran Kaur, and Aishwarya Kamath for their valuable feedback on the crowdsourcing platform and the paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"resnik-etal-2015-university","url":"https:\/\/aclanthology.org\/W15-1207.pdf","title":"The University of Maryland CLPsych 2015 Shared Task System","abstract":"The 2015 ACL Workshop on Computational Linguistics and Clinical Psychology included a shared task focusing on classification of a sample of Twitter users according to three mental health categories: users who have self-reported a diagnosis of depression, users who have self-reported a diagnosis of post-traumatic stress disorder (PTSD), and control users who have done neither Coppersmith et al., 2014) . Like other shared tasks, the goal here was to assess the state of the art with regard to a challenging problem, to advance that state of the art, and to bring together and hopefully expand the community of researchers interested in solving it.","label_nlp4sg":1,"task":["Mental health classification"],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We are grateful to Rebecca Resnik for contributing her comments and clinical expertise, and we thank Glen Coppersmith, Mark Dredze, Jamie Pennebaker, and their colleagues for kindly sharing data and resources. This work was supported in part by NSF awards 1320538, 1018625, and 1211153. 
Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rothe-schutze-2017-autoextend","url":"https:\/\/aclanthology.org\/J17-3004.pdf","title":"AutoExtend: Combining Word Embeddings with Semantic Resources","abstract":"We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by Deutsche Forschungsgemeinschaft (DFG SCHU 2246\/2-2). We are grateful to Christiane Fellbaum for discussions leading up to this article and to the anonymous reviewers for their comments.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"reddy-etal-2019-coqa","url":"https:\/\/aclanthology.org\/Q19-1016.pdf","title":"CoQA: A Conversational Question Answering Challenge","abstract":"Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https:\/\/stanfordnlp.github.io\/coqa.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank MTurk workers, especially the Master Chatters and the MTC forum members, for contributing to the creation of CoQA, for giving feedback on various pilot interfaces, and for promoting our hits enthusiastically on various forums. CoQA has been made possible with financial support from the Facebook ParlAI and the Amazon Research awards, and gift funding from Toyota Research Institute. Danqi is supported by a Facebook PhD fellowship.
We also would like to thank the members of the Stanford NLP group for critical feedback on the interface and experiments. We especially thank Drew Arad Hudson for participating in initial discussions, and Matthew Lamm for proof-reading the paper. We also thank the VQA team and Spandana Gella for their help in generating Figure 3. ","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"quirk-etal-2012-msr","url":"https:\/\/aclanthology.org\/N12-3006.pdf","title":"MSR SPLAT, a language analysis toolkit","abstract":"We describe MSR SPLAT, a toolkit for language analysis that allows easy access to the linguistic analysis tools produced by the NLP group at Microsoft Research. The tools include both traditional linguistic analysis tools such as part-of-speech taggers, constituency and dependency parsers, and more recent developments such as sentiment detection and linguistically valid morphology. As we expand the tools we develop for our own research, the set of tools available in MSR SPLAT will be extended. The toolkit is accessible as a web service, which can be used from a broad set of programming languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"preiss-etal-2009-hmms","url":"https:\/\/aclanthology.org\/W09-4204.pdf","title":"HMMs, GRs, and N-Grams as Lexical Substitution Techniques -- Are They Portable to Other Languages?","abstract":"We introduce a number of novel techniques for lexical substitution, including an application of the Forward-Backward algorithm, a grammatical relation based similarity measure, and a modified form of n-gram matching. We test these techniques on the Semeval-2007 lexical substitution data [McCarthy and Navigli, 2007], to demonstrate their competitive performance. We create a similar (small scale) dataset for Czech, and our evaluation demonstrates language independence of the techniques.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"senda-etal-2004-support","url":"https:\/\/aclanthology.org\/C04-1023.pdf","title":"A Support System for Revising Titles to Stimulate the Lay Reader's Interest in Technical Achievements","abstract":"When we write a report or an explanation on a newly-developed technology for readers including laypersons, it is very important to compose a title that can stimulate their interest in the technology. However, it is difficult for inexperienced authors to come up with an appealing title. In this research, we developed a support system for revising titles. We call it \"title revision wizard\". The wizard provides guidance on revising a draft title to compose a title meeting three key points, and support tools for coming up with and elaborating on comprehensible or appealing phrases. In order to test the effect of our title revision wizard, we conducted a questionnaire survey on the effect of the titles with or without using the wizard on the interest of lay readers.
The survey showed that the wizard is effective and helpful for the authors who cannot compose appealing titles for lay readers by themselves.","label_nlp4sg":1,"task":["Revising Titles"],"method":["Support System"],"goal1":"Quality Education","goal2":"Decent Work and Economic Growth","goal3":null,"acknowledgments":"The research fields of subjects were physics, electrical engineering, material science and meteorology. There was no intermission between Ex 1 and 2.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sabo-etal-2021-revisiting","url":"https:\/\/aclanthology.org\/2021.tacl-1.42.pdf","title":"Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes","abstract":"We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories (none-of-the-above, [NOTA]), we first revisit the recent popular dataset structure for FSL, pointing out its unrealistic data distribution. To remedy this, we propose a novel methodology for deriving more realistic few-shot test data from available datasets for supervised RC, and apply it to the TACRED dataset. This yields a new challenging benchmark for FSL-RC, on which state of the art models show poor performance. Next, we analyze classification schemes within the popular embedding-based nearest-neighbor approach for FSL, with respect to constraints they impose on the embedding space. Triggered by this analysis, we propose a novel classification scheme in which the NOTA category is represented as learned vectors, shown empirically to be an appealing option for FSL.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hagege-roux-2002-robust","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/226.pdf","title":"A Robust and Flexible Platform for Dependency Extraction","abstract":"This paper describes a linguistic platform, Xerox Incremental Parser (XIP hereafter), to develop robust grammars. Most robust parsers usually impose one specific strategy (constraint-based or incremental) in the grammar writing, whereas XIP allows mixing both types of analysis. The first part introduces XIP and its main functionalities. The second part illustrates how a linguist can benefit from merging different strategies in grammar writing. 
Finally, a first evaluation of different grammars is given.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-zhong-1999-new","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.76.pdf","title":"A new way to conceptual meaning representation","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sun-lepage-2012-word","url":"https:\/\/aclanthology.org\/Y12-1038.pdf","title":"Can Word Segmentation be Considered Harmful for Statistical Machine Translation Tasks between Japanese and Chinese?","abstract":"Unlike most Western languages, there are no typographic boundaries between words in written Japanese and Chinese. Word segmentation is thus normally adopted as an initial step in most natural language processing tasks for these Asian languages. Although word segmentation techniques have improved greatly both theoretically and practically, there still remain some problems to be tackled. In this paper, we present an effective approach to extracting Chinese and Japanese phrases without conducting word segmentation beforehand, using a sampling-based multilingual alignment method. According to our experiments, it is also feasible to train a statistical machine translation system on a small Japanese-Chinese training corpus without performing word segmentation beforehand.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been supported in part by the Kitakyushu Foundation for the Advancement of Industry, Science and Technology (FAIS) with Foreign Joint Project funds.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guo-diab-2010-combining","url":"https:\/\/aclanthology.org\/P10-1156.pdf","title":"Combining Orthogonal Monolingual and Multilingual Sources of Evidence for All Words WSD","abstract":"Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present a system that combines evidence from a monolingual WSD system together with that from a multilingual WSD system to yield state-of-the-art performance on standard All-Words data sets. The monolingual system is based on a modification of the graph-based state-of-the-art algorithm In-Degree. The multilingual system is an improvement over an All-Words unsupervised approach, SALAAM. SALAAM exploits multilingual evidence as a means of disambiguation. In this paper, we present modifications to both of the original approaches and then their combination.
We finally report the highest results obtained to date on the SENSEVAL 2 standard data set using an unsupervised method, achieving an overall F measure of 64.58 with a voting scheme.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"paun-etal-2018-comparing","url":"https:\/\/aclanthology.org\/Q18-1040.pdf","title":"Comparing Bayesian Models of Annotation","abstract":"The analysis of crowdsourced annotations in natural language processing is concerned with identifying (1) gold standard labels, (2) annotator accuracies and biases, and (3) item difficulties and error patterns. Traditionally, majority voting was used for (1), and coefficients of agreement for (2) and (3). Lately, model-based analysis of corpus annotations has proven better at all three tasks. But there has been relatively little work comparing them on the same datasets. This paper aims to fill this gap by analyzing six models of annotation, covering different approaches to annotator ability, item difficulty, and parameter pooling (tying) across annotators and items. We evaluate these models along four aspects: comparison to gold labels, predictive accuracy for new annotations, annotator characterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators. We conclude with guidelines for model selection, application, and implementation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Paun, Chamberlain, and Poesio are supported by the DALI project, funded by ERC. Carpenter is partly supported by the U.S. National Science Foundation and the U.S. Office of Naval Research.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schwenk-li-2018-corpus","url":"https:\/\/aclanthology.org\/L18-1560.pdf","title":"A Corpus for Multilingual Document Classification in Eight Languages","abstract":"Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only a few languages (English, German, French and Spanish) and almost all published works focus on the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of the multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings respectively.
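The annotation-models abstract above contrasts model-based aggregation with the traditional majority-voting baseline; for concreteness, that baseline fits in a few lines (a sketch with invented toy labels, not any of the paper's six Bayesian models):

```python
from collections import Counter

def majority_vote(annotations):
    """Pick the most frequent label per item; ties broken by first-seen order.

    annotations: dict item_id -> list of labels from different annotators.
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

votes = {"q1": ["A", "A", "B"], "q2": ["B", "B", "B"]}
print(majority_vote(votes))  # {'q1': 'A', 'q2': 'B'}
```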
Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope by these means to foster research in this important area.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nguyen-etal-2014-robust","url":"https:\/\/aclanthology.org\/P14-1076.pdf","title":"Robust Domain Adaptation for Relation Extraction via Clustering Consistency","abstract":"We propose a two-phase framework to adapt existing relation extraction classifiers to extract relations for new target domains. We address two challenges: negative transfer when knowledge in source domains is used without considering the differences in relation distributions; and lack of adequate labeled samples for rarer relations in the new domain, due to a small labeled data set and imbalanced relation distributions. Our framework leverages on both labeled and unlabeled data in the target domain. First, we determine the relevance of each source domain to the target domain for each relation type, using the consistency between the clustering given by the target domain labels and the clustering given by the predictors trained for the source domain. To overcome the lack of labeled samples for rarer relations, these clusterings operate on both the labeled and unlabeled data in the target domain. Second, we trade off between using relevance-weighted source-domain predictors and the labeled target data. Again, to overcome the imbalanced distribution, the source-domain predictors operate on the unlabeled target data. Our method outperforms numerous baselines and a weakly-supervised relation extraction method on ACE 2004 and YAGO.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by DSO grant DSOCL10021. We thank Jiang for providing the source code for feature extraction and Bollegala for sharing his YAGO dataset.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zweig-burges-2012-challenge","url":"https:\/\/aclanthology.org\/W12-2704.pdf","title":"A Challenge Set for Advancing Language Modeling","abstract":"In this paper, we describe a new, publicly available corpus intended to stimulate research into language modeling techniques which are sensitive to overall sentence coherence. The task uses the Scholastic Aptitude Test's sentence completion format. The test set consists of 1040 sentences, each of which is missing a content word. The goal is to select the correct replacement from amongst five alternates. In general, all of the options are syntactically valid, and reasonable with respect to local N-gram statistics. The set was generated by using an N-gram language model to generate a long list of likely words, given the immediate context. These options were then hand-groomed to identify four decoys which are globally incoherent, yet syntactically correct. To ensure the right to public distribution, all the data is derived from out-of-copyright materials from Project Gutenberg.
The test sentences were derived from five of Conan Doyle's Sherlock Holmes novels, and we provide a large set of Nineteenth and early Twentieth Century texts as training material.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gong-etal-2012-n","url":"https:\/\/aclanthology.org\/D12-1026.pdf","title":"N-gram-based Tense Models for Statistical Machine Translation","abstract":"Tense is a small element of a sentence; however, erroneous tense can produce odd grammar and result in misunderstanding. Recently, tense has drawn attention in many natural language processing applications. However, most current Statistical Machine Translation (SMT) systems mainly depend on the translation model and language model, and never consider or make full use of tense information. In this paper, we propose n-gram-based tense models for SMT and successfully integrate them into a state-of-the-art phrase-based SMT system via two additional features. Experimental results on the NIST Chinese-English translation task show that our proposed tense models are very effective, contributing a performance improvement of 0.62 BLEU points over a strong baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported in part by NUS FRC Grant R252-000-452-112, the National Natural Sci- ","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"descles-etal-2001-towards","url":"https:\/\/aclanthology.org\/W01-1303.pdf","title":"Towards Invariant Meanings Of Spatial Prepositions and Preverbs","abstract":"This work presents the semantic analysis of two spatial prepositions and associated prefixes, the French sur, sur- (on) and the Polish przez, prze- (across). We propose a theory of abstract places (loci) as a method of description which helps to build invariant meanings of the two linguistic units.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"thomas-1999-designing","url":"https:\/\/aclanthology.org\/P99-1073.pdf","title":"Designing a Task-Based Evaluation Methodology for a Spoken Machine Translation System","abstract":"In this paper, I discuss issues pertinent to the design of a task-based evaluation methodology for a spoken machine translation (MT) system processing human-to-human communication rather than human-to-machine communication.
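Both abstracts above lean on n-gram language-model scores, to rank completion candidates in one case and tense sequences in the other. A minimal add-one-smoothed bigram scorer of the kind involved, with the toy corpus and candidate ranking as illustrative assumptions:

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over tokenized sentences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def logprob(sent, uni, bi):
    """Add-one smoothed bigram log-probability of a sentence."""
    toks = ["<s>"] + sent + ["</s>"]
    V = len(uni)  # vocabulary size for smoothing
    return sum(math.log((bi[(a, b)] + 1) / (uni[a] + V))
               for a, b in zip(toks, toks[1:]))

corpus = [["the", "dog", "barked"], ["the", "cat", "sat"]]
uni, bi = train_bigram(corpus)
# rank candidate fillers for "the ___ barked" by LM score
cands = ["dog", "cat"]
print(max(cands, key=lambda w: logprob(["the", w, "barked"], uni, bi)))  # dog
```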
I claim that system-mediated human-to-human communication requires new evaluation criteria and metrics based on goal complexity and the speaker's prioritization of goals.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank my advisor Lori Levin, Alon Lavie, Monika Woszczyna, and Aleksan-","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dolan-etal-2002-msr","url":"https:\/\/link.springer.com\/chapter\/10.1007\/3-540-45820-4_27.pdf","title":"MSR-MT: the Microsoft research machine translation system","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2020-analyzing","url":"https:\/\/aclanthology.org\/2020.nlpcss-1.16.pdf","title":"Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity","abstract":"Media organizations bear great responsibility because of their considerable influence on shaping beliefs and positions of our society. Any form of media can contain overly biased content, e.g., by reporting on political events in a selective or incomplete manner. A relevant question hence is whether and how such a form of imbalanced news coverage can be exposed. The research presented in this paper not only addresses the automatic detection of bias but goes one step further in that it explores how political bias and unfairness are manifested linguistically. In this regard we utilize a new corpus of 6964 news articles with labels derived from adfontesmedia.com and develop a neural model for bias assessment. By analyzing this model on article excerpts, we find insightful bias patterns at different levels of text granularity, from single words to the whole article discourse.","label_nlp4sg":1,"task":["Analyzing Political Bias and Unfairness"],"method":["neural model"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center \"On-The-Fly Computing\" (SFB 901\/3) under the project number 160364472.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"loos-2006-on2l","url":"https:\/\/aclanthology.org\/P06-3011.pdf","title":"On2L -- A Framework for Incremental Ontology Learning in Spoken Dialog Systems","abstract":"An open-domain spoken dialog system has to deal with the challenge of lacking lexical as well as conceptual knowledge. As the real world is constantly changing, it is not possible to store all necessary knowledge beforehand. Therefore, this knowledge has to be acquired during the run time of the system, with the help of the out-of-vocabulary information of a speech recognizer. As every word can have various meanings depending on the context in which it is uttered, additional context information is taken into account when searching for the meaning of such a word. In this paper, I will present the incremental ontology learning framework On2L.
The defined tasks for the framework are: the hypernym extraction from Internet texts for unknown terms delivered by the speech recognizer; the mapping of those and their hypernyms into ontological concepts and instances; and the following integration of them into the system's ontology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hovy-1989-current","url":"https:\/\/aclanthology.org\/H89-2065.pdf","title":"The Current Status of the Penman Language Generation System","abstract":"Penman is one of the largest English language generation programs in the world. Developed mainly at ISI\/USC, it is the result of over 15 person-years' work, and forms the core of an investigation of the computational aspects of the theories of Systemic Functional Linguistics.\nIn the past year, the Penman project has undergone a number of changes. The program itself has been restructured into a software package and has been distributed to over 15 sites worldwide (mostly to academic institutions). This involved the creation of a number of auxiliary software tools and the writing of over 600 pages of documentation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"durrani-etal-2010-hindi","url":"https:\/\/aclanthology.org\/P10-1048.pdf","title":"Hindi-to-Urdu Machine Translation through Transliteration","abstract":"We present a novel approach to integrate transliteration into Hindi-to-Urdu statistical machine translation. We propose two probabilistic models, based on conditional and joint probability formulations, that are novel solutions to the problem. Our models consider both transliteration and translation when translating a particular Hindi word given the context whereas in previous work transliteration is only used for translating OOV (out-of-vocabulary) words. We use transliteration as a tool for disambiguation of Hindi homonyms which can be both translated or transliterated or transliterated differently based on different contexts. We obtain final BLEU scores of 19.35 (conditional probability model) and 19.00 (joint probability model) as compared to 14.30 for a baseline phrase-based system and 16.25 for a system which transliterates OOV words in the baseline system. This indicates that transliteration is useful for more than only translating OOV words for language pairs like Hindi-Urdu.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first two authors were funded by the Higher Education Commission (HEC) of Pakistan. The third author was funded by Deutsche Forschungsgemeinschaft grants SFB 732 and MorphoSynt. 
The fourth author was funded by Deutsche Forschungsgemeinschaft grant SFB 732.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"welbl-etal-2020-undersensitivity","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.103.pdf","title":"Undersensitivity in Neural Reading Comprehension","abstract":"Current reading comprehension methods generalise well to in-distribution test sets, yet perform poorly on adversarially selected data. Prior work on adversarial inputs typically studies model oversensitivity: semantically invariant text perturbations that cause a model's prediction to change. Here we focus on the complementary problem: excessive prediction undersensitivity, where input text is meaningfully changed but the model's prediction does not, even though it should. We formulate an adversarial attack which searches among semantic variations of the question for which a model erroneously predicts the same answer, and with even higher probability. We demonstrate that models trained on both SQuAD2.0 and NewsQA are vulnerable to this attack, and then investigate data augmentation and adversarial training as defences. Both substantially decrease adversarial vulnerability, which generalises to held-out data and held-out attack spaces. Addressing undersensitivity furthermore improves model robustness on the previously introduced ADDSENT and ADDONE-SENT datasets, and models generalise better when facing train\/evaluation distribution mismatch: they are less prone to overly rely on shallow predictive cues present only in the training set, and outperform a conventional model by as much as 10.9% F1.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by an Engineering and Physical Sciences Research Council (EPSRC) scholarship, and the European Union's Horizon 2020 research and innovation programme under grant agreement no. 875160.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"han-etal-2017-dependency","url":"https:\/\/aclanthology.org\/D17-1176.pdf","title":"Dependency Grammar Induction with Neural Lexicalization and Big Training Data","abstract":"We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction. We experimented with L-DMV, a lexicalized version of Dependency Model with Valence (Klein and Manning, 2004) and L-NDMV, our lexicalized extension of the Neural Dependency Model with Valence (Jiang et al., 2016). We find that L-DMV only benefits from very small degrees of lexicalization and moderate sizes of training corpora.
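The Hindi-to-Urdu abstract further up considers translation and transliteration jointly when scoring a candidate. One hedged way to picture such a combination is a simple interpolation; the probability tables and the fixed weight below are invented stand-ins, not the paper's trained conditional or joint models:

```python
def score(candidate, source, p_translate, p_translit, lam=0.5):
    """Combine translation and transliteration evidence for one source word.

    p_translate, p_translit: dicts mapping (source, candidate) -> probability
    (toy values here). lam: interpolation weight, an assumption for this
    sketch; the paper instead derives principled probabilistic models.
    """
    return (lam * p_translate.get((source, candidate), 0.0)
            + (1 - lam) * p_translit.get((source, candidate), 0.0))

p_tr = {("ghar", "home"): 0.6}   # hypothetical translation table
p_tl = {("ghar", "ghar"): 0.9}   # hypothetical transliteration table
for cand in ["home", "ghar"]:
    print(cand, score(cand, "ghar", p_tr, p_tl))
```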
L-NDMV can benefit from big training data and greater degrees of lexicalization, especially when enhanced with good model initialization, and it achieves a result that is competitive with the current state-of-the-art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"doddington-1989-summary","url":"https:\/\/aclanthology.org\/H89-2043.pdf","title":"SUMMARY OF SESSION 10 - Continous Speech Recognition II","abstract":"Algorithms and techniques to improve the robustness of speech recognition were the principal theme in session 10.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roudaud-1992-typology","url":"https:\/\/aclanthology.org\/C92-4208.pdf","title":"Typology Study of French Technical Texts, With a View to Developing a Machine Translation System","abstract":"Within the industrial context of the information society, technical translation represents a considerable commercial stake. In the light of this, machine translation is considered as being an application of paramount importance. It is for this reason that the activities of B'VITAL have always centered around the processing of technical texts. The following article gives an account of the various tasks carried out over the last few years on corpus analysis. We have drawn conclusions as to the validity of the notion of text typologies, applied in particular to technical matter, with a view to developing a machine translation system. The study was conducted using a fair number of French documents and has led us to observe, in particular, that the same typology may be identified in texts originating from varying fields.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"civera-etal-2006-computer","url":"https:\/\/aclanthology.org\/2006.eamt-1.5.pdf","title":"A Computer-Assisted Translation Tool based on Finite-State Technology","abstract":"The Computer-Assisted Translation (CAT) paradigm tries to integrate human expertise into the automatic translation process. In this paradigm, a human translator interacts with a translation system that dynamically offers a list of translations that best completes the part of the sentence that is being translated. This human-machine synergy aims at a double goal: to increase translator productivity and to ease translators' work. In this paper, we present a CAT system based on stochastic finite-state transducer technology.
This system has been developed and assessed on two real parallel corpora in the framework of the European project TransType2 (TT2).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"namer-hathout-2019-paradis","url":"https:\/\/aclanthology.org\/W19-8502.pdf","title":"ParaDis and D\\'emonette: From Theory to Resources for Derivational Paradigms","abstract":"This article traces the genesis of the French derivational database D\u00e9monette v2 and shows how current architecture and content of derivational morphology resources result from theoretical developments in derivational morphology and from the users' need. The development of this large-scale resource began a year ago and is part of the Demonext project (ANR-17-CE23-0005). Its conception is adapted from theoretical approaches of derivational morphology where lexemes, units of analysis, are grouped into families that are organized into paradigms. More precisely, D\u00e9monette v2 is basically an implementation of ParaDis, a paradigmatic model for representing morphologically complex lexical units, formed by regular processes or presenting discrepancies between form and meaning. The article focuses on the principles of morphological, structural and semantic encoding that reflect the methodological choices that have been made in D\u00e9monette v2. Our proposal will be illustrated with various examples of non-canonical word formations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work benefited from the support of the project DEMONEXT ANR-17-CE23-0005 of the French National Research Agency (ANR). We wish to thank the partners of DEMONEXT, and especially Lucie Barque and Pauline Haas who have also taken part in the results presented in this paper.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kuo-yang-2004-constructing","url":"https:\/\/aclanthology.org\/P04-3003.pdf","title":"Constructing Transliteration Lexicons from Web Corpora","abstract":"This paper proposes a novel approach to automating the construction of transliterated-term lexicons. A simple syllable alignment algorithm is used to construct confusion matrices for cross-language syllable-phoneme conversion. Each row in the confusion matrix consists of a set of syllables in the source language that are (correctly or erroneously) matched phonetically and statistically to a syllable in the target language. Two conversions using phoneme-to-phoneme and text-to-phoneme syllabification algorithms are automatically deduced from a training corpus of paired terms and are used to calculate the degree of similarity between phonemes for transliterated-term extraction. In a large-scale experiment using this automated learning process for conversions, more than 200,000 transliterated-term pairs were successfully extracted by analyzing query results from Internet search engines. 
Experimental results indicate the proposed approach shows promise in transliterated-term extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tomita-1985-feasibility","url":"https:\/\/aclanthology.org\/1985.tmi-1.19.pdf","title":"Feasibility Study of Personal\/Interactive Machine Translation Systems","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jagannatha-yu-2016-structured","url":"https:\/\/aclanthology.org\/D16-1082.pdf","title":"Structured prediction models for RNN based sequence labeling in clinical text","abstract":"Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain one major application of sequence labeling involves extraction of relevant entities such as medication, indication, and side-effects from Electronic Health Record Narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experiment with Conditional Random Field based structured learning models with Recurrent Neural Networks. We extend the previously studied CRF-LSTM model with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methods for structured prediction in order to improve the exact phrase detection of clinical entities.","label_nlp4sg":1,"task":["sequence labeling"],"method":["Conditional Random Field","Recurrent Neural Networks"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank the UMassMed annotation team: Elaine Freund, Wiesong Liu, Steve Belknap, Nadya Frid, Alex Granillo, Heather Keating, and Victoria Wang for creating the gold standard evaluation set used in this work. We also thank the anonymous reviewers for their comments and suggestions. This work was supported in part by the grant HL125089 from the National Institutes of Health (NIH). We also acknowledge the support from the United States Department of Veterans Affairs (VA) through Award 1I01HX001457. This work was also supported in part by the Center for Intelligent Information Retrieval.
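The transliteration-lexicon abstract above builds confusion matrices from aligned syllable pairs; estimating them by counting is straightforward. A sketch assuming the alignment step has already produced the pairs (the toy pairs are invented):

```python
from collections import Counter, defaultdict

def confusion_matrix(aligned_pairs):
    """Estimate P(target_syllable | source_syllable) from aligned pairs.

    aligned_pairs: iterable of (source_syllable, target_syllable) tuples
    obtained from a syllable alignment step (assumed given here).
    """
    counts = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        counts[src][tgt] += 1
    return {src: {tgt: n / sum(c.values()) for tgt, n in c.items()}
            for src, c in counts.items()}

pairs = [("mai", "mei"), ("mai", "mai"), ("ke", "ke")]
print(confusion_matrix(pairs))
# {'mai': {'mei': 0.5, 'mai': 0.5}, 'ke': {'ke': 1.0}}
```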
The contents of this paper do not represent the views of CIIR, NIH, VA or the United States Government.","year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nariyama-2004-ellipsis","url":"https:\/\/aclanthology.org\/W04-0709.pdf","title":"Ellipsis Resolution for Disguised Agent","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yamashina-obashi-1988-collocational","url":"https:\/\/aclanthology.org\/C88-2157.pdf","title":"Collocational Analysis in Japanese Text Input","abstract":"This paper proposes a new disambiguation method for Japanese text input. This method evaluates candidate sentences by measuring the number of Word Co-occurrence Patterns (WCP) included in the candidate sentences. An automatic WCP extraction method is also developed. An extraction experiment using the example sentences from dictionaries confirms that WCP can be collected automatically with an accuracy of 98.7% using syntactic analysis and some heuristic rules to eliminate erroneous extraction. Using this method, about 305,000 sets of WCP are collected. A co-occurrence pattern matrix with semantic categories is built based on these WCP. Using this matrix, the mean number of candidate sentences in Kana-to-Kanji translation is reduced to about 1\/10 of those from existing morphological methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2019-enhancing","url":"https:\/\/aclanthology.org\/D19-1508.pdf","title":"Enhancing Dialogue Symptom Diagnosis with Global Attention and Symptom Graph","abstract":"Symptom diagnosis is a challenging yet profound problem in natural language processing. Most previous research focuses on investigating standard electronic medical records for symptom diagnosis, while the dialogues between doctors and patients, which contain richer information, are not well studied. In this paper, we first construct a dialogue symptom diagnosis dataset based on an online medical forum with a large amount of dialogues between patients and doctors. Then, we provide some benchmark models on this dataset to boost the research of dialogue symptom diagnosis. In order to further enhance the performance of symptom diagnosis over dialogues, we propose a global attention mechanism to capture more symptom related information, and build a symptom graph to model the associations between symptoms rather than treating each symptom independently. Experimental results show that both the global attention and symptom graph are effective to boost dialogue symptom diagnosis. In particular, our proposed model achieves the state-of-the-art performance on the constructed dataset.","label_nlp4sg":1,"task":["Symptom Diagnosis"],"method":["dataset","attention mechanism"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work is partially funded by National Natural Science Foundation of China (No. 61751201), National Natural Science Foundation of China (No. 
61702106) and Shanghai Science and Technology Commission (No. 17JC1420200, No. 17YF1427600 and No. 16JC1420401).","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maynard-greenwood-2014-cares","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/67_Paper.pdf","title":"Who cares about Sarcastic Tweets? Investigating the Impact of Sarcasm on Sentiment Analysis.","abstract":"Sarcasm is a common phenomenon in social media, and is inherently difficult to analyse, not just automatically but often for humans too. It has an important effect on sentiment, but is usually ignored in social media analysis, because it is considered too tricky to handle. While there exist a few systems which can detect sarcasm, almost no work has been carried out on studying the effect that sarcasm has on sentiment in tweets, and on incorporating this into automatic tools for sentiment analysis. We perform an analysis of the effect of sarcasm scope on the polarity of tweets, and have compiled a number of rules which enable us to improve the accuracy of sentiment analysis when sarcasm is known to be present. We consider in particular the effect of sentiment and sarcasm contained in hashtags, and have developed a hashtag tokeniser for GATE, so that sentiment and sarcasm found within hashtags can be detected more easily. According to our experiments, the hashtag tokenisation achieved 98% Precision, while the sarcasm detection achieved 91% Precision and polarity detection 80%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2019-perspectroscope","url":"https:\/\/aclanthology.org\/P19-3022.pdf","title":"PerspectroScope: A Window to the World of Diverse Perspectives","abstract":"This work presents PERSPECTROSCOPE, a web-based system which lets users query a discussion-worthy natural language claim, and extract and visualize various perspectives in support or against the claim, along with evidence supporting each perspective. The system thus lets users explore various perspectives that could touch upon aspects of the issue at hand. The system is built as a combination of retrieval engines and learned textual-entailment-like classifiers built using a few recent developments in natural language understanding. To make the system more adaptive, expand its coverage, and improve its decisions over time, our platform employs various mechanisms to get corrections from the users.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. 
Government.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"neale-etal-2015-first","url":"https:\/\/aclanthology.org\/W15-5708.pdf","title":"First Steps in Using Word Senses as Contextual Features in Maxent Models for Machine Translation","abstract":"Despite the common assumption that word sense disambiguation (WSD) should help to improve lexical choice and improve the quality of the output of machine translation systems, how to successfully integrate word senses into such systems remains an unanswered question. While significant improvements have been reported using reformulated approaches to the disambiguation task itself-most notably in predicting translations of full phrases as opposed to the senses of single words-little improvement or encouragement has been gleaned from the incorporation of traditional WSD into machine translation. In this paper, we present preliminary results that suggest that incorporating output from WSD as contextual features in a maxent-based translation model yields a slight improvement in the quality of machine translation and is potentially a step in the right direction, in contrast to other approaches to introducing word senses into a machine translation system which significantly impede its performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been undertaken and funded as part of the EU project QTLeap (EC\/FP7\/610516) and the Portuguese project DP4LT (PTDC\/EEI-SII\/1940\/2012).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brewster-etal-2004-data","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/737.pdf","title":"Data Driven Ontology Evaluation","abstract":"The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the 'fit' between an ontology and a domain of knowledge. We consider a number of methods for measuring this 'fit' and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-diesner-2016-says","url":"https:\/\/aclanthology.org\/C16-1200.pdf","title":"Says Who\\ldots? Identification of Expert versus Layman Critics' Reviews of Documentary Films","abstract":"We extend classic review mining work by building a binary classifier that predicts whether a review of a documentary film was written by an expert or a layman with 90.70% accuracy (F1 score), and compare the characteristics of the predicted classes. A variety of standard lexical and syntactic features was used for this supervised learning task. Our results suggest that experts write comparatively lengthier and more detailed reviews that feature more complex grammar and a higher diversity in their vocabulary. 
Layman reviews are more subjective and contextualized in people's everyday lives. Our error analysis shows that laymen are about twice as likely to be mistaken for experts as vice versa. We argue that the type of author might be a useful new feature for improving the accuracy of predicting the rating, helpfulness and authenticity of reviews. Finally, the outcomes of this work might help researchers and practitioners in the field of impact assessment to gain a more fine-grained understanding of the perception of different types of media consumers and reviewers of a topic, genre or information product.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the FORD Foundation, grant 0155-0370, and by a faculty fellowship from the National Center of Supercomputing Applications (NCSA) at UIUC. We are also grateful to Amazon for giving us permission to collect reviews from their website. We also thank Sandra Franco and Harathi Korrapati from UIUC for their help with this paper.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ozdowska-2006-projecting","url":"https:\/\/aclanthology.org\/W06-2008.pdf","title":"Projecting POS tags and syntactic dependencies from English and French to Polish in aligned corpora","abstract":"This paper presents the first step to project POS tags and dependencies from English and French to Polish in aligned corpora. Both the English and French parts of the corpus are analysed with a POS tagger and a robust parser. The English\/Polish bi-text and the French\/Polish bi-text are then aligned at the word level with the GIZA++ package. The intersection of IBM-4 Viterbi alignments for both translation directions is used to project the annotations from English and French to Polish. The results show that the precision of direct projection varies according to the type of induced annotations as well as the source language. Moreover, the performances are likely to be improved by defining regular conversion rules among POS tags and dependencies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trost-etal-1992-datenbank","url":"https:\/\/aclanthology.org\/A92-1038.pdf","title":"Datenbank-DIALOG and the Relevance of Habitability","abstract":"The paper focusses on the issue of habitability and how it is accounted for in Datenbank-DIALOG. Examples from the area of comparisons and measures--both important for many application domains and non-trivial from a linguistic point of view--demonstrate how design strategies can support the development of a habitable system. Datenbank-DIALOG is a German language interface to relational databases. Since the development of a first prototype it has been tested in different environments and continually been improved. Currently, in a large field test, Datenbank-DIALOG interfaces to a database about AI research in Austria. 
Questions sent by email are answered automatically.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"buhler-minker-2006-stochastic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/363_pdf.pdf","title":"Stochastic Spoken Natural Language Parsing in the Framework of the French MEDIA Evaluation Campaign","abstract":"A stochastic parsing component has been applied on a French spoken language dialogue corpus, recorded in the framework of the MEDIA evaluation campaign. Realized as an ergodic HMM using Viterbi decoding, the parser outputs the most likely semantic representation given a transcribed utterance as input. The semantic sequences used for training and testing have been derived from the semantic representations of the MEDIA corpus. The HMM parameters have been estimated given the word sequences along with their semantic representation. The performance score of the stochastic parser has been automatically determined using the MEDIAVAL tool applied to a held-out reference corpus. Evaluation results will be presented in the paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2022-inferring","url":"https:\/\/aclanthology.org\/2022.acl-long.585.pdf","title":"Inferring Rewards from Language in Context","abstract":"In classic instruction following, language like \"I'd like the JetBlue flight\" maps to actions (e.g., selecting that flight). However, language also conveys information about a user's underlying reward function (e.g., a general preference for JetBlue), which can allow a model to carry out desirable actions in new contexts. We present a model that infers rewards from language pragmatically: reasoning about how speakers choose utterances not only to elicit desired actions, but also to reveal information about their preferences. On a new interactive flight-booking task with natural language, our model more accurately infers rewards and predicts optimal actions in unseen environments, in comparison to past work that first maps language to actions (instruction following) and then maps actions to rewards (inverse reinforcement learning).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Eric Wallace, Jerry He, and the other members of the Berkeley NLP group and InterACT Lab for helpful feedback and discussion. This work is supported by a grant from the Office of Naval Research (ONR-YIP).","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hoang-kan-2010-towards","url":"https:\/\/aclanthology.org\/C10-2049.pdf","title":"Towards Automated Related Work Summarization","abstract":"We introduce the novel problem of automatic related work summarization. Given multiple articles (e.g., conference\/journal papers) as input, a related work summarization system creates a topic-biased summary of related work specific to the target paper. 
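The MEDIA parser above outputs the most likely semantic sequence via Viterbi decoding of an HMM. A textbook log-space Viterbi decoder makes the mechanics concrete; the tiny two-state model is an invented illustration, not the MEDIA system's:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state sequence for an observation sequence.

    log_start[s], log_trans[s][t], log_emit[s][o] are log-probabilities.
    """
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        scores, ptr = {}, {}
        for t in states:
            best = max(states, key=lambda s: V[-1][s] + log_trans[s][t])
            scores[t] = V[-1][best] + log_trans[best][t] + log_emit[t][o]
            ptr[t] = best
        V.append(scores)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):       # follow back-pointers to the start
        path.append(ptr[path[-1]])
    return list(reversed(path))

lg = math.log
states = ["FROM", "TO"]              # hypothetical semantic labels
print(viterbi(["paris", "to", "lyon"], states,
              {"FROM": lg(.7), "TO": lg(.3)},
              {"FROM": {"FROM": lg(.4), "TO": lg(.6)},
               "TO": {"FROM": lg(.5), "TO": lg(.5)}},
              {"FROM": {"paris": lg(.5), "to": lg(.1), "lyon": lg(.4)},
               "TO": {"paris": lg(.3), "to": lg(.3), "lyon": lg(.4)}}))
```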
Our prototype Related Work Summarization system, ReWoS, takes in a set of keywords arranged in a hierarchical fashion that describes a target paper's topics, to drive the creation of an extractive summary using two different strategies for locating appropriate sentences for general topics as well as detailed ones. Our initial results show an improvement over generic multi-document summarization baselines in a human evaluation.","label_nlp4sg":1,"task":["Related Work Summarization"],"method":["related work summarization system"],"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sizov-ozturk-2013-automatic","url":"https:\/\/aclanthology.org\/W13-5009.pdf","title":"Automatic Extraction of Reasoning Chains from Textual Reports","abstract":"Many organizations possess large collections of textual reports that document how a problem is solved or analysed, e.g. medical patient records, industrial accident reports, lawsuit records and investigation reports. Effective use of expert knowledge contained in these reports may greatly increase productivity of the organization. In this article, we propose a method for automatic extraction of reasoning chains that contain information used by the author of a report to analyse the problem at hand. For this purpose, we developed a graph-based text representation that makes the relations between textual units explicit. This representation is acquired automatically from a report using natural language processing tools including syntactic and discourse parsers. When applied to aviation investigation reports, our method generates reasoning chains that reveal the connection between initial information about the aircraft incident and its causes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kwak-etal-2003-glr","url":"https:\/\/aclanthology.org\/W03-3013.pdf","title":"GLR Parser with Conditional Action Model using Surface Phrasal Types for Korean","abstract":"In this paper, we propose a new probabilistic GLR parsing method that can solve the problems of conventional methods. Our proposed Conditional Action Model uses Surface Phrasal Types (SPTs) encoding the functional word sequences of the sub-trees for describing structural characteristics of the partial parse. The proposed GLR model outperforms the previous methods by about 6-8%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"church-mercer-1993-introduction","url":"https:\/\/aclanthology.org\/J93-1001.pdf","title":"Introduction to the Special Issue on Computational Linguistics Using Large Corpora","abstract":"The 1990s have witnessed a resurgence of interest in 1950s-style empirical and statistical methods of language analysis. Empiricism was at its peak in the 1950s, dominating a broad set of fields ranging from psychology (behaviorism) to electrical engineering (information theory). 
At that time, it was common practice in linguistics to classify words not only on the basis of their meanings but also on the basis of their cooccurrence with other words. Firth, a leading figure in British linguistics during the 1950s, summarized the approach with the memorable line: \"You shall know a word by the company it keeps\" (Firth 1957). Regrettably, interest in empiricism faded in the late 1950s and early 1960s with a number of significant events including Chomsky's criticism of n-grams in Syntactic Structures (Chomsky 1957) and Minsky and Papert's criticism of neural networks in Perceptrons (Minsky and Papert 1969). Perhaps the most immediate reason for this empirical renaissance is the availability of massive quantities of data: more text is available than ever before. Just ten years ago, the one-million word Brown Corpus (Francis and Ku\u010dera, 1982) was considered large, but even then, there were much larger corpora such as the Birmingham Corpus (Sinclair et al. 1987; Sinclair 1987). Today, many locations have samples of text running into the hundreds of millions or even billions of words. Collections of this magnitude are becoming widely available, thanks to data collection efforts such as the Association for Computational Linguistics' Data Collection Initiative (ACL\/DCI), the European Corpus Initiative (ECI), ICAME, the British National Corpus (BNC), the Linguistic Data Consortium (LDC), the Consortium for Lexical Research (CLR), Electronic Dictionary Research (EDR), and standardization efforts such as the Text Encoding Initiative (TEI). The data-intensive approach to language, which is becoming known as Text Analysis, takes a pragmatic approach that is well suited to meet the recent emphasis on numerical evaluations and concrete deliverables. Text Analysis focuses on broad (though possibly superficial) coverage of unrestricted text, rather than deep analysis of (artificially) restricted domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"takeshita-1992-recognizing","url":"https:\/\/aclanthology.org\/C92-3167.pdf","title":"Recognizing Topics through the Use of Interaction Structures","abstract":"A crucial problem in topic recognition is how to identify topic continuation. Domain knowledge is generally indispensable for this. However, knowledge-based approaches are impractical because not all domain knowledge needed for the identification can be prepared in advance. This paper presents a topic recognition model using dialogue interaction structures. The model can deal with both task-oriented and non-task-oriented dialogues in any language. Topic continuation is identified without domain knowledge because utterances of relevant topics are indicated by certain interaction structures. The model avoids the weak point of knowledge-based approaches. 
The model is validated by the result of a topic recognition experiment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heisterkamp-2001-linguatronic","url":"https:\/\/aclanthology.org\/H01-1047.pdf","title":"Linguatronic: Product-Level Speech System for Mercedes-Benz Car","abstract":"A recent press release (Murray 2000) indicates that many car manufacturers have announced speech recognition and voice-operated Command&Control systems for their cars, but so far have not introduced any. They are still struggling with technology, both in reliability and pricing. The article finishes with a quote from an industry person saying:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kuribayashi-etal-2019-empirical","url":"https:\/\/aclanthology.org\/P19-1464.pdf","title":"An Empirical Study of Span Representations in Argumentation Structure Parsing","abstract":"For several natural language processing (NLP) tasks, span representation is attracting considerable attention as a promising new technique; a common basis for an effective design has been established. With such a basis, exploring task-dependent extensions for argumentation structure parsing (ASP) becomes an interesting research direction. This study investigates (i) span representation originally developed for other NLP tasks and (ii) a simple task-dependent extension for ASP. Our extensive experiments and analysis show that these representations yield high performance for ASP and provide some challenging types of instances to be parsed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JST CREST Grant Number JPMJCR1513, Japan. We would like to thank the laboratory members who gave us advice and all reviewers of this work for their useful comments and feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mita-etal-2020-self","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.26.pdf","title":"A Self-Refinement Strategy for Noise Reduction in Grammatical Error Correction","abstract":"Existing approaches for grammatical error correction (GEC) largely rely on supervised learning with manually created GEC datasets. However, there has been little focus on verifying and ensuring the quality of the datasets, and on how lower-quality data might affect GEC performance. We indeed found that there is a non-negligible amount of \"noise\" where errors were inappropriately edited or left uncorrected. To address this, we designed a self-refinement method where the key idea is to denoise these datasets by leveraging the prediction consistency of existing models, and outperformed strong denoising baseline methods. 
We further applied task-specific techniques and achieved state-of-the-art performance on the CoNLL-2014, JFLEG, and BEA-2019 benchmarks. We then analyzed the effect of the proposed denoising method, and found that our approach leads to improved coverage of corrections and facilitated fluency edits which are reflected in higher recall and overall performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the Tohoku NLP laboratory members who provided us with their valuable advice. We are grateful to Tomoya Mizumoto and Ana Brassard for their insightful comments and suggestions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"krishnamurthy-2015-visually","url":"https:\/\/aclanthology.org\/W15-2801.pdf","title":"Visually-Verifiable Textual Entailment: A Challenge Task for Combining Language and Vision","abstract":"We propose visually-verifiable textual entailment as a challenge task for the emerging field of combining language and vision. This task is a variant of the well-studied NLP task of recognizing textual entailment (Dagan et al., 2006) where every entailment judgment can be made purely by reasoning with visual knowledge. We believe that this task will spur innovation in the language and vision field while simultaneously producing inference algorithms that can be used in NLP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge Aria Haghighi, Oren Etzioni, Mark Yatskar and the anonymous reviewers for their helpful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moon-lee-2000-representation","url":"https:\/\/aclanthology.org\/C00-1079.pdf","title":"Representation and Recognition Method for Multi-Word Translation Units in Korean-to-Japanese MT System","abstract":"Due to grammatical similarities, even a one-to-one mapping between Korean and Japanese words (or morphemes) can usually result in a high quality Korean-to-Japanese machine translation. However, multi-word translation units (MWTU) such as idioms, compound words, etc., need an n-to-m mapping, and their component words often do not appear adjacently, resulting in a discontinuous MWTU. During translation, the MWTU should be treated as one lexical item rather than a phrase. In this paper, we define the types of MWTUs and propose their representation and recognition method depending on their characteristics in a Korean-to-Japanese MT system. 
In an experimental evaluation, the proposed method turned out to be very effective in handling MWTUs, showing an average recognition accuracy of 98.4% and a fast recognition time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"paris-etal-2004-intelligent","url":"https:\/\/aclanthology.org\/U04-1012.pdf","title":"Intelligent Multi Media Presentation of information in a semi-immersive Command and Control environment","abstract":"We describe the framework for an intelligent multimedia presentation system we designed to be part of the FOCAL laboratory, a semi-immersive Command and Control environment. FOCAL comprises a number of input devices and output media, animated virtual conversational characters, a spoken dialogue system, and sophisticated visual displays. These need to be coordinated to provide a useful and effective presentation to the user. In this paper, we describe the principles which underlie intelligent multimedia presentation (IMMP) systems and the design of such a system within the FOCAL multiagent architecture.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our colleagues in the FOCAL team, Dr Steven Wark, Michael Broughton and Andrew Zschorn for their invaluable contribution to this project. We also wish to acknowledge the support of Nuance for the development of the speech recognition system.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2005-twin","url":"https:\/\/aclanthology.org\/I05-1063.pdf","title":"A Twin-Candidate Model of Coreference Resolution with Non-Anaphor Identification Capability","abstract":"Although effective for antecedent determination, the traditional twin-candidate model cannot prevent the invalid resolution of non-anaphors without additional measures. In this paper we propose a modified learning framework for the twin-candidate model. In the new framework, we make use of non-anaphors to create a special class of training instances, which leads to a classifier capable of identifying the cases of non-anaphors during resolution. In this way, the twin-candidate model itself could avoid the resolution of non-anaphors, and thus could be directly deployed to coreference resolution. The evaluation done on the newswire domain shows that the twin-candidate based system with our modified framework achieves better and more reliable performance than those with other solutions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"landry-1987-termium","url":"https:\/\/aclanthology.org\/1987.tc-1.12.pdf","title":"The Termium termbank: Today and tomorrow","abstract":"The Canadian Government Linguistic Data Bank was established in 1974, after Cabinet made the Translation Bureau responsible for 'verifying and standardising English and French terminology used throughout the federal public service and in all government agencies reporting to the Parliament of Canada'. 
Fulfilment of this mandate required, among other things, the organisation and promotion of terminology research projects, the establishment of a termbank for the purpose of increasing the efficiency of translation services in all fields, and the development of cooperative ties with language research and standardisation centres across Canada and abroad.\nThe bank was to serve three main purposes:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gutierrez-vasques-etal-2021-characters","url":"https:\/\/aclanthology.org\/2021.eacl-main.302.pdf","title":"From characters to words: the turning point of BPE merges","abstract":"The distributions of orthographic word types are very different across languages due to typological characteristics, different writing traditions, and other factors. The wide range of cross-linguistic diversity is still a major challenge for NLP, and for the study of language more generally. We use BPE and information-theoretic measures to investigate if distributions become more similar under specific levels of subword tokenization. We perform a cross-linguistic comparison, following incremental BPE merges (we go from characters to words) for 47 diverse languages. We show that text entropy values (a feature of probability distributions) converge at specific subword levels: relatively few BPE merges (around 200 for our corpus) lead to the most similar distributions across languages. Additionally, we analyze the interaction between subword and word-level distributions and show that our findings can be interpreted in light of the ongoing discussion about different morphological complexity types.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the EACL reviewers. This work has been partially supported by the SNSF grant no. 176305 and CONACYT.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patel-etal-2018-semi","url":"https:\/\/aclanthology.org\/2018.gwc-1.31.pdf","title":"Semi-automatic WordNet Linking using Word Embeddings","abstract":"Wordnets are rich lexico-semantic resources. Linked wordnets are extensions of wordnets, which link similar concepts in wordnets of different languages. Such resources are extremely useful in many Natural Language Processing (NLP) applications, primarily those based on knowledge-based approaches. In such approaches, these resources are considered as gold standard\/oracle. Thus, it is crucial that these resources hold correct information. Hence, they are created by human experts. However, manual maintenance of such resources is a tedious and costly affair. Thus, techniques that can aid the experts are desirable. In this paper, we propose an approach to link wordnets. Given a synset of the source language, the approach returns a ranked list of potential candidate synsets in the target language from which the human expert can choose the correct one(s). 
Our technique is able to retrieve a winner synset in the top-10 ranked list for 60% of all synsets and 70% of noun synsets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pappu-etal-2014-conversational","url":"https:\/\/aclanthology.org\/W14-0211.pdf","title":"Conversational Strategies for Robustly Managing Dialog in Public Spaces","abstract":"Open environments present an attention management challenge for conversational systems. We describe a kiosk system (based on Ravenclaw-Olympus) that uses simple auditory and visual information to interpret human presence and manage the system's attention. The system robustly differentiates intended interactions from unintended ones at an accuracy of 93% and provides similar task completion rates in both a quiet room and a public space.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"higgins-2000-use","url":"https:\/\/aclanthology.org\/A00-3006.pdf","title":"The use of error tags in ARTFL's Encyclop\\'edie: Does good error identification lead to good error correction?","abstract":"Many corpora which are prime candidates for automatic error correction, such as the output of OCR software, and electronic texts incorporating markup tags, include information on which portions of the text are most likely to contain errors. This paper describes how the error markup tag is being incorporated in the spell-checking of an electronic version of Diderot's Encyclop\u00e9die, and evaluates whether the presence of this tag has significantly aided in correcting the errors which","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bouma-2003-dutch","url":"https:\/\/aclanthology.org\/W03-2603.pdf","title":"Doing Dutch Pronouns Automatically in Optimality Theory","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ws-1997-spoken","url":"https:\/\/aclanthology.org\/W97-0400.pdf","title":"Spoken Language Translation","abstract":"Some 15 years ago, when Machine Translation had become fashionable again in Europe, few people would be prepared to consider seriously embarking upon spoken language translation (SLT). After all, where neither machine translation of written text, nor speech understanding or speech production had led to any significant results yet, it seemed clear that putting three not even halfway understood systems together would be premature, and bound to fail.\nSince then, the world has changed. If we look at the papers contained in the proceedings of this workshop we can clearly see that many researchers, both in academia and in industry, have taken up the challenge to build systems capable of translating spoken language. 
Does that mean that most of the problems involved in speech-to-text, text-to-text translation, and text-to-speech have been solved? Or should we rather conclude that all these courageous people are heading for another traumatic experience, just as we have seen happen in the sixties and, to a lesser extent, in the eighties?","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martelli-1987-stochastic","url":"https:\/\/aclanthology.org\/E87-1016.pdf","title":"Stochastic Modeling of Language Via Sentence Space Partitioning","abstract":"In some computer applications of linguistics (such as maximum-likelihood decoding of speech or handwriting), the purpose of the language-handling component (Language Model) is to estimate the linguistic (a priori) probability of arbitrary natural-language sentences. This paper discusses theoretical and practical issues regarding an approach to building such a language model based on any equivalence criterion defined on incomplete sentences, and experimental results and measurements performed on such a model of the Italian language, which is a part of the prototype for the recognition of spoken Italian built at the IBM Rome Scientific Center.\nIn some computer applications, it is necessary to have a way to estimate the probability of any arbitrary natural-language sentence.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brekke-etal-2006-automatic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/807_pdf.pdf","title":"Automatic Term Extraction from Knowledge Bank of Economics","abstract":"KB-N is a web-accessible searchable Knowledge Bank comprising A) a parallel corpus of quality assured and calibrated English and Norwegian text drawn from economic-administrative knowledge domains, and B) a domain-focused database representing that knowledge universe in terms of defined concepts and their respective bilingual terminological entries. A central mechanism in connecting A and B is an algorithm for the automatic extraction of term candidates from aligned translation pairs on the basis of linguistic, lexical and statistical filtering (first ever for Norwegian). The system is designed and programmed by Paul Meurer at Aksis (UiB). An important pilot application of the term base is subdomain and collocations based word-sense disambiguation for LOGON, a system for Norwegian-to-English MT currently being developed.","label_nlp4sg":1,"task":["Automatic Term Extraction"],"method":["statistical filtering"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobayashi-etal-2015-effects","url":"https:\/\/aclanthology.org\/W15-4656.pdf","title":"Effects of Game on User Engagement with Spoken Dialogue System","abstract":"In this study, we examine the effects of using a game for encouraging the use of a spoken dialogue system. 
As a case study, we developed a word-chain game, called Shiritori in Japanese, and released the game as a module in a Japanese Android\/iOS app, Onsei-Assist, which is a Siri-like personal assistant based on spoken dialogue technology. We analyzed the log after the release and confirmed that the game can increase the number of user utterances. Furthermore, we discovered a positive side effect, in which users who have played the game tend to begin using non-game modules. This suggests that just adding a game module to the system can improve user engagement with an assistant agent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"reveil-etal-2010-improving","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/281_Paper.pdf","title":"Improving Proper Name Recognition by Adding Automatically Learned Pronunciation Variants to the Lexicon","abstract":"This paper deals with the task of large vocabulary proper name recognition. In order to accommodate a wide diversity of possible name pronunciations (due to non-native name origins or speaker tongues) a multilingual acoustic model is combined with a lexicon comprising 3 grapheme-to-phoneme (G2P) transcriptions (from G2P transcribers for 3 different languages) and up to 4 so-called phoneme-to-phoneme (P2P) transcriptions. The latter are generated with (speaker tongue, name source) specific P2P converters that try to transform a set of baseline name transcriptions into a pool of transcription variants that lie closer to the 'true' name pronunciations. The experimental results show that the generated P2P variants can be employed to improve name recognition, and that the obtained accuracy is comparable to what is achieved with typical (TY) transcriptions (made by a human expert). Furthermore, it is demonstrated that the P2P conversion can best be instantiated from a baseline transcription in the name source language, and that knowledge of the speaker tongue is an important input as well for the P2P transcription process.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The presented work was carried out in the context of the Autonomata Too research project, granted under the Dutch-Flemish STEVIN program.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2020-multi-task","url":"https:\/\/aclanthology.org\/2020.sdp-1.14.pdf","title":"Multi-task Peer-Review Score Prediction","abstract":"Automatic prediction of the peer-review aspect scores of academic papers can be a useful assistant tool for both reviewers and authors. To handle the small size of published datasets on the target aspect of scores, we propose a multi-task approach to leverage additional information from other aspects of scores for improving the performance of the target aspect. Because one of the problems of building multi-task models is how to select the proper resources of auxiliary tasks and how to select the proper shared structures, we propose a multi-task shared structure encoding approach that automatically selects good shared network structures as well as good auxiliary resources. 
The experiments on peer-review datasets show that our approach is effective and has better performance on the target scores than the single-task method and na\u00efve multi-task methods.","label_nlp4sg":1,"task":["Peer - Review Score Prediction"],"method":["multi - task shared structure encoding approach"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by KDDI Foundation Research Grant Program.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vo-yamamoto-2018-vietsentilex","url":"https:\/\/aclanthology.org\/Y18-1081.pdf","title":"VietSentiLex: a sentiment dictionary that considers the polarity of ambiguous sentiment words","abstract":"The ability to analyze sentiment is a key technology for analyzing social media. Sentiment analysis involves reading and understanding what is being said about a brand, as well as advertising campaigns in online services, to determine the nature of a product. Because the Vietnamese language has few resources for applying machine learning tasks, use of sentiment dictionaries is required. In this study, a sentiment dictionary called \"Viet-SentiLex\" is introduced for the aforementioned task in the Vietnamese language. Most notably, instead of applying fixed scores to every word, ambiguous words are considered more carefully, as they can be positive or negative depending on context. Related words such as target nouns or verbs are used as contextual information for a sentiment word. Experiments comparing the performance of our dictionary to others are conducted. We show that our dictionary has high potential for predicting the polarity of reviews as compared to other dictionaries. In addition, various challenges and disadvantages of this dictionary are also outlined for future improvement until VietSentiLex can become a commercial product.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pallett-1994-nist","url":"https:\/\/aclanthology.org\/H94-1104.pdf","title":"NIST-ARPA Interagency Agreement: Human Language Technology Program","abstract":"1. To coordinate the design, development and distribution of speech and natural language corpora for the ARPA Spoken Language research community, and the use of these corpora for technology development and evaluation.\n2. To design, coordinate the implementation of, and analyze the results of performance assessment benchmark tests for ARPA's speech recognition and spoken language understanding systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhu-lee-2006-using","url":"https:\/\/aclanthology.org\/O06-2001.pdf","title":"Using Duration Information in Cantonese Connected-Digit Recognition","abstract":"This paper presents an investigation of the use of explicit statistical duration models for Cantonese connected-digit recognition. Cantonese is a major Chinese dialect. The phonetic compositions of Cantonese digits are generally very simple. 
Some of them contain only a single vowel or nasal segment. This makes it difficult to attain high accuracy in the automatic recognition of Cantonese digit strings. Recognition errors are mainly due to the insertion or deletion of short digits. It is widely admitted that the hidden Markov model does not impose effective control on the duration of the speech segments being modeled. Our approach uses a set of statistical duration models that are built explicitly from automatically segmented training data. They parametrically describe the distributions of various absolute and relative duration features. The duration models are used to assess recognition hypotheses and produce probabilistic duration scores. The duration scores are added with an empirically determined weight to the acoustic score. In this way, a hypothesis that is competitive in acoustic likelihood, but unfavorable in temporal organization, will be pruned. The conventional Viterbi search algorithms for connected-word recognition are modified to incorporate both state-level and word-level duration features. Experimental results show that absolute state duration gives the most noticeable improvement in digit recognition accuracy. With the use of duration information, insertion errors are much reduced, while deletion errors increase slightly. It is also found that explicit duration models are more effective for slow speech than for fast speech.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partially supported by a research grant from the Hong Kong Research Grants Council (Ref: CUHK4206\/01E).","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"diaz-etal-2010-development","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/569_Paper.pdf","title":"Development and Use of an Evaluation Collection for Personalisation of Digital Newspapers","abstract":"This paper presents the process of development and the characteristics of an evaluation collection for a personalisation system for digital newspapers. This system selects, adapts and presents contents according to a user model that defines information needs. The collection presented here contains data that are cross-related over four different axes: a set of news items from an electronic newspaper, collected into subsets corresponding to a particular sequence of days, packaged together and cross-indexed with a set of user profiles that represent the particular evolution of interests of a set of real users over the given days, expressed in each case according to four different representation frameworks: newspaper sections, Yahoo categories, keywords, and relevance feedback over the set of news items for the previous day. This information provides a minimum starting material over which one can evaluate, for a given system, how it addresses the first two observations (adapting to different users and adapting to particular users over time), provided that the particular system implements the representation of information needs according to the four frameworks employed in the collection. 
This collection has been successfully used to perform several experiments to determine the effectiveness of the personalization system presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2019-textbook","url":"https:\/\/aclanthology.org\/P19-1347.pdf","title":"Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension","abstract":"In this work, we introduce a novel algorithm for solving the textbook question answering (TQA) task which describes more realistic QA problems compared to other recent tasks. We mainly focus on two related issues with analysis of the TQA dataset. First, solving the TQA problems requires comprehending multimodal contexts in complicated input data. To tackle this issue of extracting knowledge features from long text lessons and merging them with visual features, we establish a context graph from texts and images, and propose a new module f-GCN based on graph convolutional networks (GCN). Second, scientific terms are not spread over the chapters and subjects are split in the TQA dataset. To overcome this so-called 'out-of-domain' issue, before learning QA problems, we introduce a novel self-supervised open-set learning process without any annotations. The experimental results show that our model significantly outperforms prior state-of-the-art methods. Moreover, ablation studies validate that both methods of incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shubha-etal-2019-customizing","url":"https:\/\/aclanthology.org\/N19-1322.pdf","title":"Customizing Grapheme-to-Phoneme System for Non-Trivial Transcription Problems in Bangla Language","abstract":"Grapheme to phoneme (G2P) conversion is an integral part of various text and speech processing systems, such as Text to Speech systems, Speech Recognition systems, etc. The existing methodologies for G2P conversion in the Bangla language are mostly rule-based. However, data-driven approaches have proved their superiority over rule-based approaches for large-scale G2P conversion in other languages, such as English, German, etc. As the performance of data-driven approaches for G2P conversion depends largely on the pronunciation lexicon on which the system is trained, in this paper we investigate developing an improved training lexicon by identifying and categorizing the critical cases in the Bangla language and including those critical cases in the training lexicon for developing a robust G2P conversion system in the Bangla language. Additionally, we have incorporated nasal vowels in our proposed phoneme list. 
Our methodology outperforms other state-of-the-art approaches for G2P conversion in the Bangla language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work is conducted at the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology (BUET) and is supported by Samsung Research.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dita-roxas-2011-philippine","url":"https:\/\/aclanthology.org\/W11-3410.pdf","title":"Philippine Languages Online Corpora: Status, issues, and prospects","abstract":"This paper presents the work being done so far on the building of an online corpus for Philippine languages. As for the status, the Philippine Languages Online Corpora (PLOC) now boasts a 250,000-word written corpus of the eight major languages in the archipelago. Some of the issues confronting the corpus building and future directions for this project are likewise discussed in this paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has been partially funded by the National Commission for Culture and the Arts, Philippine Government.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"costa-jussa-etal-2006-talp-phrase","url":"https:\/\/aclanthology.org\/2006.iwslt-evaluation.18.pdf","title":"TALP phrase-based system and TALP system combination for IWSLT 2006","abstract":"This paper describes the TALP phrase-based statistical machine translation system, enriched with the statistical machine reordering technique. We also report the combination of this system and the TALP-tuple, the n-gram-based statistical machine translation system. We report the results for all the tasks (Chinese, Arabic, Italian and Japanese to English) in the framework of the third evaluation campaign of the International Workshop on Spoken Language Translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by the European Union under the integrated project TC-STAR (IST-2002-FP6-506738, http:\/\/www.tc-star.org), by the Spanish government under an FPU grant, by the Autonomous Government of Catalonia, the European Social Fund and the Technical University of Catalonia. The authors wish to thank Nizar Habash for making MADA available for the Arabic experiments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"inan-etal-2022-modeling","url":"https:\/\/aclanthology.org\/2022.findings-acl.228.pdf","title":"Modeling Intensification for Sign Language Generation: A Computational Approach","abstract":"End-to-end sign language generation models do not accurately represent the prosody in sign language. A lack of temporal and spatial variations leads to poor-quality generated presentations that confuse human interpreters. In this paper, we aim to improve the prosody in generated sign languages by modeling intensification in a data-driven manner. We present different strategies grounded in the linguistics of sign language that inform how intensity modifiers can be represented in gloss annotations. 
To employ our strategies, we first annotate a subset of the benchmark PHOENIX-14T, a German Sign Language dataset, with different levels of intensification. We then use a supervised intensity tagger to extend the annotated dataset and obtain labels for the remaining portion of it. This enhanced dataset is then used to train state-of-the-art transformer models for sign language generation. We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics. Human evaluation also indicates a higher preference for the videos generated using our model.","label_nlp4sg":1,"task":["Sign Language Generation"],"method":["supervised intensity tagger","transformer"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"benbow-1987-new","url":"https:\/\/aclanthology.org\/1987.tc-1.8.pdf","title":"The New Oxford English Dictionary Project","abstract":"The Oxford English Dictionary is the largest and most authoritative dictionary of the English language. It is a dictionary based on historical principles: that is, it takes as its subject-matter the entire vocabulary of the English language since 1150 AD. The OED, which is in twelve volumes, took approximately fifty years to prepare and the completed work was published in 1928. A Supplement to the Dictionary, on which work started in the 1950s, was published in four volumes between 1972 and 1986. Almost half a million words are defined in the OED and its Supplement and the definitions are illustrated by over two million quotations. The vast size of the work, as we shall see, has an important influence on the way in which the New OED project has to be handled (see Table 1).\nA work of reference like the OED requires continuous updating and revision to keep up with constant linguistic, social and technological changes. The publication of further supplements would be an inadequate, impractical and uneconomic solution to this problem. So would traditional paper-based methods of revision. Computerisation offers the only practicable solution. 
It also offers additional benefits in the form of a new and powerful research tool for literary and linguistic scholars and for other professionals in disciplines such as law and medicine, and for scientists, authors, translators and journalists: a lexical database of English.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sakai-masuyama-2002-unsupervised","url":"https:\/\/aclanthology.org\/W02-1907.pdf","title":"Unsupervised Knowledge Acquisition about the Deletion Possibility of Adnominal Verb Phrases","abstract":"EQUATION","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"auli-etal-2013-joint","url":"https:\/\/aclanthology.org\/D13-1106.pdf","title":"Joint Language and Translation Modeling with Recurrent Neural Networks","abstract":"We present a joint language and translation model based on a recurrent neural network which predicts target words based on an unbounded history of both source and target words. The weaker independence assumptions of this model result in a vastly larger search space compared to related feedforward-based language or translation models. We tackle this issue with a new lattice rescoring algorithm and demonstrate its effectiveness empirically. Our joint model builds on a well-known recurrent neural network language model (Mikolov, 2012) augmented by a layer of additional inputs from the source language. We show competitive accuracy compared to the traditional channel model features. Our best results improve the output of a system trained on WMT 2012 French-English data by up to 1.5 BLEU, and by 1.1 BLEU on average across several test sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Anthony Aue, Hany Hassan Awadalla, Jon Clark, Li Deng, Sauleh Eetemadi, Jianfeng Gao, Qin Gao, Xiaodong He, Will Lewis, Arul Menezes, and Kristina Toutanova for helpful discussions related to this work as well as for comments on previous drafts. We would also like to thank the anonymous reviewers for their comments.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zymla-2018-annotation","url":"https:\/\/aclanthology.org\/W18-4706.pdf","title":"Annotation of the Syntax\/Semantics interface as a Bridge between Deep Linguistic Parsing and TimeML","abstract":"This paper presents the development of an annotation scheme for the syntax\/semantics interface that may feed into the generation of (ISO-)TimeML style annotations. The annotation scheme accounts for compositionality and calculates the semantic contribution of tense and aspect. The annotation builds on output from syntactic parsers and links information from morphosyntactic cues to a representation grounded in formal semantics\/pragmatics that may be used to automatize the process of annotating tense\/aspect and temporal relations. We gratefully acknowledge funding from the Nuance Foundation. 
We also thank collaborators from the Infrastructure for the Exploration of Syntax and Semantics (INESS) and the ParGram projects.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"popov-etal-2004-creation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/267.pdf","title":"Creation of Reusable Components and Language Resources for Named Entity Recognition in Russian","abstract":"This paper describes the development of the RussIE system in which we experimented with the creation of reusable processing components and language resources for a Russian Information Extraction system. The work was done as part of a multilingual project to adapt existing tools and resources for HLT to new domains and languages. The system was developed within the GATE architecture for language processing, and aims to explore the boundaries of language resource reuse and adaptability across languages and language types, rather than to create a full-scale IE system at the very peak of performance. Nevertheless, the system achieves a very creditable 71% F-Measure on news texts, and there is much scope for future improvement of this score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dai-etal-2020-learning","url":"https:\/\/aclanthology.org\/2020.acl-main.57.pdf","title":"Learning Low-Resource End-To-End Goal-Oriented Dialog for Fast and Reliable System Deployment","abstract":"Existing end-to-end dialog systems perform less effectively when data is scarce. To obtain an acceptable success in real-life online services with only a handful of training examples, both fast adaptability and reliable performance are highly desirable for dialog systems. In this paper, we propose the Meta-Dialog System (MDS), which combines the advantages of both meta-learning approaches and human-machine collaboration. We evaluate our methods on a new extended-bAbI dataset and a transformed MultiWOZ dataset for low-resource goal-oriented dialog learning. Experimental results show that MDS significantly outperforms non-meta-learning baselines and can achieve more than 90% per-turn accuracies with only 10 dialogs on the extended-bAbI dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research of the last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chung-glass-2020-improved","url":"https:\/\/aclanthology.org\/2020.acl-main.213.pdf","title":"Improved Speech Representations with Multi-Target Autoregressive Predictive Coding","abstract":"Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. 
The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"korhonen-preiss-2003-improving","url":"https:\/\/aclanthology.org\/P03-1007.pdf","title":"Improving Subcategorization Acquisition Using Word Sense Disambiguation","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"menard-barriere-2017-pacte","url":"https:\/\/aclanthology.org\/W17-7410.pdf","title":"PACTE: A colloaborative platform for textual annotation","abstract":"In this article, we provide an overview of a web-based text annotation platform, called PACTE. We highlight the various features contributing to making PACTE an ideal platform for research projects involving textual annotation of large corpora performed by geographically distributed teams. With the availability of large amounts of textual data on the web, or from legacy documents, various text analysis projects emerge to study, analyze and understand the content of these texts. Projects arise from various disciplines, such as psychological studies (e.g. detecting language patterns related to particular mental states) or literary studies (e.g. studying patterns used by particular authors), or criminology studies (e.g. analyzing crime-related locations). Text analysis projects of large scale often involve multiple actors, in a distributed spatial setting, with collaborators all over the world. While their perspectives are different and their goals are varied, most text analysis projects require some common functionalities: document selection (to gather a proper corpus for pattern analysis), text annotation (to mark actual metadata about documents, paragraphs, sentences, words or word segments) and annotation search (to search the annotated segments for the ones of interest). Furthermore, many projects would benefit from basic automatic annotation of textual components (sentences, nominal compounds, named entities, etc). Yet, each project would likely also have its particularities as to what are the important text patterns to study, and perhaps such patterns are best annotated by human experts. We are in the process of developing a text project management and annotation platform, called PACTE (http:\/\/pacte.crim.ca), to support such large-scale distributed text analysis. 
A key component of PACTE is to not only allow for easy annotation (whether manual or automatic), but to also provide the very essential search component, to retrieve through the mass of texts, segments of information containing specific annotations (e.g. retrieving all documents mentioning a particular city). In its final state, PACTE will contain the common required project management functionalities, as well as common annotation services, but also allow for particularities (e.g. specialized schema definition). The platform also aims at encouraging interdisciplinary collaborations, as much automatic textual analysis in the recent years is data-driven, using machine learning models which require a lot of annotated data. A known bottleneck to these supervised models is the lack of availability of annotated data. By providing a platform which makes it easy to annotate using user-defined schemas, we hope to encourage various users from various disciplines to perform annotation. In the remainder of this demonstration note, we will show (Section 2) an example of an annotation project with definitions of the various terms used when discussing annotation projects (e.g. types, schemas, features, groups, etc). We will then highlight (Section 3) the distinctive features of PACTE, mainly focusing on eight important aspects of PACTE, that it (1) is web-based, (2) handles large volumes of text for both annotation and search, (3) allows easy project management, (4) allows collaborative annotation, (5) provides some automatic annotation services, (6) allows users to define specific schemas for targeted manual annotation, (7) provides text search capabilities, (8) offers management of custom lexicons. Then, we compare PACTE to other platforms (Section 4) and we give the current state and future development of PACTE (Section 5).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project was supported in part by Canarie grant RS-10 for Software Research Platform and the Minist\u00e8re de l'\u00c9conomie, de la Science et de l'Innovation (MESI) of the Government of Qu\u00e9bec.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goodwin-etal-2022-compositional","url":"https:\/\/aclanthology.org\/2022.acl-long.448.pdf","title":"Compositional Generalization in Dependency Parsing","abstract":"Compositionality, the ability to combine familiar units like words into novel phrases and sentences, has been the focus of intense interest in artificial intelligence in recent years. To test compositional generalization in semantic parsing, Keysers et al. (2020) introduced Compositional Freebase Queries (CFQ). This dataset maximizes the similarity between the test and train distributions over primitive units, like words, while maximizing the compound divergence, the dissimilarity between test and train distributions over larger structures, like phrases. Dependency parsing, however, lacks a compositional generalization benchmark. In this work, we introduce a gold-standard set of dependency parses for CFQ, and use this to analyze the behavior of a state-of-the-art dependency parser (Qi et al., 2020) on the CFQ dataset. We find that increasing compound divergence degrades dependency parsing performance, although not as dramatically as semantic parsing performance. 
Additionally, we find the performance of the dependency parser does not uniformly degrade relative to compound divergence, and the parser performs differently on different splits with the same compound divergence. We explore a number of hypotheses for what causes the non-uniform degradation in dependency parsing performance, and identify a number of syntactic structures that drive the dependency parser's lower performance on the most challenging splits.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Christopher Manning, the Montreal Computational and Quantitative Linguistics lab at McGill University, and the Human and Machine Interaction Through Language group at ServiceNow Research for helpful feedback. We are grateful to ServiceNow Research for providing extensive compute and other support. We also gratefully acknowledge the support of the MITACS accelerate internship, the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Qu\u00e9bec, Soci\u00e9t\u00e9 et Culture, and the Canada CIFAR AI Chairs Program.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"leacock-etal-1998-using","url":"https:\/\/aclanthology.org\/J98-1006.pdf","title":"Using Corpus Statistics and WordNet Relations for Sense Identification","abstract":"Corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck. We show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora. We describe a statistical classifier that combines topical context with local cues to identify a word sense. The classifier is used to disambiguate a noun, a verb, and an adjective. A knowledge base in the form of WordNet's lexical relations is used to automatically locate training examples in a general text corpus. Test results are compared with those from manually tagged training examples.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are indebted to the other members of the WordNet group who have provided advice and technical support: Christiane Fellbaum, Shari Landes, and Randee Tengi. We are also grateful to Paul Bagyenda, Ben Johnson-Laird and Joshua Schecter. We thank Scott Wayland, Tim Allison and Jill Hollifield for tagging the serve and hard corpora. Finally we are grateful to the three anonymous CL reviewers for their comments and advice. This material is based upon work supported in part by the National Science Foundation under NSF Award No. IRI95-28983 and by the Defense Advanced Research Projects Agency, Grant No. N00014-91-1634.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sanchis-trilles-etal-2009-online","url":"https:\/\/aclanthology.org\/2009.iwslt-papers.5.pdf","title":"Online language model adaptation for spoken dialog translation","abstract":"This paper focuses on the problem of language model adaptation in the context of Chinese-English cross-lingual dialogs, as set up by the challenge task of the IWSLT 2009 Evaluation Campaign. 
Mixtures of n-gram language models are investigated, which are obtained by clustering bilingual training data according to different available human annotations, respectively, at the dialog level, turn level, and dialog act level. For the latter case, clustering of IWSLT data was in fact induced through a comparable Italian-English parallel corpus provided with dialog act annotations. For the sake of adaptation, mixture weight estimation is performed either at the level of the single source sentence or of the test set. Estimated weights are then transferred to the target language mixture model. Experimental results show that, by training different specific language models weighted according to the actual input instead of using a single target language model, significant gains in terms of perplexity and BLEU can be achieved.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the EuroMatrixPlus project (IST-231720), which is funded by the European Commission under the Seventh Framework Programme for Research and Technological Development and by the Spanish MEC under scholarship AP2005-4023 and grant CONSOLIDER Ingenio-2010 CSD2007-00018.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coursey-etal-2009-using","url":"https:\/\/aclanthology.org\/W09-1126.pdf","title":"Using Encyclopedic Knowledge for Automatic Topic Identification","abstract":"This paper presents a method for automatic topic identification using an encyclopedic graph derived from Wikipedia. The system is found to exceed the performance of previously proposed machine learning algorithms for topic identification, with an annotation consistency comparable to human annotations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by an award #CR72105 from the Texas Higher Education Coordinating Board and by an award from Google Inc. The authors are grateful to the Waikato group for making their data set available.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tonelli-menini-2021-framenet","url":"https:\/\/aclanthology.org\/2021.latechclfl-1.2.pdf","title":"FrameNet-like Annotation of Olfactory Information in Texts","abstract":"Although olfactory references play a crucial role in our cultural memory, only a few works in NLP have tried to capture them from a computational perspective. Currently, the main challenge is not so much the development of technological components for olfactory information extraction, given recent advances in semantic processing and natural language understanding, but rather the lack of a theoretical framework to capture this information from a linguistic point of view, as a preliminary step towards the development of automated systems. Therefore, in this work we present the annotation guidelines, developed with the help of history scholars and domain experts, aimed at capturing all the relevant elements involved in olfactory situations or events described in texts. These guidelines have been inspired by FrameNet annotation, but underwent some adaptations, which are detailed in this paper. 
Furthermore, we present a case study concerning the annotation of olfactory situations in English historical travel writings describing trips to Italy. An analysis of the most frequent role fillers shows that olfactory descriptions pertain to some typical domains such as religion, food, nature, ancient past, poor sanitation, all supporting the creation of stereotypical imagery related to Italy. On the other hand, positive feelings triggered by smells are prevalent, and contribute to framing travels to Italy as an exciting experience involving all senses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been supported by the European Union's Horizon 2020 program project ODEUROPA under grant agreement number 101004469. We thank in particular Inger Leemans, William Tullet, Caro Verbeek and Cecilia Bembibre for their suggestions on how to define and model olfactory events.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trujillo-1995-bi","url":"https:\/\/aclanthology.org\/1995.tmi-1.4.pdf","title":"Bi-Lexical Rules for Multi-Lexeme Translation in Lexicalist MT","abstract":"The paper presents a prototype lexicalist Machine Translation system (based on the so-called 'Shake-and-Bake' approach of Whitelock (1992)) consisting of an analysis component, a dynamic bilingual lexicon, and a generation component, and shows how it is applied to a range of MT problems. Multi-Lexeme translations are handled through bi-lexical rules which map bilingual lexical signs into new bilingual lexical signs. It is argued that much translation can be handled by equating translationally equivalent lists of lexical signs, either directly in the bilingual lexicon, or by deriving them through bi-lexical rules. Lexical semantic information organized as Qualia structures (Pustejovsky, 1991) is used as a mechanism for restricting the domain of the rules.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to two anonymous reviewers for their valuable comments. The LKB was implemented by Ann Copestake as part of the ESPRIT ACQUILEX project. Remaining errors are mine.","year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"boytcheva-etal-2009-extraction","url":"https:\/\/aclanthology.org\/W09-4501.pdf","title":"Extraction and Exploration of Correlations in Patient Status Data","abstract":"The paper discusses an Information Extraction approach, which is applied for the automatic processing of hospital Patient Records (PRs) in the Bulgarian language. The main task reported here is retrieval of status descriptions related to anatomical organs. Due to the specific telegraphic PR style, the approach is focused on shallow analysis. Missing text descriptions and default values are another obstacle. To overcome it, we propose an algorithm for exploring the correlations between patient status data and the corresponding diagnosis. Rules for interdependencies of the patient status data are generated by clustering according to chosen metrics. In this way it becomes possible to fill in status templates for each patient when explicit descriptions are unavailable in the text. 
The article summarises evaluation results which concern the performance of the current IE prototype.","label_nlp4sg":1,"task":["Information Extraction"],"method":["correlations"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work is a part of the project EVTIMA (\"Effective search of conceptual information with applications in medical informatics\", 2009-2011) which is funded by the Bulgarian National Science Fund by grant No DO 02-292\/December 2008.","year":2009,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zeng-etal-2014-relation","url":"https:\/\/aclanthology.org\/C14-1220.pdf","title":"Relation Classification via Convolutional Deep Neural Network","abstract":"The state-of-the-art methods used for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing natural language processing (NLP) systems, which leads to the propagation of the errors in the existing tools and hinders the performance of these systems. In this paper, we exploit a convolutional deep neural network (DNN) to extract lexical and sentence level features. Our method takes all of the word tokens as input without complicated pre-processing. First, the word tokens are transformed to vectors by looking up word embeddings. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two levels of features are concatenated to form the final extracted feature vector. Finally, the features are fed into a softmax classifier to predict the relationship between two marked nouns. The experimental results demonstrate that our approach significantly outperforms the state-of-the-art methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was sponsored by the National Basic Research Program of China (No. 2014CB340503) and the National Natural Science Foundation of China (No. 61272332, 61333018, 61202329, 61303180). This work was supported in part by Noah's Ark Lab of Huawei Tech. Co. Ltd. We thank the anonymous reviewers for their insightful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kongyoung-etal-2020-multi","url":"https:\/\/aclanthology.org\/2020.scai-1.3.pdf","title":"Multi-Task Learning using Dynamic Task Weighting for Conversational Question Answering","abstract":"Conversational Question Answering (ConvQA) is a Conversational Search task in a simplified setting, where an answer must be extracted from a given passage. Neural language models, such as BERT, fine-tuned on large-scale ConvQA datasets such as CoQA and QuAC, have been used to address this task. Recently, Multi-Task Learning (MTL) has emerged as a particularly interesting approach for developing ConvQA models, where the objective is to enhance the performance of a primary task by sharing the learned structure across several related auxiliary tasks. 
However, existing ConvQA models that leverage MTL have not investigated the dynamic adjustment of the relative importance of the different tasks during learning, nor the resulting impact on the performance of the learned models. In this paper, we first study the effectiveness and efficiency of dynamic MTL methods including Evolving Weighting, Uncertainty Weighting, and Loss-Balanced Task Weighting, compared to static MTL methods such as the uniform weighting of tasks. Furthermore, we propose a novel hybrid dynamic method combining Abridged Linear for the main task with a Loss-Balanced Task Weighting (LBTW) for the auxiliary tasks, so as to automatically fine-tune task weighting during learning, ensuring that each of the tasks' weights is adjusted by the relative importance of the different tasks. We conduct experiments using QuAC, a large-scale ConvQA dataset. Our results demonstrate the effectiveness of our proposed method, which significantly outperforms both the single-task learning and static task weighting methods with improvements ranging from +2.72% to +3.20% in F1 scores. Finally, our findings show that the performance of using MTL in developing a ConvQA model is sensitive to the correct selection of the auxiliary tasks as well as to an adequate balancing of the loss rates of these tasks during training by using LBTW.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dorffner-etal-1990-integrating","url":"https:\/\/aclanthology.org\/C90-2016.pdf","title":"Integrating Stress and Intonation into a Concept-to-Speech System","abstract":"The paper deals with the integration of intonation algorithms into a concept-to-speech system for German. The algorithm for computing the stress hierarchy of a sentence introduced by Kiparski (1973) and the theory of syntactic grouping for intonation patterns developed by Bierwisch (1973) have been studied extensively, but they have never been implemented in a concept-to-speech system like the one presented here. We describe the back end of this concept-to-speech system: The surface generator transfers a hierarchical dependency structure of a sentence into a phoneme string by traversing it in a recursive-descent manner.\nSurface structures unfold while generation proceeds, which means that at no point of the process does the full syntactic tree structure exist. As they depend on syntactic features, both the indices introduced by the Kiparski (degrees of stress) and the Bierwisch (indexed border markers) formalisms have to be inserted by the generator. This implies some changes to the original algorithms, which are demonstrated in this paper. The generator has been tested in the domain of an expert system that helps to debug electronic circuits. 
The synthesized utterances of the test domain show significant improvements over monotonous forms of speech produced by systems not making use of intonation information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vodolazova-lloret-2019-impact","url":"https:\/\/aclanthology.org\/R19-1146.pdf","title":"The Impact of Rule-Based Text Generation on the Quality of Abstractive Summaries","abstract":"In this paper we describe how an abstractive text summarization method improved the informativeness of automatic summaries by integrating syntactic text simplification, subject-verb-object concept frequency scoring and a set of rules that transform text into its semantic representation. We analyzed the impact of each component of our approach on the quality of generated summaries and tested it on the DUC 2002 dataset. Our experiments showed that our approach outperformed other state-of-the-art abstractive methods while maintaining acceptable linguistic quality and redundancy rate.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work has been partially funded by the University of Alicante (Spain), Generalitat Valenciana and the Spanish Government through the projects SIIA (PROMETEU\/2018\/089), LIVING-LANG (RTI2018-094653-B-C22), INTEGER (RTI2018-094649-B-I00) and Red iGLN (TIN2017-90773-REDT).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2022-emocaps","url":"https:\/\/aclanthology.org\/2022.findings-acl.126.pdf","title":"EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition","abstract":"Emotion recognition in conversation (ERC) aims to analyze the speaker's state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. In order to extract multi-modal information and the emotional tendency of the utterance effectively, we propose a new structure named Emoformer to extract multi-modal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. In experiments with two benchmark datasets, our model shows better performance than the existing state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"newman-1988-combinatorial","url":"https:\/\/aclanthology.org\/A88-1033.pdf","title":"Combinatorial Disambiguation","abstract":"The disambiguation of sentences is a combinatorial problem. This paper describes a method for treating it as such, directly, by adapting standard combinatorial search optimizations. 
Traditional disambiguation heuristics are applied but, instead of being embedded in individual decision procedures for specific types of ambiguities, they contribute to numerical weights that are considered by a single global optimizer. The result is increased power and simpler code. The method is being implemented for a machine translation project, but could be adapted to any natural language system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chou-etal-2020-combining","url":"https:\/\/aclanthology.org\/2020.rocling-1.7.pdf","title":"Combining Dependency Parser and GNN models for Text Classification","abstract":"As the amount of data increases, manually classifying texts is expensive. Therefore, automated text classification has become important, such as spam detection, news classification, and sentiment analysis. Recently, deep learning models in natural language are roughly divided into two categories: sequential and graph-based. The sequential models usually use RNN and CNN, as well as the BERT model and its variants; in recent years, researchers have started to apply graph-based deep learning models to NLP, using word co-occurrence and TF-IDF weights to build graphs in order to learn the features of words and documents for classification.\nIn the experiments, we use different datasets, MR, R8, R52 and Ohsumed, for verification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"falenska-cetinoglu-2021-assessing","url":"https:\/\/aclanthology.org\/2021.gebnlp-1.9.pdf","title":"Assessing Gender Bias in Wikipedia: Inequalities in Article Titles","abstract":"Potential gender biases existing in Wikipedia's content can contribute to biased behaviors in a variety of downstream NLP systems. Yet, efforts in understanding what inequalities in portraying women and men occur in Wikipedia have so far focused only on biographies, leaving open the question of how often such harmful patterns occur in other topics. In this paper, we investigate gender-related asymmetries in Wikipedia titles from all domains. We assess that for only half of gender-related articles, i.e., articles with words such as women or male in their titles, symmetrical counterparts describing the same concept for the other gender (and clearly stating it in their titles) exist. Among the remaining imbalanced cases, the vast majority of articles concern sports- and social-related issues. We provide insights on how such asymmetries can influence other Wikipedia components and propose steps towards reducing the frequency of observed patterns.","label_nlp4sg":1,"task":["Assessing Gender Bias"],"method":["insights"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"We thank P\u0131nar Arp\u0131nar-Av\u015far for pointing out real-world sport title inequalities. 
The second author is funded by DFG via project CE 326\/1-1 \"Computational Structural Analysis of German-Turkish Code-Switching\" (SAGT).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ramsay-mansour-2011-exploiting","url":"https:\/\/aclanthology.org\/R11-1062.pdf","title":"Exploiting Hidden Morphophonemic Constraints for Finding the Underlying Forms of `weak' Arabic Verbs","abstract":"We present a treatment of Arabic morphology which allows us to deal with 'weak' verbs by paying attention to the underlying phonological process. This provides us with a very clean way of thinking about such verbs, and also makes maintenance of the lexicon very straightforward.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sandrih-etal-2019-development","url":"https:\/\/aclanthology.org\/R19-1122.pdf","title":"Development and Evaluation of Three Named Entity Recognition Systems for Serbian - The Case of Personal Names","abstract":"In this paper we present a rule- and lexicon-based system for the recognition of Named Entities (NE) in Serbian newspaper texts that was used to prepare a gold standard annotated with personal names. It was further used to prepare training sets for four different levels of annotation, which were further used to train two Named Entity Recognition (NER) systems: Stanford and spaCy. All obtained models, together with the rule- and lexicon-based system, were evaluated on two sample texts: a part of the gold standard and an independent newspaper text of approximately the same size. The results show that the rule- and lexicon-based system outperforms trained models in all four scenarios (measured by F1), while Stanford models have the highest recall. The produced models are incorporated into a Web platform NER&Beyond that provides various NE-related functions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by Serbian Ministry of Education and Science under the grants #III 47003 and 178006.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dowling-etal-2018-smt","url":"https:\/\/aclanthology.org\/W18-2202.pdf","title":"SMT versus NMT: Preliminary comparisons for Irish","abstract":"In this paper, we provide a preliminary comparison of statistical machine translation (SMT) and neural machine translation (NMT) for English\u2192Irish in the fixed domain of public administration. We discuss the challenges for SMT and NMT of a less-resourced language such as Irish, and show that while an out-of-the-box NMT system may not fare quite as well as our tailor-made domain-specific SMT system, the future may still be promising for EN\u2192GA NMT.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was part-funded by the Department of Culture, Heritage and the Gaeltacht (DCHG) and is also supported by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2016) and is co-funded by the European Regional Development Fund. 
We would also like to thank the four anonymous reviewers for their useful comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chai-2000-evaluation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/259.pdf","title":"Evaluation of a Generic Lexical Semantic Resource in Information Extraction","abstract":"We have created an information extraction system that allows users to train the system on a domain of interest. The system helps to maximize the effect of user training by applying WordNet to rule generation and validation. The results show that, with careful control, WordNet is helpful in generating useful rules to cover more instances and hence improve the overall performance. This is particularly true when the training set is small, where F-measure is increased from 65% to 72%. However, the impact of WordNet diminishes as the size of training data increases. This paper describes our experience in applying WordNet to this system and gives an evaluation of such an effort.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank Alan Biermann for insightful discussions and guidance; Jerry Hobbs for the finite state rules for the Partial Parser; Amit Bagga for developing the Tokenizer and the Semantic Classifier; and Robert McGough for his contributions to this manuscript.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2000-query","url":"https:\/\/aclanthology.org\/W00-1313.pdf","title":"Query Translation in Chinese-English Cross-Language Information Retrieval","abstract":"This paper proposes a new query translation method based on the mutual information matrices of terms in the Chinese and English corpora. Instead of looking up a bilingual phrase dictionary, a compositional phrase (one whose translation can be derived from the translations of its components) in the query can be indirectly translated via a general-purpose Chinese-English dictionary look-up procedure. A novel selection method for translations of query terms is also presented in detail. Our query translation method ultimately constructs an English query in which each query term has a weight. The evaluation results show that the retrieval performance achieved by our query translation method is about 73% of that of monolingual information retrieval and is about 28% higher than that of simple word-by-word translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to express their appreciation to those interpreters of computer manuals. Without their selfless contribution, our experiment would be impossible. Thanks to the anonymous reviewers for their helpful comments.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"levi-etal-2019-identifying","url":"https:\/\/aclanthology.org\/D19-5004.pdf","title":"Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues","abstract":"The blurry line between nefarious fake news and protected-speech satire has been a notorious struggle for social media platforms. 
Further to the efforts of reducing exposure to misinformation on social media, purveyors of fake news have begun to masquerade as satire sites to avoid being demoted. In this work, we address the challenge of automatically classifying fake news versus satire. Previous work has studied whether fake news and satire can be distinguished based on language differences. Contrary to fake news, satire stories are usually humorous and carry some political or social message. We hypothesize that these nuances could be identified using semantic and linguistic cues. Consequently, we train a machine learning method using semantic representation, with a state-of-the-art contextual language model, and with linguistic features based on textual coherence metrics. Empirical evaluation attests to the merits of our approach compared to the language-based baseline and sheds light on the nuances between fake news and satire. As avenues for future work, we consider studying additional linguistic features related to the humor aspect, and enriching the data with current news events, to help identify a political or social message.","label_nlp4sg":1,"task":["Identifying Nuances in Fake News vs. Satire"],"method":["contextual language model"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"felt-riloff-2020-recognizing","url":"https:\/\/aclanthology.org\/2020.figlang-1.20.pdf","title":"Recognizing Euphemisms and Dysphemisms Using Sentiment Analysis","abstract":"This paper presents the first research aimed at recognizing euphemistic and dysphemistic phrases with natural language processing. Euphemisms soften references to topics that are sensitive, disagreeable, or taboo. Conversely, dysphemisms refer to sensitive topics in a harsh or rude way. For example, \"passed away\" and \"departed\" are euphemisms for death, while \"croaked\" and \"six feet under\" are dysphemisms for death. Our work explores the use of sentiment analysis to recognize euphemistic and dysphemistic language. First, we identify near-synonym phrases for three topics (FIRING, LYING, and STEALING) using a bootstrapping algorithm for semantic lexicon induction. Next, we classify phrases as euphemistic, dysphemistic, or neutral using lexical sentiment cues and contextual sentiment analysis. We introduce a new gold standard data set and present our experimental results for this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully thank Shelley Felt, Shauna Felt, and Claire Moore for their help annotating the gold data for this research.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bak-oh-2020-speaker","url":"https:\/\/aclanthology.org\/2020.acl-main.568.pdf","title":"Speaker Sensitive Response Evaluation Model","abstract":"Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground truth response and rate many of the appropriate responses as inappropriate if they deviate from the ground truth. 
One approach to resolve this problem is to consider the similarity of the generated response with the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining the different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms the other existing evaluation metrics in terms of high correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Jeongmin Byun for building the annotation webpage, and the anonymous reviewers for helpful questions and comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blaheta-charniak-1999-automatic","url":"https:\/\/aclanthology.org\/P99-1066.pdf","title":"Automatic Compensation for Parser Figure-of-Merit Flaws","abstract":"Best-first chart parsing utilises a figure of merit (FOM) to efficiently guide a parse by first attending to those edges judged better. In the past it has usually been static; this paper will show that with some extra information, a parser can compensate for FOM flaws which otherwise slow it down. Our results are faster than the prior best by a factor of 2.5; and the speedup is won with no significant decrease in parser accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-etal-2018-natural","url":"https:\/\/aclanthology.org\/N18-2010.pdf","title":"Natural Language Generation by Hierarchical Decoding with Linguistic Patterns","abstract":"Natural language generation (NLG) is a critical component in spoken dialogue systems. Classic NLG can be divided into two phases: (1) sentence planning: deciding on the overall sentence structure, (2) surface realization: determining specific word forms and flattening the sentence structure into a string. Many simple NLG models are based on recurrent neural networks (RNN) and sequence-to-sequence (seq2seq) models, which basically contain an encoder-decoder structure; these NLG models generate sentences from scratch by jointly optimizing sentence planning and surface realization using a simple cross entropy loss training criterion. However, the simple encoder-decoder architecture usually suffers from generating complex and long sentences, because the decoder has to learn all grammar and diction knowledge. This paper introduces a hierarchical decoding NLG model based on linguistic patterns at different levels, and shows that the proposed method outperforms the traditional one with a smaller model size. 
Furthermore, the design of the hierarchical decoding is flexible and easily extensible to various NLG systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank reviewers for their insightful comments on the paper. The authors are supported by the Institute for Information Industry, Ministry of Science and Technology of Taiwan, Google Research, Microsoft Research, and MediaTek Inc.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"song-etal-2012-linguistic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/785_Paper.pdf","title":"Linguistic Resources for Handwriting Recognition and Translation Evaluation","abstract":"We describe efforts to create corpora to support development and evaluation of handwriting recognition and translation technology. LDC has developed a stable pipeline and infrastructures for collecting and annotating handwriting linguistic resources to support the evaluation of MADCAT and OpenHaRT. We collect handwritten samples of pre-processed Arabic and Chinese data that has already been translated into English and is used in the GALE program. To date, LDC has recruited more than 600 scribes and collected, annotated and released more than 225,000 handwriting images. Most linguistic resources created for these programs will be made available to the larger research community by publishing in LDC's catalog. The phase 1 MADCAT corpus is now available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge and appreciate the work of David Lee on technical infrastructure of MADCAT. This work was supported in part by the Defense Advanced Research Projects Agency, MADCAT Program Grant No. HR0011-08-1-004. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"carberry-etal-2003-understanding","url":"https:\/\/aclanthology.org\/W03-2101.pdf","title":"Understanding Information Graphics: A Discourse-Level Problem","abstract":"Information graphics that appear in newspapers and magazines generally have a message that the viewer is intended to recognize. This paper argues that understanding such information graphics is a discourse-level problem. In particular, it requires assimilating information from multiple knowledge sources to recognize the intended message of the graphic, just as recognizing intention in text does. Moreover, when an article is composed of text and graphics, the intended message of the information graphic (its discourse intention) must be integrated into the discourse structure of the surrounding text and contributes to the overall discourse intention of the article. This paper describes how we extend plan-based techniques that have been used for understanding traditional discourse to the understanding of information graphics. 
This work is part of a project to develop an interactive natural language system that provides sight-impaired users with access to information graphics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"karttunen-1986-patr","url":"https:\/\/aclanthology.org\/C86-1016.pdf","title":"D-PATR: A Development Environment for Unification-Based Grammars","abstract":", and functional unification grammar (Kay). At the other end of the range covered by D-PATR are unification-based categorial grammars (Klein, Steedman, Uszkoreit, Wittenburg) in which all the syntactic information is incorporated in the lexicon and the remaining few combinatorial rules that build phrases are function application and composition.\nDefinite-clause grammars (Pereira and Warren) can also be encoded in the PATR formalism.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1986,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-etal-2007-building","url":"https:\/\/aclanthology.org\/W07-1521.pdf","title":"Building Chinese Sense Annotated Corpus with the Help of Software Tools","abstract":"This paper presents the building procedure of a Chinese sense annotated corpus. A set of software tools is designed to help human annotators to accelerate the annotation speed and keep the consistency. The software tools include 1) a tagger for word segmentation and POS tagging, 2) an annotating interface responsible for sense description in the lexicon and sense annotation in the corpus, 3) a checker for consistency keeping, 4) a transformer responsible for transforming the text file to XML format, and 5) a counter for calculating sense frequency distributions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ghosh-etal-2010-clause","url":"https:\/\/aclanthology.org\/W10-3603.pdf","title":"Clause Identification and Classification in Bengali","abstract":"This paper reports on the development of clause identification and classification techniques for the Bengali language. A syntactic rule based model has been used to identify the clause boundary. For clause type identification a Conditional Random Field (CRF) based statistical model has been used. The clause identification system and the clause classification system demonstrated 73% and 78% precision values, respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2018-automatic","url":"https:\/\/aclanthology.org\/L18-1230.pdf","title":"Automatic Wordnet Mapping: from CoreNet to Princeton WordNet","abstract":"CoreNet is a lexico-semantic network of 73,100 Korean word senses, which are categorized under 2,937 semantic categories organized in a taxonomy. 
Recently, to foster the more widespread use of CoreNet, there was an attempt to map the semantic categories of CoreNet into synsets of Princeton WordNet via lexical relations such as synonymy, hyponymy, and hypernymy. One of the limitations of the existing mapping is that it is only focused on mapping the semantic categories, but not on mapping the word senses, which constitute the majority (96%) of CoreNet. To help bridge the gap between CoreNet and WordNet, we introduce an automatic mapping approach to link the word senses of CoreNet to WordNet synsets. The evaluation shows that our approach successfully maps 38,028 previously unmapped word senses into WordNet synsets with a precision of 91.2% (\u00b11.14 with 99% confidence).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was financially supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"attamimi-etal-2015-learning","url":"https:\/\/aclanthology.org\/D15-1269.pdf","title":"Learning Word Meanings and Grammar for Describing Everyday Activities in Smart Environments","abstract":"If intelligent systems are to interact with humans in a natural manner, the ability to describe daily life activities is important. To achieve this, sensing human activities by capturing multimodal information is necessary. In this study, we consider a smart environment for sensing activities with respect to realistic scenarios. We next propose a sentence generation system from observed multimodal information in a bottom up manner using multilayered multimodal latent Dirichlet allocation and Bayesian hidden Markov models. We evaluate the grammar learning and sentence generation as a complete process within a realistic setting. The experimental result reveals the effectiveness of the proposed method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partly supported by JSPS KAKENHI 26280096.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"skantze-gustafson-2009-attention","url":"https:\/\/aclanthology.org\/W09-3945.pdf","title":"Attention and Interaction Control in a Human-Human-Computer Dialogue Setting","abstract":"This paper presents a simple, yet effective model for managing attention and interaction control in multimodal spoken dialogue systems. The model allows the user to switch attention between the system and other humans, and the system to stop and resume speaking. 
An evaluation in a tutoring setting shows that the user's attention can be effectively monitored using head pose tracking, and that this is a more reliable method than using push-to-talk.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by MonAMI, an Integrated Project under the European Commission's 6 th Framework Program (IP-035147), and the Swedish research council project GENDIAL (VR #2007-6431).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hillard-etal-2004-improving","url":"https:\/\/aclanthology.org\/N04-4018.pdf","title":"Improving Automatic Sentence Boundary Detection with Confusion Networks","abstract":"We extend existing methods for automatic sentence boundary detection by leveraging multiple recognizer hypotheses in order to provide robustness to speech recognition errors. For each hypothesized word sequence, an HMM is used to estimate the posterior probability of a sentence boundary at each word boundary. The hypotheses are combined using confusion networks to determine the overall most likely events. Experiments show improved detection of sentences for conversational telephone speech, though results are mixed for broadcast news.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by DARPA contract no. MDA972-02-C-0038, and made use of prosodic feature extraction and modeling tools developed under NSF-STIMULATE grant IRI-9619921. Any opinions, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these agencies.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hui-2002-measuring","url":"https:\/\/aclanthology.org\/W02-1609.pdf","title":"Measuring User Acceptability of Machine Translations to Diagnose System Errors: An Experience Report","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sedoc-ungar-2019-role","url":"https:\/\/aclanthology.org\/W19-3808.pdf","title":"The Role of Protected Class Word Lists in Bias Identification of Contextualized Word Representations","abstract":"Systemic bias in word embeddings has been widely reported and studied, and efforts made to debias them; however, new contextualized embeddings such as ELMo and BERT are only now being similarly studied. Standard debiasing methods require large, heterogeneous lists of target words to identify the \"bias subspace\". 
We show that using new contextualized word embeddings in conceptor debiasing allows us to more accurately debias word embeddings by breaking target word lists into more homogeneous subsets and then combining (\"Or'ing\") the debiasing conceptors of the different subsets.","label_nlp4sg":1,"task":["Bias Identification of Contextualized Word Representations"],"method":["contextualized word embeddings"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bell-etal-2019-context","url":"https:\/\/aclanthology.org\/W19-4410.pdf","title":"Context is Key: Grammatical Error Detection with Contextual Word Representations","abstract":"Grammatical error detection (GED) in nonnative writing requires systems to identify a wide range of errors in text written by language learners. Error detection as a purely supervised task can be challenging, as GED datasets are limited in size and the label distributions are highly imbalanced. Contextualized word representations offer a possible solution, as they can efficiently capture compositional information in language and can be optimized on large amounts of unsupervised data. In this paper, we perform a systematic comparison of ELMo, BERT and Flair embeddings (Peters et al., 2017; Devlin et al., 2018; Akbik et al., 2018) on a range of public GED datasets, and propose an approach to effectively integrate such representations in current methods, achieving a new state of the art on GED. We further analyze the strengths and weaknesses of different contextual embeddings for the task at hand, and present detailed analyses of their impact on different types of errors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their valuable feedback. Marek Rei and Helen Yannakoudakis were supported by Cambridge Assessment, University of Cambridge.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"miller-elsner-2017-click","url":"https:\/\/aclanthology.org\/W17-0115.pdf","title":"Click reduction in fluent speech: a semi-automated analysis of Mangetti Dune !Xung","abstract":"We compare click production in fluent speech to previously analyzed clear productions in the Namibian Kx'a language Mangetti Dune !Xung. Using a rule-based software system, we extract clicks from recorded folktales, with click detection accuracy about 65% f-score for one storyteller, reducing manual annotation time by two thirds; we believe similar methods will be effective for other loud, short consonants like ejectives. We use linear discriminant analysis to show that the four click types of !Xung are harder to differentiate in the folktales than in clear productions, and conduct a feature analysis which suggests that rapid production obscures some acoustic cues to click identity. An analysis of a second storyteller suggests that clicks can also be phonetically reduced due to language attrition. 
We argue that analysis of fluent speech, especially where it can be semi-automated, is an important addition to analysis of clear productions in understanding the phonology of endangered languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mehdi Reza Ghola Lalani, Muyoto Kazungu and Benjamin Niwe Gumi. This work was funded by ELDP SG0123 to the first author and NSF 1422987 to the second author.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jahangir-etal-2012-n","url":"https:\/\/aclanthology.org\/W12-5211.pdf","title":"N-gram and Gazetteer List Based Named Entity Recognition for Urdu: A Scarce Resourced Language","abstract":"Extraction of named entities (NEs) from text is an important operation in many natural language processing applications like information extraction, question answering, machine translation etc. Since the early 1990s, researchers have taken greater interest in this field and a lot of work has been done regarding Named Entity Recognition (NER) in different languages of the world. Unfortunately, Urdu, a scarce-resourced language, has not been taken into account. In this paper we present a statistical Named Entity Recognition (NER) system for the Urdu language using two basic n-gram models, namely unigram and bigram. We have also made use of gazetteer lists with both techniques, as well as some smoothing techniques with the bigram NER tagger. This NER system is capable of recognizing 5 classes of NEs using training data containing 2313 NEs and test data containing 104 NEs. The unigram NER tagger using gazetteer lists achieves up to 65.21% precision, 88.63% recall and 75.14% f-measure, while the bigram NER tagger using gazetteer lists and Backoff smoothing achieves up to 66.20% precision, 88.18% recall and 75.83% f-measure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bouamor-etal-2013-building-specialized","url":"https:\/\/aclanthology.org\/I13-1125.pdf","title":"Building Specialized Bilingual Lexicons Using Word Sense Disambiguation","abstract":"This paper presents an extension of the standard approach used for bilingual lexicon extraction from comparable corpora. We study the ambiguity problem revealed by the seed bilingual dictionary used to translate context vectors and augment the standard approach by a Word Sense Disambiguation process. Our aim is to identify the translations of words that are more likely to give the best representation of words in the target language. 
On two specialized French-English and Romanian-English comparable corpora, empirical experimental results show that the proposed method consistently outperforms the standard approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kryscinski-etal-2018-improving","url":"https:\/\/aclanthology.org\/D18-1207.pdf","title":"Improving Abstraction in Text Summarization","abstract":"Abstractive text summarization aims to shorten long text documents into a human-readable form that contains the most important facts from the original document. However, the level of actual abstraction as measured by novel phrases that do not appear in the source document remains low in existing approaches. We propose two techniques to improve the level of abstraction of generated summaries. First, we decompose the decoder into a contextual network that retrieves relevant parts of the source document, and a pretrained language model that incorporates prior knowledge about language generation. Second, we propose a novelty metric that is optimized directly through policy learning to encourage the generation of novel phrases. Our model achieves results comparable to state-of-the-art models, as determined by ROUGE scores and human evaluations, while achieving a significantly higher level of abstraction as measured by n-gram overlap with the source document.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hohenecker-etal-2020-systematic","url":"https:\/\/aclanthology.org\/2020.emnlp-main.690.pdf","title":"Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction","abstract":"The goal of open information extraction (OIE) is to extract facts from natural language text, and to represent them as structured triples of the form subject, predicate, object. For example, given the sentence \u00bbBeethoven composed the Ode to Joy.\u00ab, we are expected to extract the triple Beethoven, composed, Ode to Joy. In this work, we systematically compare different neural network architectures and training approaches, and improve the performance of the currently best models on the OIE16 benchmark (Stanovsky and Dagan, 2016) by 0.421 F1 score and 0.420 AUC-PR, respectively, in our experiments (i.e., by more than 200% in both cases). Furthermore, we show that appropriate problem and loss formulations often affect the performance more than the network architecture.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Frank Mtumbuka was supported by the Rhodes Trust under a Rhodes Scholarship. This work was also supported by the Alan Turing Institute under the EPSRC grant EP\/N510129\/1, the AXA Research Fund, the ESRC grant \u00bbUnlocking the Potential of AI for Law\u00ab, and the EPSRC studentship OUCS\/EPSRC-NPIF\/VK\/1123106. 
We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP\/P020275\/1) and GPU computing support by Scan Computers International Ltd.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2020-active-learning","url":"https:\/\/aclanthology.org\/2020.acl-main.738.pdf","title":"Active Learning for Coreference Resolution using Discrete Annotation","abstract":"We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent. This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much more efficient in terms of the performance obtained per annotation budget. In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour. Future work can use our annotation protocol to effectively develop coreference models for new domains. Our code is publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Christopher Clark, Terra Blevins, and the anonymous reviewers for their helpful feedback, and Aaron Jaech, Mason Kamb, Madian Khabsa, Kaushal Mangipudi, Nayeon Lee, and Anisha Uppugonduri for their participation in our timing experiments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"borgida-1978-critical","url":"https:\/\/aclanthology.org\/J78-3005.pdf","title":"A Critical Look at a Formal Model for Stratificational Linguistics","abstract":"We present here a formalization of the stratificational model of linguistics proposed by Sampson [13] and investigate its generative power. In addition to uncovering a number of counterintuitive properties, the results presented here bear on meta-theoretic claims found in the linguistic literature. For example, Postal [11] claimed that stratificational theory was equivalent to context-free phrase-structure grammar, and hence not worthy of further interest. We show, however, that Sampson's model, and several of its restricted versions, allow a far wider range of generative powers. In the cases where the model appears to be too powerful, we suggest possible alterations which may make it more acceptable.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1978,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"specia-2021-multimodal","url":"https:\/\/aclanthology.org\/2021.mmtlrl-1.5.pdf","title":"Multimodal Simultaneous Machine Translation","abstract":"Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible. Therefore, translation has to start with an incomplete source text, which is read progressively, creating the need for anticipation. In this talk I will present work where we seek to understand whether the addition of visual information can compensate for the missing source context.
We analyse the impact of different multimodal approaches and visual features on state-of-the-art SiMT frameworks, including fixed and dynamic policy approaches using reinforcement learning. Our results show that visual context is helpful and that visually-grounded models based on explicit object region information perform the best. Our qualitative analysis illustrates cases where only the multimodal systems are able to translate correctly from English into gender-marked languages, as well as deal with differences in word order, such as adjective-noun placement between English and French.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meaney-2020-crossing","url":"https:\/\/aclanthology.org\/2020.acl-srw.24.pdf","title":"Crossing the Line: Where do Demographic Variables Fit into Humor Detection?","abstract":"Recent shared tasks in humor classification have struggled with two issues: scope and subjectivity. Regarding scope, many task datasets either comprise a highly constrained genre of humor which does not broadly represent the genre, or the data collection is so indiscriminate that the inter-annotator agreement on its comic content is drastically low. In terms of subjectivity, these tasks typically average over all annotators' judgments, in spite of the fact that humor is highly subjective and varies both between and within cultures. We propose a dataset which maintains a broad scope but which addresses subjectivity. We will collect demographic information about the data's humor annotators in order to bin ratings more sensibly. We also suggest the addition of an 'offensive' label to reflect the fact a text may be humorous to one group, but offensive to another. This would allow for more meaningful shared tasks and could lead to better performance on downstream applications, such as content moderation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP\/L016427\/1) and the University of Edinburgh.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vazquez-etal-2021-differences","url":"https:\/\/aclanthology.org\/2021.acl-srw.35.pdf","title":"On the differences between BERT and MT encoder spaces and how to address them in translation tasks","abstract":"Various studies show that pretrained language models such as BERT cannot straightforwardly replace encoders in neural machine translation despite their enormous success in other tasks. This is even more astonishing considering the similarities between the architectures. This paper sheds some light on the embedding spaces they create, using average cosine similarity, contextuality metrics and measures for representational similarity for comparison, revealing that BERT and NMT encoder representations look significantly different from one another. In order to address this issue, we propose a supervised transformation from one into the other using explicit alignment and fine-tuning. 
Our results demonstrate the need for such a transformation to improve the applicability of BERT in MT.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement \u2116 771113). We also acknowledge the CSC - IT Center for Science Ltd. for computational resources.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luque-etal-2012-spectral","url":"https:\/\/aclanthology.org\/E12-1042.pdf","title":"Spectral Learning for Non-Deterministic Dependency Parsing","abstract":"In this paper we study spectral learning methods for non-deterministic split head-automata grammars, a powerful hidden-state formalism for dependency parsing. We present a learning algorithm that, like other spectral methods, is efficient and non-susceptible to local minima. We show how this algorithm can be formulated as a technique for inducing hidden structure from distributions computed by forward-backward recursions. Furthermore, we also present an inside-outside algorithm for the parsing model that runs in cubic time, hence maintaining the standard parsing costs for context-free grammars.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Gabriele Musillo and the anonymous reviewers for providing us with helpful comments. This work was supported by a Google Research Award and by the European Commission (PASCAL2 NoE FP7-216886, XLike STREP FP7-288342). Borja Balle was supported by an FPU fellowship (AP2008-02064) of the Spanish Ministry of Education. The Spanish Ministry of Science and Innovation supported Ariadna Quattoni (JCI-2009-04240) and Xavier Carreras (RYC-2008-02223 and \"KNOW2\" TIN2009-14715-C04-04).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"workshops-2013-ttc","url":"https:\/\/aclanthology.org\/2013.mtsummit-european.22.pdf","title":"TTC: Terminology Extraction, Translation Tools and Comparable Corpora Cross-lingual Knowledge Extraction (XLike)","abstract":"The TTC project leveraged machine translation (MT) systems, computer-assisted translation (CAT) tools and multilingual content (corpora and terminology) management tools by developing methods and tools that allow users to generate bilingual terminologies automatically from comparable (non-parallel) corpora in seven languages: five European languages (English, French, German, Spanish, Latvian) as well as Chinese and Russian, and twelve translation directions. The TTC project has developed generic methods and tools for the automatic extraction and alignment of terminologies, in order to break the lexical acquisition bottleneck in both statistical and rule-based MT. It has also developed and adapted tools for gathering and managing comparable corpora, collected from the web, and managing terminologies. In particular, a topical web crawler and the MyEuroTermBank open terminology platform have been developed. The key output of the project is the TTC web platform.
It allows users to create thematic corpora given some clues (such as terms or documents on a specific domain), to expand a given corpus, to create comparable corpora from seeds in two languages, to choose the tools to apply for terminology extraction, to extract monolingual terminology from such corpora, to translate bilingual terminologies, and to export monolingual or bilingual terminologies in order to use them easily in automatic and semi-automatic translation tools. For generating bilingual terminologies automatically from comparable corpora, innovative approaches have been researched, implemented and evaluated that constitute the specificities of the TTC approach: (1) topical web crawling, which gathers comparable corpora from domain-specific Web portals or uses query-based crawling technologies with several types of conditional analysis; (2) for monolingual term extraction, two different techniques, a knowledge-rich and a knowledge-poor approach, were followed, with a massive use of morphological knowledge to handle morphologically complex lexical items; (3) for bilingual term extraction, a unified treatment of single-word and multi-word terms was designed, as well as a hybrid method that uses both the internal structure and the context information of the term.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-ess-dykema-etal-2009-translation","url":"https:\/\/aclanthology.org\/2009.mtsummit-government.8.pdf","title":"Translation Memory Technology Assessment","abstract":"\u2022 NVTC translates many other genres as well: \u2022 The quality of the translation for Arabic and Chinese.\n-Journal","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"edouard-etal-2017-building","url":"https:\/\/doi.org\/10.26615\/978-954-452-049-6_029.pdf","title":"Building timelines of soccer matches from Twitter","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bansal-etal-2014-structured","url":"https:\/\/aclanthology.org\/P14-1098.pdf","title":"Structured Learning for Taxonomy Induction with Belief Propagation","abstract":"We present a structured learning approach to inducing hypernym taxonomies using a probabilistic graphical model formulation. Our model incorporates heterogeneous relational evidence about both hypernymy and siblinghood, captured by semantic features based on patterns and statistics from Web n-grams and Wikipedia abstracts. For efficient inference over taxonomy structures, we use loopy belief propagation along with a directed spanning tree algorithm for the core hypernymy factor. To train the system, we extract sub-structures of WordNet and discriminatively learn to reproduce them, using adaptive subgradient stochastic optimization.
On the task of reproducing sub-hierarchies of WordNet, our approach achieves a 51% error reduction over a chance baseline, including a 15% error reduction due to the non-hypernym-factored sibling features. On a comparison setup, we find up to 29% relative error reduction over previous work on ancestor F1.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their insightful comments. This work was supported by BBN under DARPA contract HR0011-12-C-0014, 973 Program China Grants 2011CBA00300, 2011CBA00301, and NSFC Grants 61033001, 61361136003.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tran-etal-2017-named","url":"https:\/\/aclanthology.org\/I17-1057.pdf","title":"Named Entity Recognition with Stack Residual LSTM and Trainable Bias Decoding","abstract":"Recurrent Neural Network models are the state-of-the-art for Named Entity Recognition (NER). We present two innovations to improve the performance of these models. The first innovation is the introduction of residual connections between the layers of the Stacked Recurrent Neural Network model to address the degradation problem of deep neural networks. The second innovation is a bias decoding mechanism that allows the trained system to adapt to non-differentiable and externally computed objectives, such as the entity-based F-measure. Our work improves the state-of-the-art results for both Spanish and English languages on the standard train\/development\/test split of the CoNLL 2003 Shared Task NER dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blache-2003-meta","url":"https:\/\/aclanthology.org\/W03-3004.pdf","title":"Meta-Level Constraints for Linguistic Domain Interaction","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sauper-barzilay-2009-automatically","url":"https:\/\/aclanthology.org\/P09-1024.pdf","title":"Automatically Generating Wikipedia Articles: A Structure-Aware Approach","abstract":"In this paper, we investigate an approach for creating a comprehensive textual overview of a subject composed of information drawn from the Internet. We use the high-level structure of human-authored texts to automatically induce a domain-specific template for the topic structure of a new overview. The algorithmic innovation of our work is a method to learn topic-specific extractors for content selection jointly for the entire template. We augment the standard perceptron algorithm with a global integer linear programming formulation to optimize both local fit of information into each topic and global coherence across the entire overview.
The results of our evaluation confirm the benefits of incorporating structural information into the content selection process.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, and grant IIS-0835652) and NIH (grant V54LM008748). Thanks to Mike Collins, Julia Hirschberg, and members of the MIT NLP group for their helpful suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cai-etal-2007-nus","url":"https:\/\/aclanthology.org\/S07-1053.pdf","title":"NUS-ML: Improving Word Sense Disambiguation Using Topic Features","abstract":"We participated in the SemEval-1 English coarse-grained all-words task (task 7), English fine-grained all-words task (task 17, subtask 3) and English coarse-grained lexical sample task (task 17, subtask 1). The same method with different labeled data is used for the tasks; SemCor is the labeled corpus used to train our system for the all-words tasks while the labeled corpus that is provided is used for the lexical sample task. The knowledge sources include part-of-speech of neighboring words, single words in the surrounding context, local collocations, and syntactic patterns. In addition, we constructed a topic feature, targeted to capture the global context information, using the latent Dirichlet allocation (LDA) algorithm with an unlabeled corpus. A modified na\u00efve Bayes classifier is constructed to incorporate all the features. We achieved 81.6%, 57.6%, 88.7% for the coarse-grained all-words task, fine-grained all-words task and coarse-grained lexical sample task respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"murray-chiang-2015-auto","url":"https:\/\/aclanthology.org\/D15-1107.pdf","title":"Auto-Sizing Neural Networks: With Applications to n-gram Language Models","abstract":"Neural networks have been shown to improve performance across a range of natural-language tasks. However, designing and training them can be complicated. Frequently, researchers resort to repeated experimentation to pick optimal settings. In this paper, we address the issue of choosing the correct number of units in hidden layers. We introduce a method for automatically adjusting network size by pruning out hidden units through \u2113\u221e,1 and \u21132,1 regularization. We apply this method to language modeling and demonstrate its ability to correctly choose the number of hidden units while maintaining perplexity.
We also include these models in a machine translation decoder and show that these smaller neural models maintain the significant improvements of their unpruned versions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Tomer Levinboim, Antonios Anastasopoulos, and Ashish Vaswani for their helpful discussions, as well as the reviewers for their assistance and feedback.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sellami-etal-2013-exploiting","url":"https:\/\/aclanthology.org\/2013.mtsummit-wpt.5.pdf","title":"Exploiting multiple resources for Japanese to English patent translation","abstract":"This paper describes the development of a Japanese to English translation system using multiple resources and the NTCIR-10 Patent translation collection. The MT system is based on different training data, the Wiktionary as a bilingual dictionary and the Moses decoder. Due to the lack of parallel data in the patent domain, additional training data of the general domain was extracted from Wikipedia. Experiments using the NTCIR-10 Patent translation data collection showed an improvement of the BLEU score when using a 5-gram language model and when adding the data extracted from Wikipedia, but no improvement when adding the Wiktionary.","label_nlp4sg":1,"task":["patent translation"],"method":["data collection"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"odonnell-etal-1998-integrating","url":"https:\/\/aclanthology.org\/W98-0607.pdf","title":"Integrating Referring and Informing in NP Planning","abstract":"Two of the functions of an NP are to refer (identify a particular entity) and to inform (provide new information about an entity). While many NPs may serve only one of these functions, some NPs conflate the functions, not only referring but also providing new information about the referent. For instance, this delicious apple indicates not only which apple the speaker is referring to, but also provides information as to the speaker's appreciation of the apple. This paper describes an implemented NP-planning system which integrates informing into the referring expression generation process. The integration involves allowing informing to influence decisions at each stage of the formation of the referring form, including: the selection of the form of the NP; the choice of the head of a common NP; the choice of the Deictic in common NPs; the choice of restrictive modifiers, and the inclusion of non-referring modifiers. The system is domain-independent, and is presently functioning within a full text generation system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stymne-2020-cross","url":"https:\/\/aclanthology.org\/2020.tlt-1.6.pdf","title":"Cross-Lingual Domain Adaptation for Dependency Parsing","abstract":"We show how we can adapt parsing to low-resource domains by combining treebanks across languages for a parser model with treebank embeddings.
We demonstrate how we can take advantage of in-domain treebanks from other languages, and show that this is especially useful when only out-of-domain treebanks are available for the target language. The method is also extended to low-resource languages by using out-of-domain treebanks from related languages. Two parameter-free methods for applying treebank embeddings at test time are proposed, which give competitive results to tuned methods when applied to Twitter data and transcribed speech. This gives us a method for selecting treebanks and training a parser targeted at any combination of domain and language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thank you to current and former members of the Uppsala parsing group for many fruitful discussions: Ali Basirat, Daniel Dakota, Miryam de Lhoneux, Artur Kulmizev, Joakim Nivre, and Aaron Smith. I would also like to thank the anonymous reviewers for their insightful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2019-detecting","url":"https:\/\/aclanthology.org\/N19-1199.pdf","title":"Detecting dementia in Mandarin Chinese using transfer learning from a parallel corpus","abstract":"Machine learning has shown promise for automatic detection of Alzheimer's disease (AD) through speech; however, efforts are hampered by a scarcity of data, especially in languages other than English. We propose a method to learn a correspondence between independently engineered lexicosyntactic features in two languages, using a large parallel corpus of outof-domain movie dialogue data. We apply it to dementia detection in Mandarin Chinese, and demonstrate that our method outperforms both unilingual and machine translation-based baselines. This appears to be the first study that transfers feature domains in detecting cognitive decline.","label_nlp4sg":1,"task":["Detecting dementia"],"method":["transfer learning"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Kathleen Fraser and Nicklas Linz for their helpful comments and earlier collaboration which inspired this project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-etal-2010-context","url":"https:\/\/aclanthology.org\/D10-1105.pdf","title":"Context Comparison of Bursty Events in Web Search and Online Media","abstract":"In this paper, we conducted a systematic comparative analysis of language in different contexts of bursty topics, including web search, news media, blogging, and social bookmarking. We analyze (1) the content similarity and predictability between contexts, (2) the coverage of search content by each context, and (3) the intrinsic coherence of information in each context. Our experiments show that social bookmarking is a better predictor to the bursty search queries, but news media and social blogging media have a much more compelling coverage. This comparison provides insights on how the search behaviors and social information sharing behaviors of users are correlated to the professional news media in the context of bursty events.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Prof. Kevin Chang for his support in data and useful discussion. 
We thank the three anonymous reviewers for their useful comments. This work is in part supported by the National Science Foundation under award number IIS-0968489.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"han-etal-2015-uir","url":"https:\/\/aclanthology.org\/S15-2111.pdf","title":"UIR-PKU: Twitter-OpinMiner System for Sentiment Analysis in Twitter at SemEval 2015","abstract":"Microblogs are considered We-Media information with many real-time opinions. This paper presents a Twitter-OpinMiner system for Twitter sentiment analysis evaluation at SemEval 2015. Our approach stems from two different angles: topic detection for discovering the sentiment distribution on different topics and sentiment analysis based on a variety of features. Moreover, we also implemented intra-sentence discourse relations for polarity identification. We divided the discourse relations into 4 predefined categories, including continuation, contrast, condition, and cause. These relations help us eliminate polarity ambiguities in compound sentences where both positive and negative sentiments appear. Based on the SemEval 2014 and SemEval 2015 Twitter sentiment analysis task datasets, the experimental results show that Twitter-OpinMiner can effectively recognize opinionated messages and identify the polarities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by Fundamental Research Funds for the Central Universities (3262014T75, 3262015T20), Shenzhen Fundamental Research Program (JCYJ20130401172046450), General Research Fund of Hong Kong (417112). We also thank Liyu Chen, Jianxiong Wu, and anonymous reviewers for their helpful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gupta-etal-2020-reinforced","url":"https:\/\/aclanthology.org\/2020.coling-main.249.pdf","title":"Reinforced Multi-task Approach for Multi-hop Question Generation","abstract":"Question generation (QG) attempts to solve the inverse of the question answering (QA) problem by generating a natural language question given a document and an answer. While sequence to sequence neural models surpass rule-based systems for QG, they are limited in their capacity to focus on more than one supporting fact. For QG, we often require multiple supporting facts to generate high-quality questions. Inspired by recent works on multi-hop reasoning in QA, we take up multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context. We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator. In addition, we also propose a question-aware reward function in a Reinforcement Learning (RL) framework to maximize the utilization of the supporting facts. We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset, HotPotQA. Empirical evaluation shows our model to outperform the single-hop neural question generation models on both automatic evaluation metrics such as BLEU, METEOR, and ROUGE, and human evaluation metrics for quality and coverage of the generated questions.
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Asif Ekbal gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, and implemented by Digital India Corporation (formerly Media Lab Asia).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bell-schafer-2013-semantic","url":"https:\/\/aclanthology.org\/W13-0601.pdf","title":"Semantic transparency: challenges for distributional semantics","abstract":"Using data from Reddy et al. (2011), we present a series of regression models of semantic transparency in compound nouns. The results indicate that the frequencies of the compound constituents, the semantic relation between the constituents, and metaphorical shift of a constituent or of the compound as a whole, all contribute to the overall perceived level of transparency. While not proposing an actual distributional model of transparency, we hypothesise that incorporating this information into such a model would improve its success and we suggest some ways this might be possible.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their advice and comments, and we especially thank Aurelie Herbelot for a most fruitful abundance of the same.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2019-incorporating-contextual","url":"https:\/\/aclanthology.org\/D19-1114.pdf","title":"Incorporating Contextual and Syntactic Structures Improves Semantic Similarity Modeling","abstract":"Semantic similarity modeling is central to many NLP problems such as natural language inference and question answering. Syntactic structures interact closely with semantics in learning compositional representations and alleviating long-range dependency issues. However, such structure priors have not been well exploited in previous work for semantic modeling. To examine their effectiveness, we start with the Pairwise Word Interaction Model, one of the best models according to a recent reproducibility study, then introduce components for modeling context and structure using multi-layer BiLSTMs and TreeLSTMs. In addition, we introduce residual connections to the deep convolutional neural network component of the model.
Extensive evaluations on eight benchmark datasets show that incorporating structural information contributes to consistent improvements over strong baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and enabled by computational resources provided by Compute Ontario and Compute Canada.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gallina-etal-2019-kptimes","url":"https:\/\/aclanthology.org\/W19-8617.pdf","title":"KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents","abstract":"Keyphrase generation is the task of predicting a set of lexical units that conveys the main content of a source text. Existing datasets for keyphrase generation are only readily available for the scholarly domain and include nonexpert annotations. In this paper we present KPTimes, a large-scale dataset of news texts paired with editor-curated keyphrases. Exploring the dataset, we show how editors tag documents, and how their annotations differ from those found in existing datasets. We also train and evaluate state-of-the-art neural keyphrase generation models on KPTimes to gain insights on how well they perform on the news domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schumaker-2010-analysis","url":"https:\/\/aclanthology.org\/W10-0502.pdf","title":"An Analysis of Verbs in Financial News Articles and their Impact on Stock Price","abstract":"Article terms can move stock prices. By analyzing verbs in financial news articles and coupling their usage with a discrete machine learning algorithm tied to stock price movement, we can build a model of price movement based upon the verbs used, to not only identify those terms that can move a stock price the most, but also whether they move the predicted price up or down.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cavicchio-2009-modulation","url":"https:\/\/aclanthology.org\/P09-3010.pdf","title":"The Modulation of Cooperation and Emotion in Dialogue: The REC Corpus","abstract":"In this paper we describe the Rovereto Emotive Corpus (REC), which we collected to investigate the relationship between emotion and cooperation in dialogue tasks. It is an area where many questions remain unsolved. One of the main open issues is the annotation of the so-called \"blended\" emotions and their recognition. Usually, there is low agreement among raters in annotating emotions and, surprisingly, emotion recognition is higher in a condition of modality deprivation (i.e., only acoustic or only visual modality vs. bimodal display of emotion). Because of these previous results, we collected a corpus in which \"emotive\" tokens are pointed out during the recordings by psychophysiological indexes (ElectroCardioGram and Galvanic Skin Conductance).
The output values of these indexes allow a general recognition of each emotion's arousal. After this selection, we will annotate emotive interactions with our multimodal annotation scheme, computing a kappa statistic on the annotation results to validate our coding scheme. In the near future, a logistic regression on annotated data will be performed to find out correlations between cooperation and negative emotions. A final step will be an fMRI experiment on emotion recognition of blended emotions from face displays.","label_nlp4sg":1,"task":["Modulation of Cooperation and Emotion"],"method":["Corpus"],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"hutchens-alder-1998-introducing","url":"https:\/\/aclanthology.org\/W98-1233.pdf","title":"Introducing MegaHAL","abstract":"Conversation simulators are computer programs which give the appearance of conversing with a user in natural language. Alan Turing devised a simple test in order to decide whether such programs are intelligent. In 1991, the Cambridge Centre for Behavioural Studies held the first formal instantiation of the Turing Test. In this incarnation the test was known as the Loebner contest, as Dr. Hugh Loebner pledged a $100,000 grand prize for the first computer program to pass the test. In this paper we give a brief background to the contest, before describing in detail the workings of MegaHAL, the primary author's entry to the 1998 Loebner contest.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"serrano-smith-2019-attention","url":"https:\/\/aclanthology.org\/P19-1282.pdf","title":"Is Attention Interpretable?","abstract":"Attention mechanisms have recently boosted performance on a range of NLP tasks. Because attention layers explicitly weight input components' representations, it is also often assumed that attention can be used to identify information that models found important (e.g., specific contextualized word tokens). We test whether that assumption holds by manipulating attention weights in already-trained text classification models and analyzing the resulting differences in their predictions. While we observe some ways in which higher attention weights correlate with greater impact on model predictions, we also find many ways in which this does not hold, i.e., where gradient-based rankings of attention weights better predict their effects than their magnitudes. We conclude that while attention noisily predicts input components' overall importance to a model, it is by no means a fail-safe indicator.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by a grant from the Allstate Corporation; findings do not necessarily represent the views of the sponsor. We thank R. Andrew Kreek, Paul Koester, Kourtney Traina, and Rebecca Jones for early conversations leading to this work.
We also thank Omer Levy, Jesse Dodge, Sarthak Jain, Byron Wallace, and Dan Weld for helpful conversations, and our anonymous reviewers for their feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hesse-etal-2020-annotating","url":"https:\/\/aclanthology.org\/2020.dt4tp-1.3.pdf","title":"Annotating QUDs for generating pragmatically rich texts","abstract":"We describe our work on QUD-oriented annotation of driving reports for the generation of corresponding texts-texts that are a mix of technical details of the new vehicle that has been put on the market together with the impressions of the test driver on driving characteristics. Generating these texts poses a challenge since they express non-at-issue and expressive content that cannot be retrieved from a database. Instead, these subjective meanings must be justified by comparisons with attributes of other vehicles. We describe our current annotation task for the extraction of the relevant information for generating these driving reports.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"krstev-etal-2008-usage","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/67_paper.pdf","title":"The Usage of Various Lexical Resources and Tools to Improve the Performance of Web Search Engines","abstract":"In this paper we present how resources and tools developed within the Human Language Technology Group at the University of Belgrade can be used for tuning queries before submitting them to a web search engine. We argue that the selection of words chosen for a query, which are of paramount importance for the quality of results obtained by the query, can be substantially improved by using various lexical resources, such as morphological dictionaries and wordnets. These dictionaries enable semantic and morphological expansion of the query, the latter being very important in highly inflective languages, such as Serbian. Wordnets can also be used for adding another language to a query, if appropriate, thus making the query bilingual. Problems encountered in retrieving documents of interest are discussed and illustrated by examples. A brief description of resources is given, followed by an outline of the web tool which enables their integration. Finally, a set of examples is chosen in order to illustrate the use of the lexical resources and tool in question. Results obtained for these examples show that the number of documents obtained through a query by using our approach can double and even quadruple in some cases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2020-attentive","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.12.pdf","title":"An Attentive Recurrent Model for Incremental Prediction of Sentence-final Verbs","abstract":"Verb prediction is important for understanding human processing of verb-final languages, with practical applications to real-time simultaneous interpretation from verb-final to verb-medial languages.
While previous approaches use classical statistical models, we introduce an attention-based neural model to incrementally predict final verbs in incomplete Japanese and German SOV sentences. To offer flexibility to the model, we further incorporate synonym awareness. Our approach both better predicts the final verbs in Japanese and German and provides more interpretable explanations of why those verbs are selected. [Footnote 1: German is rich in both SOV and SVO sentences. It has been argued that its underlying structure is SOV (Bach, 1962; Koster, 1975), but this is not immediately relevant to our task.] German: Cazeneuve dankte dort den M\u00e4nnern und sagte, ohne deren k\u00fchlen Kopf h\u00e4tte es vielleicht ein \"furchtbares Drama\" gegeben. English: Cazeneuve thanked the men there and said that without their cool heads there might have been a \"terrible drama\". Japanese: \u307e\u305f\u5927\u548c\u56fd\u5948\u826f\u770c\u306e\u845b\u57ce\u5c71\u306b\u7bed\u308a\u5bc6\u6559\u306e\u5bbf\u66dc\u79d8\u6cd5\u3092\u7fd2\u5f97\u3057\u305f\u3068\u3082\u8a00\u308f. English: It also said that he was acquainted with a secret lodging accommodation in Katsuragiyama in Nara Prefecture of Yamato.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the National Science Foundation under Grant No. 1748663 (UMD). The views expressed in this paper are our own. We thank Graham Neubig and Hal Daum\u00e9 III for useful feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"postma-etal-2016-moving","url":"https:\/\/aclanthology.org\/W16-6004.pdf","title":"Moving away from semantic overfitting in disambiguation datasets","abstract":"Entities and events in the world have no frequency, but our communication about them and the expressions we use to refer to them do have a strong frequency profile. Language expressions and their meanings follow a Zipfian distribution, featuring a small number of very frequent observations and a very long tail of low-frequency observations. Since our NLP datasets sample texts but do not sample the world, they are no exception to Zipf's law. This causes a lack of representativeness in our NLP tasks, leading to models that can capture the head phenomena in language, but fail when dealing with the long tail. We therefore propose a referential challenge for semantic NLP that reflects a higher degree of ambiguity and variance and captures a large range of small real-world phenomena. To perform well, systems would have to show deep understanding on the linguistic tail.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nasution-etal-2016-constraint","url":"https:\/\/aclanthology.org\/L16-1524.pdf","title":"Constraint-Based Bilingual Lexicon Induction for Closely Related Languages","abstract":"The lack or absence of parallel and comparable corpora makes bilingual lexicon extraction a difficult task for low-resource languages. Pivot language and cognate recognition approaches have been proven useful to induce bilingual lexicons for such languages.
We analyze the features of closely related languages and define a semantic constraint assumption. Based on the assumption, we propose a constraint-based bilingual lexicon induction for closely related languages by extending constraints and translation pair candidates from recent pivot language approach. We further define three constraint sets based on language characteristics. In this paper, two controlled experiments are conducted. The former involves four closely related language pairs with different language pair similarities, and the latter focuses on sense connectivity between non-pivot words and pivot words. We evaluate our result with F-measure. The result indicates that our method works better on voluminous input dictionaries and high similarity languages. Finally, we introduce a strategy to use proper constraint sets for different goals and language characteristics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by a Grant-in-Aid for Scientific Research (S) (24220002, 2012-2016) from Japan Society for the Promotion of Science (JSPS).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kuo-chang-1989-systemic","url":"https:\/\/aclanthology.org\/O89-1004.pdf","title":"Systemic Generation of Chinese Sentences","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dahlmeier-etal-2009-joint","url":"https:\/\/aclanthology.org\/D09-1047.pdf","title":"Joint Learning of Preposition Senses and Semantic Roles of Prepositional Phrases","abstract":"The sense of a preposition is related to the semantics of its dominating prepositional phrase. Knowing the sense of a preposition could help to correctly classify the semantic role of the dominating prepositional phrase and vice versa. In this paper, we propose a joint probabilistic model for word sense disambiguation of prepositions and semantic role labeling of prepositional phrases. Our experiments on the PropBank corpus show that jointly learning the word sense and the semantic role leads to an improvement over state-of-theart individual classifier models on the two tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by a research grant R-252-000-225-112 from National University of Singapore Academic Research Fund.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chronopoulou-etal-2019-embarrassingly","url":"https:\/\/aclanthology.org\/N19-1213.pdf","title":"An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models","abstract":"A growing number of state-of-the-art transfer learning methods employ language models pretrained on large generic corpora. In this paper we present a conceptually simple and effective transfer learning approach that addresses the problem of catastrophic forgetting. Specifically, we combine the task-specific optimization function with an auxiliary language model objective, which is adjusted during the training process. 
This preserves language regularities captured by language models, while enabling sufficient adaptation for solving the target task. Our method does not require pretraining or finetuning separate components of the network, and we train our models end-to-end in a single step. We present results on a variety of challenging affective and text classification tasks, surpassing well-established transfer learning methods of greater complexity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Katerina Margatina and Georgios Paraskevopoulos for their helpful suggestions and comments. This work has been partially supported by computational time granted from the Greek Research & Technology Network (GR-NET) in the National HPC facility - ARIS. Also, the authors would like to thank NVIDIA for supporting this work by donating a TitanX GPU.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nicolov-etal-1996-approximate","url":"https:\/\/aclanthology.org\/W96-0404.pdf","title":"Approximate Generation from Non-Hierarchical Representations","abstract":"This paper presents a technique for sentence generation. We argue that the input to generators should have a non-hierarchical nature. This allows us to investigate a more general version of the sentence generation problem where one is not pre-committed to a choice of the syntactically prominent elements in the initial semantics. We also consider that a generator can happen to convey more (or less) information than is originally specified in its semantic input. In order to constrain this approximate matching of the input we impose additional restrictions on the semantics of the generated sentence. Our technique provides flexibility to address cases where the entire input cannot be precisely expressed in a single sentence. Thus the generator does not rely on the strategic component having linguistic knowledge. We show clearly how the semantic structure is declaratively related to linguistically motivated syntactic representation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xia-etal-2019-syntax","url":"https:\/\/aclanthology.org\/D19-1541.pdf","title":"A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role Labeling","abstract":"Semantic role labeling (SRL) aims to identify the predicate-argument structure of a sentence. Inspired by the strong correlation between syntax and semantics, previous works pay much attention to improving SRL performance by exploiting syntactic knowledge, achieving significant results. Pipeline methods based on automatic syntactic trees and multi-task learning (MTL) approaches using standard syntactic trees are two common research orientations. In this paper, we adopt a simple unified span-based model for both span-based and word-based Chinese SRL as a strong baseline. Besides, we present an MTL framework that includes the basic SRL module and a dependency parser module. Different from the commonly used hard parameter sharing strategy in MTL, the main idea is to extract implicit syntactic representations from the dependency parser as external inputs for the basic SRL model.
Experiments on the benchmarks of Chinese Proposition Bank 1.0 and the CoNLL-2009 Chinese dataset show that our proposed framework can effectively improve the performance over the strong baselines. With the external BERT representations, our framework achieves new state-of-the-art 87.54 and 88.5 F1 scores on the test sets of the two benchmarks, respectively. In-depth analyses are conducted to gain more insights into the proposed framework and the effectiveness of syntax.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our anonymous reviewers for their helpful comments. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61876116, 61432013) and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"karmakar-ghosh-2016-syntax","url":"https:\/\/aclanthology.org\/W16-6309.pdf","title":"Syntax and Pragmatics of Conversation: A Case of Bangla","abstract":"Conversation is often considered the most problematic area in the field of formal linguistics, primarily because of its dynamic emerging nature. The degree of complexity is also high in comparison to traditional sentential analysis. The challenge for developing a formal account of conversational analysis is bipartite: Since the smallest structural unit at the level of conversational analysis is the utterance, the existing theoretical framework has to be developed in such a manner that it can take account of the utterance. In addition to this, a system should be developed to explain the interconnections of the utterances in a conversation. This paper tries to address these two tasks within the transformational and generative framework of Minimalism, proposed by Chomsky, with an emphasis on the Bengali particle to, traditionally classified as indeclinable.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"deep-etal-2020-punjabi","url":"https:\/\/aclanthology.org\/2020.icon-demos.3.pdf","title":"Punjabi to English Bidirectional NMT System","abstract":"Machine Translation has been an area of ongoing research for the last few decades. Today, Corpus-based Machine Translation systems are very popular. Statistical Machine Translation and Neural Machine Translation are based on parallel corpora. In this research, a Punjabi to English bidirectional Neural Machine Translation system is developed. To improve the accuracy of the Neural Machine Translation system, Word Embedding and Byte Pair Encoding are used.
The claimed BLEU scores are 38.30 for the Punjabi to English Neural Machine Translation system and 36.96 for the English to Punjabi Neural Machine Translation system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poncelas-etal-2020-multiple","url":"https:\/\/aclanthology.org\/2020.sltu-1.33.pdf","title":"Multiple Segmentations of Thai Sentences for Neural Machine Translation","abstract":"Thai is a low-resource language, so it is often the case that data is not available in sufficient quantities to train a Neural Machine Translation (NMT) model which performs to a high level of quality. In addition, the Thai script does not use white spaces to delimit the boundaries between words, which adds more complexity when building sequence to sequence models. In this work, we explore how to augment a set of English-Thai parallel data by replicating sentence-pairs with different word segmentation methods on Thai, as training data for NMT model training. Using different merge operations of Byte Pair Encoding, different segmentations of Thai sentences can be obtained. The experiments show that, by combining these datasets, performance is improved for NMT models trained with a dataset that has been split using a supervised splitting tool.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2008-learning","url":"https:\/\/aclanthology.org\/I08-1037.pdf","title":"Learning Patterns from the Web to Translate Named Entities for Cross Language Information Retrieval","abstract":"Named entity (NE) translation plays an important role in many applications. In this paper, we focus on translating NEs from Korean to Chinese to improve Korean-Chinese cross-language information retrieval (KCIR). The ideographic nature of Chinese makes NE translation difficult because one syllable may map to several Chinese characters. We propose a hybrid NE translation system. First, we integrate two online databases to extend the coverage of our bilingual dictionaries. We use Wikipedia as a translation tool based on the inter-language links between the Korean edition and the Chinese or English editions. We also use Naver.com's people search engine to find a query name's Chinese or English translation. The second component is able to learn Korean-Chinese (K-C), Korean-English (K-E), and English-Chinese (E-C) translation patterns from the web. These patterns can be used to extract K-C, K-E and E-C pairs from Google snippets. We found KCIR performance using this hybrid configuration over five times better than that of a dictionary-based configuration using only Naver people search. Mean average precision was as high as 0.3385 and recall reached 0.7578.
Our method can handle Chinese, Japanese, Korean, and non-CJK NE translation and improve the performance of KCIR substantially.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yavuz-etal-2016-improving","url":"https:\/\/aclanthology.org\/D16-1015.pdf","title":"Improving Semantic Parsing via Answer Type Inference","abstract":"In this work, we show the possibility of inferring the answer type before solving a factoid question and leveraging the type information to improve semantic parsing. By replacing the topic entity in a question with its type, we are able to generate an abstract form of the question, whose answer corresponds to the answer type of the original question. A bidirectional LSTM model is built to train over the abstract form of questions and infer their answer types. It is also observed that if we convert a question into a statement form, our LSTM model achieves better accuracy. Using the predicted type information to rerank the logical forms returned by AgendaIL, one of the leading semantic parsers, we are able to improve the F1-score from 49.7% to 52.6% on the WEBQUESTIONS data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their valuable comments, and Huan Sun for fruitful discussions. This research was sponsored in part by the Army Research Laboratory under cooperative agreements W911NF09-2-0053, NSF IIS 1528175, and NSF CCF 1548848. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"baroni-lenci-2009-one","url":"https:\/\/aclanthology.org\/W09-0201.pdf","title":"One Distributional Memory, Many Semantic Spaces","abstract":"We propose an approach to corpus-based semantics, inspired by cognitive science, in which different semantic tasks are tackled using the same underlying repository of distributional information, collected once and for all from the source corpus. Task-specific semantic spaces are then built on demand from the repository.
A straightforward implementation of our proposal achieves state-of-the-art performance on a number of unrelated tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Ken McRae and Peter Turney for providing data-sets, Ama\u00e7 Herda\u01e7delen for access to his results, Katrin Erk for making us look at DM as a graph, and the reviewers for helpful comments.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lepage-2004-lower","url":"https:\/\/aclanthology.org\/C04-1106.pdf","title":"Lower and higher estimates of the number of ``true analogies'' between sentences contained in a large multilingual corpus","abstract":"The reality of analogies between words is refuted by no one (e.g., I walked is to to walk as I laughed is to to laugh, noted I walked : to walk :: I laughed : to laugh). But computational linguists seem to be quite dubious about analogies between sentences: they would not be numerous enough to be of any use. We report experiments conducted on a multilingual corpus to estimate the number of analogies among the sentences that it contains. We give two estimates, a lower one and a higher one. As an analogy must be valid on the level of form as well as on the level of meaning, we relied on the idea that translation should preserve meaning to test for similar meanings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported here was supported in part by a contract with the National Institute of Information and Communications Technology entitled \"A study of speech dialogue translation technology based on a large corpus\".","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"park-etal-2007-semi","url":"https:\/\/aclanthology.org\/Y07-1040.pdf","title":"Semi-Automatic Annotation Tool to Build Large Dependency Tree-Tagged Corpus","abstract":"Corpora annotated with rich linguistic information are required to develop robust statistical natural language processing systems. Building such corpora, however, is expensive, labor-intensive, and time-consuming work. To support this work, we design and implement an annotation tool for establishing a Korean dependency tree-tagged corpus. Compared with other annotation tools, our tool is characterized by the following features: independence of applications, localization of errors, powerful error checking, instant sharing of annotated information, and user-friendliness. Using our tool, we have annotated 100,904 Korean sentences with dependency structures. The number of annotators is 33, the average annotation time is about 4 minutes per sentence, and the total period of the annotation is 5 months.
We are confident that we can have accurate and consistent annotations as well as reduced labor and time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"choe-etal-2020-word2word","url":"https:\/\/aclanthology.org\/2020.lrec-1.371.pdf","title":"word2word: A Collection of Bilingual Lexicons for 3,564 Language Pairs","abstract":"We present word2word, a publicly available dataset and an open-source Python package for cross-lingual word translations extracted from sentence-level parallel corpora. Our dataset provides top-k word translations in 3,564 (directed) language pairs across 62 languages in OpenSubtitles2018 (Lison et al., 2018). To obtain this dataset, we use a count-based bilingual lexicon extraction model based on the observation that not only source and target words but also source words themselves can be highly correlated. We illustrate that the resulting bilingual lexicons have high coverage and attain competitive translation quality for several language pairs. We wrap our dataset and model in an easy-to-use Python library, which supports downloading and retrieving top-k word translations in any of the supported language pairs as well as computing top-k word translations for custom parallel corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"streiff-1983-new","url":"https:\/\/aclanthology.org\/1983.tc-1.21.pdf","title":"New developments in TITUS 4","abstract":"The TITUS 4 system was originally designed to produce abstracts in the form of sentences or phrases written in controlled syntax. It is now being improved, partly to give the user more flexibility in writing sentences, and partly so that the system can be implemented in fields other than abstracting services. Improvements being introduced to enhance TITUS 4's versatility include multiple-clause sentences. Certain restrictions, however, remain owing to linguistic problems associated with translation from one language to another.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1983,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2021-csds","url":"https:\/\/aclanthology.org\/2021.emnlp-main.365.pdf","title":"CSDS: A Fine-Grained Chinese Dataset for Customer Service Dialogue Summarization","abstract":"Dialogue summarization has drawn much attention recently. Especially in the customer service domain, agents could use dialogue summaries to help boost their work by quickly knowing customers' issues and service progress. These applications require summaries to contain the perspective of a single speaker and have a clear topic flow structure, while neither is available in existing datasets. Therefore, in this paper, we introduce a novel Chinese dataset for Customer Service Dialogue Summarization (CSDS).
CSDS improves the abstractive summaries in two aspects: (1) In addition to the overall summary for the whole dialogue, role-oriented summaries are also provided to acquire different speakers' viewpoints. (2) All the summaries sum up each topic separately, thus containing the topic-level structure of the dialogue. We define tasks in CSDS as generating the overall summary and different role-oriented summaries for a given dialogue. Next, we compare various summarization methods on CSDS, and experimental results show that existing methods are prone to generating redundant and incoherent summaries. Moreover, performance becomes much worse when evaluated on role-oriented summaries and topic structures. We hope that this study could benchmark Chinese dialogue summarization and benefit further studies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"First, we thank anonymous reviewers for helpful suggestions. Second, we thank all the annotators and volunteers for constructing the dataset and making the human evaluation. This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0108600.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cao-etal-2015-improving-event","url":"https:\/\/aclanthology.org\/R15-1011.pdf","title":"Improving Event Detection with Dependency Regularization","abstract":"Event Detection (ED) is an Information Extraction task which involves identifying instances of specified types of events in text. Most recent research on Event Detection relies on pattern-based or feature-based approaches, trained on annotated corpora, to recognize combinations of event triggers, arguments, and other contextual information. These combinations may each appear in a variety of linguistic forms. Not all of these event expressions will have appeared in the training data, thus adversely affecting ED performance. In this paper, we demonstrate the effectiveness of Dependency Regularization techniques to generalize the patterns extracted from the training data to boost ED performance. The experimental results on the ACE 2005 corpus show that our pattern-based system with the expanded patterns can achieve 70.49% (with 2.57% absolute improvement) F-measure over the baseline, which advances the state-of-the-art for such systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bender-etal-2015-layers","url":"https:\/\/aclanthology.org\/W15-0128.pdf","title":"Layers of Interpretation: On Grammar and Compositionality","abstract":"With the recent resurgence of interest in semantic annotation of corpora for improved semantic parsing, we observe a tendency, which we view as ill-advised, to conflate sentence meaning and speaker meaning into a single mapping, whether done by annotators or by a parser. We argue instead for the more traditional hypothesis that sentence meaning, but not speaker meaning, is compositional, and accordingly that NLP systems would benefit from reusable, automatically derivable, task-independent semantic representations which target sentence meaning, in order to capture exactly the information in the linguistic signal itself.
We further argue that compositional construction of such sentence meaning representations affords better consistency, more comprehensiveness, greater scalability, and less duplication of effort for each new NLP application. For concreteness, we describe one well-tested grammar-based method for producing sentence meaning representations which is efficient for annotators, and which exhibits many of the above benefits. We then report on a small inter-annotator agreement study to quantify the consistency of semantic representations produced via this grammar-based method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vo-zhang-2016-dont","url":"https:\/\/aclanthology.org\/P16-2036.pdf","title":"Don't Count, Predict! An Automatic Approach to Learning Sentiment Lexicons for Short Text","abstract":"We describe an efficient neural network method to automatically learn sentiment lexicons without relying on any manual resources. The method takes inspiration from the NRC method, which gives the best results in SemEval13 by leveraging emoticons in large tweets, using the PMI between words and tweet sentiments to define the sentiment attributes of words. We show that better lexicons can be learned by using them to predict the tweet sentiment labels. By using a very simple neural network, our method is fast and can take advantage of the same data volume as the NRC method. Experiments show that our lexicons give significantly better accuracies on multiple languages compared to the current best methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kwon-etal-2004-framenet","url":"https:\/\/aclanthology.org\/C04-1179.pdf","title":"FrameNet-based Semantic Parsing using Maximum Entropy Models","abstract":"As part of its description of lexico-semantic predicate frames or conceptual structures, the FrameNet project defines a set of semantic roles specific to the core predicate of a sentence. Recently, researchers have tried to automatically produce semantic interpretations of sentences using this information. Building on prior work, we describe a new method to perform such interpretations. We define sentence segmentation first and show how Maximum Entropy re-ranking helps achieve a level of 76.2% F-score (answer among top-five candidates) or 61.5% (correct answer).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chang-etal-2007-guiding","url":"https:\/\/aclanthology.org\/P07-1036.pdf","title":"Guiding Semi-Supervision with Constraint-Driven Learning","abstract":"Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks.
In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task-specific constraints. The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high-performance learning with significantly less training data than was possible before on these tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moldovan-etal-1992-usc","url":"https:\/\/aclanthology.org\/M92-1023.pdf","title":"USC: MUC-4 Test Results and Analysis","abstract":"The University of Southern California is participating, for the first time, in the message understanding conferences. A team consisting of one faculty member and five doctoral students started the work for MUC-4 in January 1992. This work is an extension of a project to build a massively parallel computer for natural language processing called the Semantic Network Array Processor (SNAP). RESULTS Scoring Results During the final week of testing, our system was run on test sets TST3 and TST4. Test set TST3 contains 100 articles from the same time period as the training corpus (DEV) and test sets TST1 and TST2. The summary of score results for TST3 is shown in Table 1. Test set TST4 contains 100 articles from a different time period than those of TST3. The summary of score results for TST4 is shown in Table 2. The complete score results for TST3 and TST4 can be found in Appendix G.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shimorina-gardent-2018-handling","url":"https:\/\/aclanthology.org\/W18-6543.pdf","title":"Handling Rare Items in Data-to-Text Generation","abstract":"Neural approaches to data-to-text generation generally handle rare input items using either delexicalisation or a copy mechanism. We investigate the relative impact of these two methods on two datasets (E2E and WebNLG) and using two evaluation settings.
We show (i) that rare items strongly impact performance; (ii) that combining delexicalisation and copying yields the strongest improvement; (iii) that copying underperforms for rare and unseen items, and (iv) that the impact of these two mechanisms greatly varies depending on how the dataset is constructed and on how it is split into train, dev and test.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research presented in this paper was partially supported by the French National Research Agency (ANR) within the framework of the ANR-14-CE24-0033 WebNLG Project.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"khanna-etal-2022-idiap","url":"https:\/\/aclanthology.org\/2022.ltedi-1.49.pdf","title":"IDIAP\\_TIET@LT-EDI-ACL2022 : Hope Speech Detection in Social Media using Contextualized BERT with Attention Mechanism","abstract":"With the increase of users on social media platforms, manipulating or provoking masses of people has become a piece of cake. This spread of hatred among people, which has become a loophole for freedom of speech, must be minimized. Hence, it is essential to have a system that automatically classifies the hatred content, especially on social media, to take it down. This paper presents a simple modular pipeline classifier with BERT embeddings and an attention mechanism to classify hope speech content in the Hope Speech Detection shared task for Equality, Diversity, and Inclusion-ACL 2022. Our system submission ranks fourth with an F1-score of 0.84. We release our code-base at https:\/\/github.com\/Deepanshu-beep\/hope-speech-attention.","label_nlp4sg":1,"task":["Hope Speech Detection"],"method":["BERT"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Good Health and Well-Being","goal3":null,"acknowledgments":"This work was supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 833635 (project ROXANNE: Real-time network, text, and speaker analytics for combating organized crime, 2019-2022).","year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"graff-etal-2018-ingeotec","url":"https:\/\/aclanthology.org\/S18-1020.pdf","title":"INGEOTEC at SemEval-2018 Task 1: EvoMSA and \u03bcTC for Sentiment Analysis","abstract":"This paper describes our participation in the Affective Tweets task for the emotional intensity and sentiment intensity subtasks for the English, Spanish, and Arabic languages. We used two approaches, \u00b5TC and EvoMSA. The first one is a generic text categorization and regression system; the second one is a two-stage architecture for Sentiment Analysis. Both approaches are multilingual and domain-independent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"solorio-etal-2013-case","url":"https:\/\/aclanthology.org\/W13-1107.pdf","title":"A Case Study of Sockpuppet Detection in Wikipedia","abstract":"This paper presents preliminary results of using authorship attribution methods for the detection of sockpuppeteering in Wikipedia.
Sockpuppets are fake accounts created by malicious users to bypass Wikipedia's regulations. Our dataset is composed of the comments made by the editors on the talk pages. To overcome the limitations of the short lengths of these comments, we use a voting scheme to combine predictions made on individual user entries. We show that this approach is promising and that it can be a viable alternative to the current human process that Wikipedia uses to resolve suspected sockpuppet cases.","label_nlp4sg":1,"task":["Sockpuppet Detection"],"method":["authorship attribution","voting scheme"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by ONR grant N00014-12-1-0217. The authors would like to thank the anonymous reviewers for their comments on a previous version of this paper.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"platonov-etal-2021-modeling","url":"https:\/\/aclanthology.org\/2021.splurobonlp-1.4.pdf","title":"Modeling Semantics and Pragmatics of Spatial Prepositions via Hierarchical Common-Sense Primitives","abstract":"Understanding spatial expressions and using them appropriately is necessary for seamless and natural human-machine interaction. However, capturing the semantics and appropriate usage of spatial prepositions is notoriously difficult, because of their vagueness and polysemy. Although modern data-driven approaches are good at capturing statistical regularities in the usage, they usually require substantial sample sizes, often do not generalize well to unseen instances and, most importantly, their structure is essentially opaque to analysis, which makes diagnosing problems and understanding their reasoning process difficult. In this work, we discuss our attempt at modeling spatial senses of prepositions in English using a combination of rule-based and statistical learning approaches. Each preposition model is implemented as a tree where each node computes certain intuitive relations associated with the preposition, with the root computing the final value of the prepositional relation itself. The models operate on a set of artificial 3D \"room world\" environments, designed in Blender, taking the scene itself as an input. We also discuss our annotation framework used to collect human judgments employed in the model training. Both our factored models and black-box baseline models perform quite well, but the factored models will enable reasoned explanations of spatial relation judgements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bjorkelund-etal-2013-ranking","url":"https:\/\/aclanthology.org\/W13-4916.pdf","title":"(Re)ranking Meets Morphosyntax: State-of-the-art Results from the SPMRL 2013 Shared Task","abstract":"This paper describes the IMS-SZEGED-CIS contribution to the SPMRL 2013 Shared Task. We participate in both the constituency and dependency tracks, and achieve state-of-the-art results for all languages. For both tracks we make significant improvements through high-quality preprocessing and (re)ranking on top of strong baselines.
Our system came out first for both tracks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Rich\u00e1rd Farkas is funded by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: T\u00c1MOP-4.2.2.C-11\/1\/KONV-","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sornlertlamvanich-2005-non","url":"https:\/\/aclanthology.org\/U05-1003.pdf","title":"From Non-segmenting Language Processing to Web Language Engineering","abstract":"It is interesting to look at the statistics of the online languages reported by Global Reach (www.globalreacg.biz). In September 2004, it was reported that the top six online language populations were English 35.2%, Chinese 13.7%, Spanish 9.0%, Japanese 8.4%, German 6.9%, and French 4.2%, while the web contents were English 68.4%, Japanese 5.9%, German 5.8%, Chinese 3.9%, French 3.0%, and Spanish 2.4%. There are some changes in ranking between the online language populations and the existing web contents. However, English is still the majority language used in the online community. Many efforts have been made to prevent the fall-off in the use of other languages, especially the less computerized languages. It is said that there are about 7,000 languages in use all over the world. To deal with as many languages as we can find online, it is much more efficient to consider language-independent approaches. The big difference between segmenting languages (i.e. English and other European languages) and non-segmenting languages (i.e. Thai, Lao, Khmer, Japanese, Chinese and many other Asian languages) lies in the existence of word boundary markers, and this difference changes how language processing must be done. Most of the current approaches are based on the assumption that words are already identified, disregarding the existence of word boundary markers. Research on word boundaries is conducted separately under the topic of word segmentation. By contrast, we proposed some algorithms to handle non-segmenting languages (Virach 2005a, Virach 2005b) to establish a language-independent approach.\nIn our recent research, we proposed a language interpretation model to deal with an input text as a byte sequence rather than a sequence of words. It is an approach to unify the language processing model to cope with the ambiguities in the word determination problem. The approach takes an input text in the early stage of language processing when the exhaustive recognition of total word identity is not necessary. In our research, we present the achievements in language identification, indexing for full-text retrieval, and word candidate extraction based on the unified input byte sequence.
Our experiments show comparable results with the existing word-based approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"imankulova-etal-2017-improving","url":"https:\/\/aclanthology.org\/W17-5704.pdf","title":"Improving Low-Resource Neural Machine Translation with Filtered Pseudo-Parallel Corpus","abstract":"Large-scale parallel corpora are indispensable to train highly accurate machine translators. However, manually constructed large-scale parallel corpora are not freely available in many language pairs. In previous studies, training data have been expanded using a pseudo-parallel corpus obtained using machine translation of the monolingual corpus in the target language. However, in low-resource language pairs in which only low-accuracy machine translation systems can be used, translation quality is reduced when a pseudo-parallel corpus is used naively. To improve machine translation performance with low-resource language pairs, we propose a method to expand the training data effectively via filtering the pseudo-parallel corpus using a quality estimation based on back-translation. As a result of experiments with three language pairs using small, medium, and large parallel corpora, language pairs with fewer training data filtered out more sentence pairs and improved BLEU scores more significantly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oboronko-2000-wired","url":"https:\/\/aclanthology.org\/2000.amta-workshop.5.pdf","title":"Wired for peace and multi-language communication","abstract":"Our project Wired for Peace: Virtual Diplomacy in Northeast Asia (Http:\/\/wwwneacd.ucsd.edu\/) has as its main aim to provide policymakers and researchers of the U.S., China, Russia, Japan, and Korea with Internet-based tools to allow for continuous communication on issues of regional security and cooperation. Since the very beginning of the project, we have understood that Web-based translation between English and Asian languages would be one of the most necessary tools for the successful development of the project. With this understanding, we have partnered with Systran (www.systransoft.com), one of the leaders in the MT field, in order to develop Internet-based tools for both synchronous and asynchronous translation of texts and discussions. This submission is a report on work in progress.","label_nlp4sg":1,"task":["continuous communication"],"method":["Web-based translation"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Partnership for the goals","goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":1} {"ID":"mishra-sachdeva-2020-need","url":"https:\/\/aclanthology.org\/2020.sustainlp-1.23.pdf","title":"Do We Need to Create Big Datasets to Learn a Task?","abstract":"Deep Learning research has been largely accelerated by the development of huge datasets such as Imagenet. The general trend has been to create big datasets to make a deep neural network learn.
A huge amount of resources is being spent in creating these big datasets, developing models, training them, and iterating this process to dominate leaderboards. We argue that the trend of creating bigger datasets needs to be revised by better leveraging the power of pre-trained language models. Since the language models have already been pretrained with a huge amount of data and have basic linguistic knowledge, there is no need to create big datasets to learn a task. Instead, we need to create a dataset that is sufficient for the model to learn various task-specific terminologies, such as 'Entailment', 'Neutral', and 'Contradiction' for NLI. As evidence, we show that RoBERTa is able to achieve near-equal performance on \u223c2% of the SNLI data. We also observe competitive zero-shot generalization on several OOD datasets. In this paper, we propose a baseline algorithm to find the optimal dataset for learning a task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"du-cardie-2017-identifying","url":"https:\/\/aclanthology.org\/D17-1219.pdf","title":"Identifying Where to Focus in Reading Comprehension for Neural Question Generation","abstract":"A first step in the task of automatically generating questions for testing reading comprehension is to identify question-worthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven, with no sophisticated NLP pipelines or any hand-crafted rules\/features, and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves state-of-the-art performance for paragraph-level question generation for reading comprehension.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers for helpful comments and Victoria Litvinova for proofreading.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"samanta-chaudhuri-2013-simple","url":"https:\/\/aclanthology.org\/O13-1022.pdf","title":"A simple real-word error detection and correction using local word bigram and trigram","abstract":"Spelling errors are broadly classified into two categories, namely non-word errors and real-word errors. In this paper a localized real-word error detection and correction method is proposed where the scores of bigrams generated by the immediate left and right neighbours of the candidate word and the trigram of these three words are combined. A single character position error model is assumed so that if a word W is erroneous then the correct word belongs to the set of real words S generated by a single character edit operation on W. The above combined score is also calculated for all members of S. These words are ranked in the decreasing order of the score. By observing the rank and using a rule-based approach, the error decision and correction candidates are simultaneously selected.
The approach gives accuracy comparable with other existing approaches but is computationally attractive. Since only the left and right neighbours are involved, multiple errors in a sentence can also be detected (provided the errors occur in alternate words).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mim-etal-2019-unsupervised","url":"https:\/\/aclanthology.org\/P19-2053.pdf","title":"Unsupervised Learning of Discourse-Aware Text Representation for Essay Scoring","abstract":"Existing document embedding approaches mainly focus on capturing sequences of words in documents. However, some document classification and regression tasks such as essay scoring need to consider the discourse structure of documents. Although some prior approaches consider this issue and utilize the discourse structure of text for document classification, these approaches are dependent on computationally expensive parsers. In this paper, we propose an unsupervised approach to capture discourse structure in terms of coherence and cohesion for document embedding that does not require any expensive parser or annotation. Extrinsic evaluation results show that the document representation obtained from our approach improves the performance of essay Organization scoring and Argument Strength scoring.","label_nlp4sg":1,"task":["Essay Scoring"],"method":["Unsupervised Learning"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This work was supported by JST CREST Grant Number JPMJCR1513 and JSPS KAKENHI Grant Number 19K20332. We would like to thank the anonymous ACL reviewers for their insightful comments. We also thank Ekaterina Kochmar for her profound and useful feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cadel-ledouble-2000-extraction","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/202.pdf","title":"Extraction of Concepts and Multilingual Information Schemes from French and English Economics Documents","abstract":"This paper focuses on the linguistic analysis of economic information in French and English documents. Our objective is to establish domain-specific information schemes based on structural and conceptual information. At the structural level, we define linguistic triggers that take into account each language's specificity. At the conceptual level, analysis of concepts and relations between concepts results in a classification, prior to the representation of schemes.
The final outcome of this study is a mapping between linguistic and conceptual structures in the field of economics.","label_nlp4sg":1,"task":["Extraction of Concepts"],"method":["information schemes","linguistic triggers","linguistic analysis"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luttighuis-sikkel-1993-generalized","url":"https:\/\/aclanthology.org\/1993.iwpt-1.18.pdf","title":"Generalized LR parsing and attribute evaluation","abstract":"This paper presents a thorough discussion of generalized LR parsing with simultaneous attribute evaluation. Nondeterministic parsers and combined parser\/evaluators are presented for the LL(0), LR(0), and SKLR(0) strategies. SKLR(0) parsing occurs as an intermediate strategy between the first two. Particularly in the context of simultaneous attribute evaluation, generalized SKLR(0) parsing is a sensible alternative to generalized LR(0) parsing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cruz-2019-authorship","url":"https:\/\/aclanthology.org\/W19-3649.pdf","title":"Authorship Recognition with Short-Text using Graph-based Techniques","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kimura-etal-2021-loa","url":"https:\/\/aclanthology.org\/2021.acl-demo.27.pdf","title":"LOA: Logical Optimal Actions for Text-based Interaction Games","abstract":"We present Logical Optimal Actions (LOA), an action decision architecture for reinforcement learning applications with a neuro-symbolic framework that combines a neural network with a symbolic knowledge acquisition approach for natural language interaction games. The demonstration for LOA experiments consists of a web-based interactive platform for text-based games and a visualization of acquired knowledge for improving the interpretability of trained rules. This demonstration also provides a comparison module with other neuro-symbolic approaches as well as non-symbolic state-of-the-art agent models on the same text-based games. Our LOA also provides an open-sourced implementation in Python for the reinforcement learning environment to facilitate experiments for studying neuro-symbolic agents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tan-etal-2015-usaar","url":"https:\/\/aclanthology.org\/S15-2015.pdf","title":"USAAR-SHEFFIELD: Semantic Textual Similarity with Deep Regression and Machine Translation Evaluation Metrics","abstract":"This paper describes the USAAR-SHEFFIELD systems that participated in the Semantic Textual Similarity (STS) English task of SemEval-2015. We extend the work on using machine translation evaluation metrics in the STS task.
Different from previous approaches, we consider the metrics' robustness across different text types and conflate the training data across different subcorpora. In addition, we introduce a novel deep regressor architecture and evaluate its efficiency in the STS task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7\/2007-2013\/ under REA grant agreement n \u2022 317471.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcintyre-1998-babel","url":"https:\/\/aclanthology.org\/C98-2132.pdf","title":"Babel: A testbed for research in origins of language","abstract":"We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The Babel environment was developed at the Sony Computer Science Laboratory in Paris. My colleagues Luc Steels and Frederic Kaplan of Sony CSL Paris, and Joris van Looveren and Bart de Boer from the Vrije Universiteit Brussel have provided essential feedback and suggestions throughout the development process.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dong-etal-2015-question","url":"https:\/\/aclanthology.org\/P15-1026.pdf","title":"Question Answering over Freebase with Multi-Column Convolutional Neural Networks","abstract":"Answering natural language questions over a knowledge base is an important and challenging task. Most existing systems typically rely on hand-crafted features and rules to conduct question understanding and\/or answer ranking. In this paper, we introduce multi-column convolutional neural networks (MCCNNs) to understand questions from three different aspects (namely, answer path, answer context, and answer type) and learn their distributed representations. Meanwhile, we jointly learn low-dimensional embeddings of entities and relations in the knowledge base. Question-answer pairs are used to train the model to rank candidate answers. We also leverage question paraphrases to train the column networks in a multi-task learning manner. We use FREEBASE as the knowledge base and conduct extensive experiments on the WEBQUESTIONS dataset. Experimental results show that our method achieves better or comparable performance compared with baseline systems. In addition, we develop a method to compute the salience scores of question words in different column networks. The results help us intuitively understand what MCCNNs learn.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by NSFC (Grant No.
61421003) ","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2015-shallow","url":"https:\/\/aclanthology.org\/K15-2005.pdf","title":"Shallow Discourse Parsing Using Constituent Parsing Tree","abstract":"This paper describes our system in the closed track of the shared task of CoNLL-2015. We formulate the discourse parsing work as a series of classification subtasks. The official evaluation shows that the proposed framework gives competitive results, and we also discuss potential improvements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-tsujii-1999-transfer","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.74.pdf","title":"Transfer in experience-guided machine translation","abstract":"Experience-Guided Machine Translation (EGMT) seeks to represent the translators' knowledge of translation as experiences and translates by analogy. The transfer in EGMT finds the experiences most similar to a new text and its parts, segments it into units of translation and translates them by analogy to the experiences and then assembles them into a whole. A research prototype of analogical transfer from Chinese to English is built to prove the viability of the approach in the exploration of a new architecture for machine translation. The paper discusses how the experiences are represented and selected with respect to a new text. It describes how units of translation are defined and how partial translations are derived and composed into a whole.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alemany-puig-etal-2021-linear","url":"https:\/\/aclanthology.org\/2021.quasy-1.1.pdf","title":"The Linear Arrangement Library. A new tool for research on syntactic dependency structures.","abstract":"The new and growing field of Quantitative Dependency Syntax has emerged at the crossroads between Dependency Syntax and Quantitative Linguistics. One of the main concerns in this field is the study of statistical patterns of syntactic dependency structures. These structures, grouped in treebanks, are the source for statistical analyses in these and related areas; dozens of scores devised over the years are the tools of a new industry to search for patterns and perform other sorts of analyses. The plethora of such metrics and their increasing complexity require sharing the source code of the programs used to perform such analyses. However, such code is not often shared with the scientific community or is tested following unknown standards. Here we present a new open-source tool, the Linear Arrangement Library (LAL), which caters to the needs of, especially, inexperienced programmers. This tool enables the calculation of these metrics on single syntactic dependency structures, treebanks, and collections of treebanks, grounded on ease of use and yet with great flexibility.
LAL has been designed to be efficient, easy to use (while satisfying the needs of all levels of programming expertise), reliable (thanks to thorough testing), and to unite research from different traditions, geographic areas, and research fields.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Aleksandra Petrova for helpful comments. LAP is supported by Secretaria d'Universitats i Recerca de la Generalitat de Catalunya and the Social European Fund. RFC and LAP are supported by the grant TIN2017-89244-R from MINECO (Ministerio de Econom\u00eda, Industria y Competitividad). RFC is also supported by the recognition 2017SGR-856 (MACDA) from AGAUR (Generalitat de Catalunya). JLE is funded by the grant PID2019-109137GB-C22 from MINECO.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mihalcea-nastase-2008-add","url":"https:\/\/aclanthology.org\/I08-2136.pdf","title":"How to Add a New Language on the NLP Map: Building Resources and Tools for Languages with Scarce Resources","abstract":"Those of us whose mother tongue is not English, or who are curious about applications involving other languages, often find ourselves in the situation where the tools we require are not available. According to recent studies, there are about 7200 different languages spoken worldwide (without including variations or dialects), out of which very few have automatic language processing tools and machine-readable resources.\nIn this tutorial we will show how we can take advantage of lessons learned from frequently studied and used languages in NLP, and of the wealth of information and collaborative efforts mediated by the World Wide Web. We structure the presentation around two major themes: mono-lingual and cross-lingual approaches. Within the mono-lingual area, we show how to quickly assemble a corpus for statistical processing, how to obtain a semantic network using on-line resources (in particular Wikipedia), and how to obtain automatically annotated corpora for a variety of applications. The cross-lingual half of the tutorial shows how to build upon NLP methods and resources for other languages, and adapt them for a new language. We will review automatic construction of parallel corpora, projecting annotations from one side of the parallel corpus to the other, building language models, and finally we will look at how all these can come together in higher-end applications such as machine translation and cross-language information retrieval. Vivi Nastase is a post-doctoral fellow at EML Research gGmbH, Heidelberg, Germany. Her research interests are in lexical semantics, semantic relations, knowledge extraction, multi-document summarization, graph-based algorithms for natural language processing, multilingual natural language processing.
She is a co-founder of the Journal of Interesting Negative Results in Natural Language Processing and Machine Learning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"raheja-tetreault-2019-dialogue","url":"https:\/\/aclanthology.org\/N19-1373.pdf","title":"Dialogue Act Classification with Context-Aware Self-Attention","abstract":"Recent work in Dialogue Act classification has treated the task as a sequence labeling problem using hierarchical deep neural networks. We build on this prior work by leveraging the effectiveness of a context-aware selfattention mechanism coupled with a hierarchical recurrent neural network. We conduct extensive evaluations on standard Dialogue Act classification datasets and show significant improvement over state-of-the-art results on the Switchboard Dialogue Act (SwDA) Corpus. We also investigate the impact of different utterance-level representation learning methods and show that our method is effective at capturing utterance-level semantic text representations while maintaining high accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Dimitris Alikaniotis, Maria Nadejde and Courtney Napoles for their insightful discussions, and the anonymous reviewers for their helpful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"inan-etal-2021-cosmic-coherence","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.291.pdf","title":"COSMic: A Coherence-Aware Generation Metric for Image Descriptions","abstract":"Developers of text generation models rely on automated evaluation metrics as a stand-in for slow and expensive manual evaluations. However, image captioning metrics have struggled to give accurate learned estimates of the semantic and pragmatic success of output text. We address this weakness by introducing the first discourse-aware learned generation metric for evaluating image descriptions. Our approach is inspired by computational theories of discourse for capturing information goals using coherence. We present a dataset of image-description pairs annotated with coherence relations. We then train a coherence-aware metric on a subset of the Conceptual Captions dataset and measure its effectiveness-its ability to predict human ratings of output captions-on a test set composed of out-of-domain images. We demonstrate a higher Kendall Correlation Coefficient for our proposed metric with the human judgments for the results of a number of stateof-the-art coherence-aware caption generation models when compared to several other metrics including recently proposed learned metrics such as BLEURT and BERTScore.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors affiliated with Rutgers University were partly supported by NSF Award CCF-19349243. Thanks to Pitt Cyber for supporting this project and the authors from the University of Pittsburgh. 
We also acknowledge the Center for Research Computing at the University of Pittsburgh for providing the required computational resources for carrying out experiments at the University of Pittsburgh.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhou-etal-2020-temporal","url":"https:\/\/aclanthology.org\/2020.acl-main.678.pdf","title":"Temporal Common Sense Acquisition with Minimal Supervision","abstract":"Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense, extracted from a large corpus, to build TACOLM, a temporal common sense language model. Our method is shown to give quality predictions of various dimensions of temporal common sense (on UDST and a newly collected dataset from RealNews). It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA (on TimeBank, HiEVE and MCTACO) that are better than using the standard BERT. Thus, it will be an important component of temporal NLP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is based upon work supported in part by the office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER Program and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. This research is also supported by a grant from the Allen Institute for Artificial Intelligence (allenai.org).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"herbelot-2013-text","url":"https:\/\/aclanthology.org\/W13-0204.pdf","title":"What is in a text, what isn't, and what this has to do with lexical semantics","abstract":"This paper queries which aspects of lexical semantics can reasonably be expected to be modelled by corpus-based theories such as distributional semantics or techniques such as ontology extraction. We argue that a full lexical semantics theory must take into account the extensional potential of words.
We investigate to what extent corpora provide the necessary data to model this information and suggest that it may be partly learnable from text-based distributions, partly inferred from annotated data, using the insight that a concept's features are extensionally interdependent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author is in receipt of a postdoctoral fellowship from the Alexander von Humboldt foundation.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koeva-etal-2020-natural","url":"https:\/\/aclanthology.org\/2020.lrec-1.863.pdf","title":"Natural Language Processing Pipeline to Annotate Bulgarian Legislative Documents","abstract":"The paper presents the Bulgarian MARCELL corpus, part of a recently developed multilingual corpus representing the national legislation in seven European countries, and the NLP pipeline that turns the web crawled data into a structured, linguistically annotated dataset. The Bulgarian data is web crawled, extracted from the original HTML format, filtered by document type, tokenised, sentence split, tagged and lemmatised with a fine-grained version of the Bulgarian Language Processing Chain, dependency parsed with NLP-Cube, annotated with named entities (persons, locations, organisations and others), noun phrases, IATE terms and EuroVoc descriptors. An orchestrator process has been developed to control the NLP pipeline, performing end-to-end data processing and annotation starting from document identification and ending in the generation of statistical reports. The Bulgarian MARCELL corpus consists of 25,283 documents (at the beginning of November 2019), which are classified into eleven types.","label_nlp4sg":1,"task":["Annotate Bulgarian Legislative Documents"],"method":["web crawling"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The work reported here was supported by the European Commission in the CEF Telecom Programme (Action No: 2017-EU-IA-0136). We wish to thank the following colleagues for their valuable work in the project: Tsvetana Dimitrova, Valentina Stefanova, Dimitar Georgiev, Valeri Kostov, Tinko Tinchev.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"niehues-waibel-2013-mt","url":"https:\/\/aclanthology.org\/W13-2264.pdf","title":"An MT Error-Driven Discriminative Word Lexicon using Sentence Structure Features","abstract":"The Discriminative Word Lexicon (DWL) is a maximum-entropy model that predicts the target word probability given the source sentence words. We present two ways to extend a DWL to improve its ability to model the word translation probability in a phrase-based machine translation (PBMT) system. While DWLs are able to model the global source information, they ignore the structure of the source and target sentence. We propose to include this structure by modeling the source sentence as a bag-of-n-grams and features depending on the surrounding target words. Furthermore, as the standard DWL does not get any feedback from the MT system, we change the DWL training process to explicitly focus on addressing MT errors.
By using these methods we are able to improve the translation performance by up to 0.8 BLEU points compared to a system that uses a standard DWL.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schumann-2013-collection","url":"https:\/\/aclanthology.org\/R13-2020.pdf","title":"Collection, Annotation and Analysis of Gold Standard Corpora for Knowledge-Rich Context Extraction in Russian and German","abstract":"This paper describes the collection, annotation and linguistic analysis of a gold standard for knowledge-rich context extraction on the basis of Russian and German web corpora as part of ongoing PhD thesis work. In the following sections, the concept of knowledge-rich contexts is refined and gold standard creation is described. Linguistic analyses of the gold standard data and their results are explained.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was partly funded by the CLARA project (EU\/FP7), grant agreement n\u00b0 238405. I am also grateful to my anonymous reviewers for their helpful remarks. Last but not least, I am indebted to my colleagues Jos\u00e9 Martinez Martinez and Ekaterina Lapshinova-Koltunski for many interesting discussions and suggestions.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhong-chiang-2020-look","url":"https:\/\/aclanthology.org\/2020.wmt-1.65.pdf","title":"Look It Up: Bilingual and Monolingual Dictionaries Improve Neural Machine Translation","abstract":"Despite advances in neural machine translation (NMT) quality, rare words continue to be problematic. For humans, the solution to the rare-word problem has long been dictionaries, but dictionaries cannot be straightforwardly incorporated into NMT. In this paper, we describe a new method for \"attaching\" dictionary definitions to rare words so that the network can learn the best way to use them. We demonstrate improvements of up to 3.1 BLEU using bilingual dictionaries and up to 0.7 BLEU using monolingual source-language dictionaries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Gov-","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loaiciga-etal-2016-disambiguation","url":"https:\/\/aclanthology.org\/W16-2351.pdf","title":"It-disambiguation and source-aware language models for cross-lingual pronoun prediction","abstract":"We present our systems for the WMT 2016 shared task on cross-lingual pronoun prediction. 
The main contribution is a classifier used to determine whether an instance of the ambiguous English pronoun \"it\" functions as an anaphoric, pleonastic or event reference pronoun. For the English-to-French task the classifier is incorporated in an extended baseline, which takes the form of a source-aware language model. An implementation of the source-aware language model is also provided for each of the remaining language pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"SL was supported by the Swiss National Science Foundation under grant no. P1GEP1 161877. CH and LG were supported by the Swedish Research Council under project 2012-916 Discourse-Oriented Statistical Machine Translation. Large-scale computations were performed on the Abel cluster, owned by the University of Oslo and the Norwegian metacenter for High Performance Computing (NOTUR), under project nn9106k.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wutiwiwatchai-etal-2008-speech","url":"https:\/\/aclanthology.org\/I08-8002.pdf","title":"Speech-to-Speech Translation Activities in Thailand","abstract":"A speech-to-speech translation project (S2S) has been conducted since 2006 by the Human Language Technology laboratory at the National Electronics and Computer Technology Center (NECTEC) in Thailand. During the past year, there have been many activities on the technologies that constitute S2S, including automatic speech recognition (ASR), machine translation (MT) and text-to-speech synthesis (TTS), as well as technology for language resource and fundamental tool development. A developed prototype of English-to-Thai S2S has opened several research issues, which have been taken into consideration. This article reports all major research and development activities in detail and points out remaining issues for the remaining two years of the project.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank ATR, Japan, for initiating the fruitful A-STAR consortium and for providing some resources and tools for our research and development.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tran-phan-2017-toward","url":"https:\/\/aclanthology.org\/O17-1016.pdf","title":"Toward Contextual Valence Shifters in Vietnamese Reviews","abstract":"Valence shifters are complex linguistic structures that can modify the sentiment orientations of texts. In this paper, the authors concentrate on the study of shifters in Vietnamese texts, and a discussion of the distribution of different types of shifters in hotel reviews is presented.
Finally, an approach for extracting contextual valence shifters is proposed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"senellart-white-2006-first","url":"https:\/\/aclanthology.org\/2006.amta-panels.5.pdf","title":"First strategies for integrating hybrid approaches into established systems","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shibata-etal-2016-neural","url":"https:\/\/aclanthology.org\/P16-1117.pdf","title":"Neural Network-Based Model for Japanese Predicate Argument Structure Analysis","abstract":"This paper presents a novel model for Japanese predicate argument structure (PAS) analysis based on a neural network framework. Japanese PAS analysis is challenging due to the tangled characteristics of the Japanese language, such as case disappearance and argument omission. To unravel this problem, we learn selectional preferences from a large raw corpus, and incorporate them into a SOTA PAS analysis model, which considers the consistency of all PASs in a given sentence. We demonstrate that the proposed PAS analysis model significantly outperforms the base SOTA system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by CREST, Japan Science and Technology Agency.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aw-etal-2006-phrase","url":"https:\/\/aclanthology.org\/P06-2005.pdf","title":"A Phrase-Based Statistical Model for SMS Text Normalization","abstract":"Short Messaging Service (SMS) texts behave quite differently from normal written texts and have some very special phenomena. To translate SMS texts, traditional approaches model such irregularities directly in Machine Translation (MT). However, such approaches suffer from","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"briscoe-copestake-1999-lexical","url":"https:\/\/aclanthology.org\/J99-4002.pdf","title":"Lexical rules in constraint based grammars","abstract":"Lexical rules have been used to cover a very diverse range of phenomena in constraint-based grammars. Examination of the full range of rules proposed shows that Carpenter's (1991) postulated upper bound on the length of list-valued attributes such as SUBCAT in the lexicon cannot be maintained, leading to unrestricted generative capacity in constraint-based formalisms utilizing HPSG-style lexical rules. We argue that it is preferable to subdivide such rules into a class of semiproductive lexically governed genuinely lexical rules, and a class of fully productive unary syntactic rules. We develop a restricted approach to lexical rules in a typed default feature structure (TDFS) framework (Lascarides et al.
1995; Lascarides and Copestake 1999), which has enough expressivity to state, for example, rules of verb diathesis alternation, but which does not allow arbitrary manipulation of list-valued features. An interpretation of such lexical rules within a probabilistic version of a TDFS-based linguistic (lexical and grammatical) theory allows us to capture the semiproductive nature of genuinely lexical rules, steering an intermediate course between fully generative and purely abbreviatory rules. We illustrate the utility of this approach with a treatment of dative constructions within a linguistic framework that borrows insights from the constraint-based theories: HPSG, UCG (Zeevat, Klein, and Calder 1987), and construction grammar (Goldberg 1995). We end by outlining how our approach to lexical rules allows for a treatment of passive and recursive affixation, which are generally assumed to require unrestricted list manipulation operations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Tony Kroch, Mark Liberman, Geoff Nunberg, Mark Steedman and, especially, Annie Zaenen for helpful input and advice. The content and structure of the paper is, we hope, much improved on the basis of three anonymous referees' insightful comments on an earlier draft. All the ideas and mistakes, nevertheless, remain our responsibility.","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"syed-finin-2010-unsupervised","url":"https:\/\/aclanthology.org\/W10-0910.pdf","title":"Unsupervised techniques for discovering ontology elements from Wikipedia article links","abstract":"We present an unsupervised and unrestricted approach to discovering an infobox-like ontology by exploiting the inter-article links within Wikipedia. It discovers new slots and fillers that may not be available in the Wikipedia infoboxes. Our results demonstrate that there are certain types of properties that are evident in the link structure of resources like Wikipedia that can be predicted with high accuracy using little or no linguistic analysis. The discovered properties can be further used to discover a class hierarchy. Our experiments have focused on analyzing people in Wikipedia, but the techniques can be directly applied to other types of entities in text resources that are rich with hyperlinks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper was supported in part by a Fulbright fellowship, a gift from Microsoft Research, NSF award IIS-0326460 and the Johns Hopkins University Human Language Technology Center of Excellence.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ji-etal-2017-nested","url":"https:\/\/aclanthology.org\/P17-1070.pdf","title":"A Nested Attention Neural Hybrid Model for Grammatical Error Correction","abstract":"Grammatical error correction (GEC) systems strive to correct both global errors in word order and usage, and local errors in spelling and inflection. Further developing upon recent work on neural machine translation, we propose a new hybrid neural model with nested attention layers for GEC.
Experiments show that the new model can effectively correct errors of both types by incorporating word and character-level information, and that the model significantly outperforms previous neural models for GEC as measured on the standard CoNLL-14 benchmark dataset. Further analysis also shows that the superiority of the proposed model can be largely attributed to the use of the nested attention mechanism, which has proven particularly effective in correcting local errors that involve small edits in orthography.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the ACL reviewers for their insightful suggestions, Victoria Zayats for her help with reproducing the baseline word-level NMT system and Yu Shi, Daxin Jiang and Michael Zeng for the helpful discussions.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gildea-jurafsky-1996-learning","url":"https:\/\/aclanthology.org\/J96-4003.pdf","title":"Learning Bias and Phonological-Rule Induction","abstract":"A fundamental debate in the machine learning of language has been the role of prior knowledge in the learning process. Purely nativist approaches, such as the Principles and Parameters model, build parameterized linguistic generalizations directly into the learning system. Purely empirical approaches use a general, domain-independent learning rule (Error Back-Propagation, Instance-based Generalization, Minimum Description Length) to learn linguistic generalizations directly from the data. In this paper we suggest that an alternative to the purely nativist or purely empiricist learning paradigms is to represent the prior knowledge of language as a set of abstract learning biases, which guide an empirical inductive learning algorithm. We test our idea by examining the machine learning of simple Sound Pattern of English (SPE)-style phonological rules. We represent phonological rules as finite-state transducers that accept underlying forms as input and generate surface forms as output. We show that OSTIA, a general-purpose transducer induction algorithm, was incapable of learning simple phonological rules like flapping. We then augmented OSTIA with three kinds of learning biases that are specific to natural language phonology, and that are assumed explicitly or implicitly by every theory of phonology: faithfulness (underlying segments tend to be realized similarly on the surface), community (similar segments behave similarly), and context (phonological rules need access to variables in their context). These biases are so fundamental to generative phonology that they are left implicit in many theories. But explicitly modifying the OSTIA algorithm with these biases allowed it to learn more compact, accurate, and general transducers, and our implementation successfully learns a number of rules from English and German. Furthermore, we show that some of the remaining errors in our augmented model are due to implicit biases in the traditional SPE-style rewrite system that are not similarly represented in the transducer formalism, suggesting that while transducers may be formally equivalent to SPE-style rules, they may not have identical evaluation procedures.
Because our biases were applied to the learning of very simple SPE-style rules, and to a non-psychologically-motivated and non-probabilistic theory of purely deterministic transducers, we do not expect that our model as implemented has any practical use as a phonological learning device, nor is it intended as a cognitive model of human learning. Indeed, because of the noise and nondeterminism inherent to linguistic data, we feel strongly that stochastic algorithms for language induction are much more likely to be a fruitful research direction. Our model is rather intended to suggest the kind of biases that may be added to other empiricist induction models, and the way in which they may be added, in order to build a cognitively and computationally plausible learning model for phonological rules.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to Jerry Feldman for advice and encouragement, to Isabel Galiano-Ronda for her help with the OSTIA algorithm, and to Eric Fosler, Sharon Inkelas, Lauri Karttunen, Jos\u00e9 Oncina, Orhan Orgun, Ronitt Rubinfeld, Stuart Russell, Andreas Stolcke, Gary Tajchman, four anonymous COLI reviewers, and an anonymous reviewer for ACL-95. This work was partially funded by ICSI.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vossen-etal-2018-referencenet","url":"https:\/\/aclanthology.org\/2018.gwc-1.25.pdf","title":"ReferenceNet: a semantic-pragmatic network for capturing reference relations.","abstract":"In this paper, we present ReferenceNet: a semantic-pragmatic network of reference relations between synsets. Synonyms are assumed to be exchangeable in similar contexts and also word embeddings are based on sharing of local contexts represented as vectors. Co-referring words, however, tend to occur in the same topical context but in different local contexts. In addition, they may express different concepts related through topical coherence, and through author framing and perspective. In this paper, we describe how reference relations can be added to WordNet and how they can be acquired. We evaluate two methods of extracting event coreference relations using WordNet relations against a manual annotation of 38 documents within the same topical domain of gun violence. We conclude that precision is reasonable but recall is lower because the WordNet hierarchy does not sufficiently capture the required coherence and perspective relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work presented in this paper was funded by the Netherlands Organization for Scientific Research (NWO) via the Spinoza grant, awarded to Piek Vossen in the project \"Understanding Language by Machines\". We also thank the reviewers for their constructive comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-etal-2018-state","url":"https:\/\/aclanthology.org\/D18-1234.pdf","title":"A State-transition Framework to Answer Complex Questions over Knowledge Base","abstract":"Although natural language question answering over knowledge graphs has been studied in the literature, existing methods have some limitations in answering complex questions.
To address that, in this paper, we propose a State Transition-based approach to translate a complex natural language question N to a semantic query graph (SQG) Q^S, which is used to match the underlying knowledge graph to find the answers to question N. In order to generate Q^S, we propose four primitive operations (expand, fold, connect and merge) and a learning-based state transition approach. Extensive experiments on several benchmarks (such as QALD, WebQuestions and ComplexQuestions) with two knowledge bases (DBpedia and Freebase) confirm the superiority of our approach compared with the state of the art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"artzi-zettlemoyer-2013-weakly","url":"https:\/\/aclanthology.org\/Q13-1005.pdf","title":"Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions","abstract":"The context in which language is used provides a strong signal for learning to recover its meaning. In this paper, we show it can be used within a grounded CCG semantic parsing approach that learns a joint model of meaning and context for interpreting and executing natural language instructions, using various types of weak supervision. The joint nature provides crucial benefits by allowing situated cues, such as the set of visible objects, to directly influence learning. It also enables algorithms that learn while executing instructions, for example by trying to replicate human actions. Experiments on a benchmark navigational dataset demonstrate strong performance under differing forms of supervision, including correctly executing 60% more instruction sets relative to the previous state of the art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was supported in part by DARPA under the DEFT program through the AFRL (FA8750-13-2-0019) and the CSSG (N11AP20020), the ARO (W911NF-12-1-0197), and the NSF (IIS-1115966). The authors thank Tom Kwiatkowski, Nicholas FitzGerald and Alan Ritter for helpful discussions, David Chen for providing the evaluation corpus, and the anonymous reviewers for helpful comments.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"derczynski-gaizauskas-2013-temporal","url":"https:\/\/aclanthology.org\/P13-2114.pdf","title":"Temporal Signals Help Label Temporal Relations","abstract":"Automatically determining the temporal order of events and times in a text is difficult, though humans can readily perform this task. Sometimes events and times are related through use of an explicit coordination which gives information about the temporal relation: expressions like \"before\" and \"as soon as\". We investigate the r\u00f4le that these coordinating temporal signals have in determining the type of temporal relations in discourse.
Using machine learning, we improve upon prior approaches to the problem, achieving over 80% accuracy at labelling the types of temporal relation between events and times that are related by temporal signals.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first author was supported by UK EPSRC grant EP\/K017896\/1, uComp (http:\/\/www.ucomp.eu\/).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"walker-etal-2021-athena","url":"https:\/\/aclanthology.org\/2021.emnlp-demo.15.pdf","title":"Athena 2.0: Contextualized Dialogue Management for an Alexa Prize SocialBot","abstract":"Athena 2.0 is an Alexa Prize SocialBot that has been a finalist in the last two Alexa Prize Grand Challenges. One reason for Athena's success is its novel dialogue management strategy, which allows it to dynamically construct dialogues and responses from component modules, leading to novel conversations with every interaction. Here we describe Athena's system design and performance in the Alexa Prize during the 20\/21 competition. A live demo of Athena as well as video recordings will provoke discussion on the state of the art in conversational AI.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Christian Benavidez, Yaqing Cao, James Graupera, Colin Harmon, Venkatesh Nagubandi, Meltem Ozcan, Diego Pedro, Navya Rao, Stephanie Rich, Jasiel Rivera-Trinadad and Aditya Tarde for helping with fun facts, Wikidata queries and prosody markup.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-2013-shift","url":"https:\/\/aclanthology.org\/P13-1001.pdf","title":"A Shift-Reduce Parsing Algorithm for Phrase-based String-to-Dependency Translation","abstract":"We introduce a shift-reduce parsing algorithm for phrase-based string-to-dependency translation. As the algorithm generates dependency trees for partial translations left-to-right in decoding, it allows for efficient integration of both n-gram and dependency language models. To resolve conflicts in shift-reduce parsing, we propose a maximum entropy model trained on the derivation graph of training data. As our approach combines the merits of phrase-based and string-to-dependency models, it achieves significant improvements over the two baselines on the NIST Chinese-English datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nothdurft-etal-2014-probabilistic","url":"https:\/\/aclanthology.org\/W14-4307.pdf","title":"Probabilistic Human-Computer Trust Handling","abstract":"Human-computer trust has been shown to be a critical factor in influencing the complexity and frequency of interaction in technical systems. Particularly incomprehensible situations in human-computer interaction may lead to reduced user trust in the system and thereby influence the style of interaction. Analogous to human-human interaction, explaining these situations can help to remedy negative effects.
In this paper we present our approach of augmenting task-oriented dialogs with selected explanation dialogs to foster the human-computer trust relationship in those kinds of situations. We have conducted a web-based study testing the effects of different goals of explanations on the components of human-computer trust. Subsequently, we show how these results can be used in our probabilistic trust handling architecture to augment pre-defined task-oriented dialogs.","label_nlp4sg":1,"task":["Probabilistic Human - Computer Trust Handling"],"method":["webbased study"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Transregional Collaborative Research Centre SFB\/TRR 62 \"Companion-Technology for Cognitive Technical Systems\" which is funded by the German Research Foundation (DFG).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aufrant-etal-2017-limsi","url":"https:\/\/aclanthology.org\/K17-3017.pdf","title":"LIMSI@CoNLL'17: UD Shared Task","abstract":"This paper describes LIMSI's submission to the CoNLL 2017 UD Shared Task, which is focused on small treebanks, and how to improve low-resourced parsing only by ad hoc combination of multiple views and resources. We present our approach for low-resourced parsing, together with a detailed analysis of the results for each test treebank. We also report extensive analysis experiments on model selection for the PUD treebanks, and on annotation consistency among UD treebanks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partly funded by the French Direction g\u00e9n\u00e9rale de l'armement and by the Agence Nationale de la Recherche (ParSiTi project, ANR-16-CE33-0021). We thank Joseph Le Roux for fruitful discussions and comments.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hutchins-1996-state","url":"https:\/\/aclanthology.org\/1996.amta-1.20.pdf","title":"The state of machine translation in Europe","abstract":"The first half of this general survey covers MT and translation tools in use, including translators' workstations, software localisation, and recent commercial and in-house MT systems. The second half covers the research scene, multilingual projects supported by the European Union, networking and evaluation.
In comparison with the United States and elsewhere, the distinctive features of activity in Europe in the field of machine translation and machine-aided translation are: (i) the development and popularity of translator workstations, (ii) the strong software localisation industry, (iii) the vigorous activity in the area of lexical resources and terminology, and (iv) the broad-based research on language engineering supported primarily by European Union funds.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-lucena-paraboni-2009-usp","url":"https:\/\/aclanthology.org\/W09-0633.pdf","title":"USP-EACH: Improved Frequency-based Greedy Attribute Selection","abstract":"We present a follow-up of our previous frequency-based greedy attribute selection strategy. The current version also takes into account the instructions given to the participants of TUNA trials regarding the use of location information, showing an overall improvement on string-edit distance values driven by the results on the Furniture domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by CNPq-Brazil (484015\/2007-9) and FAPESP (2006\/03941-7).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bisazza-federico-2010-chunk","url":"https:\/\/aclanthology.org\/W10-1735.pdf","title":"Chunk-Based Verb Reordering in VSO Sentences for Arabic-English Statistical Machine Translation","abstract":"In Arabic-to-English phrase-based statistical machine translation, a large number of syntactic disfluencies are due to wrong long-range reordering of the verb in VSO sentences, where the verb is anticipated with respect to the English word order. In this paper, we propose a chunk-based reordering technique to automatically detect and displace clause-initial verbs in the Arabic side of a word-aligned parallel corpus. This method is applied to preprocess the training data, and to collect statistics about verb movements. From this analysis, specific verb reordering lattices are then built on the test sentences before decoding them.
The application of our reordering methods on the training and test sets results in consistent BLEU score improvements on the NIST-MT 2009 Arabic-English benchmark.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the EuroMatrixPlus project (IST-231720) which is funded by the European Commission under the Seventh Framework Programme for Research and Technological Development.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bechet-etal-2000-tagging","url":"https:\/\/aclanthology.org\/P00-1011.pdf","title":"Tagging Unknown Proper Names Using Decision Trees","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oza-etal-2009-hindi","url":"https:\/\/aclanthology.org\/W09-3029.pdf","title":"The Hindi Discourse Relation Bank","abstract":"We describe the Hindi Discourse Relation Bank project, aimed at developing a large corpus annotated with discourse relations. We adopt the lexically grounded approach of the Penn Discourse Treebank, and describe our classification of Hindi discourse connectives, our modifications to the sense classification of discourse relations, and some crosslinguistic comparisons based on some initial annotations carried out so far.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by NSF grants EIA-02-24417, EIA-05-63063, and IIS-07-05671.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maskey-hirschberg-2006-summarizing","url":"https:\/\/aclanthology.org\/N06-2023.pdf","title":"Summarizing Speech Without Text Using Hidden Markov Models","abstract":"We present a method for summarizing speech documents without using any type of transcript\/text in a Hidden Markov Model framework. The hidden variables or states in the model represent whether a sentence is to be included in a summary or not, and the acoustic\/prosodic features are the observation vectors. The model predicts the optimal sequence of segments that best summarize the document. We evaluate our method by comparing the predicted summary with one generated by a human summarizer. Our results indicate that we can generate 'good' summaries even when using only acoustic\/prosodic information, which points toward the possibility of text-independent summarization for spoken documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Yang Liu, Michel Galley and Fadi Biadsy for helpful comments. 
This work was funded in part by the DARPA GALE program under a subcontract to SRI International.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-caragea-2021-target","url":"https:\/\/aclanthology.org\/2021.naacl-main.148.pdf","title":"Target-Aware Data Augmentation for Stance Detection","abstract":"The goal of stance detection is to identify whether the author of a text is in favor of, neutral or against a specific target. Despite substantial progress on this task, one of the remaining challenges is the scarcity of annotations. Data augmentation is commonly used to address annotation scarcity by generating more training samples. However, the augmented sentences that are generated by existing methods are either less diversified or inconsistent with the given target and stance label. In this paper, we formulate the data augmentation of stance detection as a conditional masked language modeling task and augment the dataset by predicting the masked word conditioned on both its context and the auxiliary sentence that contains target and label information. Moreover, we propose another simple yet effective method that generates target-aware sentences by replacing a target mention with another. Experimental results show that our proposed methods significantly outperform previous augmentation methods on 11 targets.","label_nlp4sg":1,"task":["Stance Detection"],"method":["Data Augmentation"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work is partially supported by the NSF Grants IIS-1912887 and IIS-1903963. We thank our reviewers for their insightful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"rambow-etal-2004-summarizing","url":"https:\/\/aclanthology.org\/N04-4027.pdf","title":"Summarizing Email Threads","abstract":"Summarizing threads of email is different from summarizing other types of written communication as it has an inherent dialog structure. We present initial research which shows that sentence extraction techniques can work for email threads as well, but profit from email-specific features. In addition, the presentation of the summary should take into account the dialogic structure of email communication.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abzianidze-bos-2019-thirty","url":"https:\/\/aclanthology.org\/W19-3302.pdf","title":"Thirty Musts for Meaning Banking","abstract":"Meaning banking-creating a semantically annotated corpus for the purpose of semantic parsing or generation-is a challenging task. It is quite simple to come up with a complex meaning representation, but it is hard to design a simple meaning representation that captures many nuances of meaning. This paper lists some lessons learned in nearly ten years of meaning annotation during the development of the Groningen Meaning Bank and the Parallel Meaning Bank.
The paper's format is rather unconventional: there is no explicit related work, no methodology section, no results, and no discussion (and the current snippet is not an abstract but actually an introductory preface). Instead, its structure is inspired by the work of Traum (2000) and Bender (2013). The list starts with a brief overview of the existing meaning banks (Section 1) and the rest of the items are roughly divided into three groups: corpus collection (Sections 2 and 3), annotation methods (Sections 4-11), and design of meaning representations (Sections 12-30). We hope this overview will give inspiration and guidance in creating improved meaning banks in the future.\nOther semantic annotation projects can be inspiring, help you to find solutions to hard annotation problems, or to find out where improvements to the state of the art are still needed (Abend and Rappoport, 2017). Good starting points are the English Resource Grammar (Flickinger, 2000, 2011), the Groningen Meaning Bank (GMB, Bos et al. 2017), the AMR Bank (Banarescu et al., 2013), the Parallel Meaning Bank (PMB, Abzianidze et al. 2017), Scope Control Theory (Butler and Yoshimoto, 2012), UCCA (Abend and Rappoport, 2013), Prague Semantic Dependencies (Haji\u010d et al., 2017) and the ULF Corpus based on Episodic Logic (Kim and Schubert, 2019). The largest differences between these approaches can be found in the expressive power of the meaning representations used. The simplest representations correspond to graphs (Banarescu et al., 2013; Abend and Rappoport, 2013); slightly more expressive ones correspond to first-order logic (Oepen et al., 2016; Butler and Yoshimoto, 2012), whereas others go beyond this (Kim and Schubert, 2019). Generally, an increase of expressive power causes a decrease of efficient reasoning (Blackburn and Bos, 2005). Semantic formalisms based on graphs are attractive because of their simplicity, but will face issues when dealing with negation in inference tasks (Section 21). The choice might depend on the application (e.g., if you are not interested in detecting contradictions, coping with negation is less important), but arguably, an open-domain meaning bank ought to be independent of a specific application.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the two anonymous reviewers for their comments-they helped to improve this paper considerably. Reviewer 1 gave us valuable pointers to the literature that we missed, and spotted many unclear and ambiguous formulations. Reviewer 2 was disappointed by the first version of this paper-we hope s\/he likes this improved version better. This work was funded by the NWO-VICI grant Lost in Translation Found in Meaning (288-89-003).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cicekli-2005-learning","url":"https:\/\/aclanthology.org\/2005.mtsummit-ebmt.4.pdf","title":"Learning Translation Templates with Type Constraints","abstract":"This paper presents a generalization technique that induces translation templates from given translation examples by replacing differing parts in these examples with typed variables. Since the type of each variable is also inferred during the learning process, each induced template is associated with a set of type constraints.
The type constraints that are associated with a translation template restrict the usage of that translation template in certain contexts in order to avoid some wrong translations. The types of variables are induced using the type lattices designed for both source language and target language. The proposed generalization technique has been implemented as part of an EBMT system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blunsom-etal-2009-note","url":"https:\/\/aclanthology.org\/P09-2085.pdf","title":"A Note on the Implementation of Hierarchical Dirichlet Processes","abstract":"The implementation of collapsed Gibbs samplers for non-parametric Bayesian models is non-trivial, requiring considerable book-keeping. Goldwater et al. (2006a) presented an approximation which significantly reduces the storage and computation overhead, but we show here that their formulation was incorrect and, even after correction, is grossly inaccurate. We present an alternative formulation which is exact and can be computed easily. However this approach does not work for hierarchical models, for which case we present an efficient data structure which has a better space complexity than the naive approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Tom Griffiths for providing the code used to produce Figure 2 and acknowledge the support of the EPSRC (Blunsom, grant EP\/D074959\/1; Cohn, grant GR\/T04557\/01).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hartvigsen-etal-2022-toxigen","url":"https:\/\/aclanthology.org\/2022.acl-long.234.pdf","title":"ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection","abstract":"Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create TOXIGEN, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model (Brown et al., 2020). Controlling machine generation in this way allows TOXIGEN to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of TOXIGEN and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of toxic examples are labeled as hate speech by human annotators. Using three publicly-available datasets, we show that finetuning a toxicity classifier on our data improves its performance on human-written data substantially.
We also demonstrate that TOXIGEN can be used to fight machine-generated toxicity as finetuning improves the classifier significantly on our evaluation subset.","label_nlp4sg":1,"task":["Adversarial and Implicit Hate Speech Detection"],"method":["Large - Scale Machine - Generated Dataset","adversarial classifier"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank Azure AI Platform and Misha Bilenko for sponsoring this work and providing compute resources, Microsoft Research for supporting our large scale human study, and Alexandra Olteanu for her feedback on human evaluation. We also thank the crowdworkers for their time and effort.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"saha-etal-2018-leveraging","url":"https:\/\/aclanthology.org\/W18-5919.pdf","title":"Leveraging Web Based Evidence Gathering for Drug Information Identification from Tweets","abstract":"In this paper, we have explored web-based evidence gathering and different linguistic features to automatically extract drug names from tweets and further classify such tweets as Adverse Drug Events or not. We have evaluated our proposed models with the datasets released by the SMM4H workshop for shared Task-1 and Task-3, respectively. Our evaluation results show that the proposed model achieved good results, with Precision, Recall and F-scores of 78.5%, 88% and 82.9% respectively for Task-1 and 33.2%, 54.7% and 41.3% for Task-3.","label_nlp4sg":1,"task":["Drug Information Identification"],"method":["Web Based Evidence"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dybkjaer-bernsen-2002-natural","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/213.pdf","title":"Natural Interactivity Resources -- Data, Annotation Schemes and Tools","abstract":"This paper presents results of three surveys of natural interactivity and multimodal resources carried out by a Working Group in the ISLE project on International Standards for Language Engineering. Information has been collected on a large number of corpora, coding schemes and coding tools worldwide. The paper presents the information collection process, the description and validation methods used, the surveyed resources, and brief conclusions for each of the three resource areas reviewed. Observations on user profiles, user needs and best practices are briefly presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the ISLE project by the European Commission's Human Language Technologies (HLT) Programme.
We would also like to thank all European ISLE NIMM participants for their contributions to the surveys described in this paper.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"baldwin-tanaka-2000-verb","url":"https:\/\/aclanthology.org\/Y00-1002.pdf","title":"Verb Alternations and Japanese: How, What and Where","abstract":"We set out to empirically identify the range and frequency of basic verb alternation types in Japanese, through analysis of the Goi-Taikei Japanese pattern-based valency dictionary. This is achieved through comparison of the selectional preference annotation on corresponding case slots, based on the assumption that selectional preferences are preserved under alternation. Three separate extraction methods are considered, founded around: (1) simple match of selectional restrictions; (2) selectional restriction matching, with recourse to penalised backing-off; and (3) semantic density, again with recourse to backing-off.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the crucial role the Goi-Taikei resources played in this research, and express their gratitude towards the NTT machine translation group for providing access to them. On a personal level, vital input was received from Christoph Neumann (TITech), Francis Bond (NTT), participants of the 3rd Morphology\/Lexicon Forum at Osaka University, and two anonymous reviewers.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eidelman-2008-inferring","url":"https:\/\/aclanthology.org\/P08-3003.pdf","title":"Inferring Activity Time in News through Event Modeling","abstract":"Many applications in NLP, such as question-answering and summarization, either require or would greatly benefit from the knowledge of when an event occurred. Creating an effective algorithm for identifying the activity time of an event in news is difficult in part because of the sparsity of explicit temporal expressions. This paper describes a domain-independent machine-learning based approach to assign activity times to events in news. We demonstrate that by applying topic models to text, we are able to cluster sentences that describe the same event, and utilize the temporal information within these event clusters to infer activity times for all sentences. Experimental evidence suggests that this is a promising approach, given evaluations performed on three distinct news article sets against the baseline of assigning the publication date. Our approach achieves 90%, 88.7%, and 68.7% accuracy, respectively, outperforming the baseline twice.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank Kathleen McKeown and Barry Schiffman for invaluable discussions and comments.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"miyao-tsujii-2008-exact","url":"https:\/\/aclanthology.org\/C08-2016.pdf","title":"Exact Inference for Multi-label Classification using Sparse Graphical Models","abstract":"This paper describes a parameter estimation method for multi-label classification that does not rely on approximate inference.
It is known that multi-label classification involving label correlation features is intractable, because the graphical model for this problem is a complete graph. Our solution is to exploit the sparsity of features, and express a model structure for each object by using a sparse graph. We can thereby apply the junction tree algorithm, allowing for efficient exact inference on sparse graphs. Experiments on three data sets for text categorization demonstrated that our method increases accuracy at a reasonable cost.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan) and Grant-in-Aid for Young Scientists (MEXT, Japan).","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2010-adaptive","url":"https:\/\/aclanthology.org\/C10-1075.pdf","title":"Adaptive Development Data Selection for Log-linear Model in Statistical Machine Translation","abstract":"This paper addresses the problem of dynamic model parameter selection for log-linear model based statistical machine translation (SMT) systems. In this work, we propose a principled method for this task by transforming it into a test-data-dependent development set selection problem. We present two algorithms for automatic development set construction, and evaluate our method on several NIST data sets for the Chinese-English translation task. Experimental results show that our method can effectively adapt log-linear model parameters to different test data, and consistently achieves good translation performance compared with conventional methods that use a fixed model parameter setting across different data sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-ng-2012-joint","url":"https:\/\/aclanthology.org\/C12-1033.pdf","title":"Joint Modeling for Chinese Event Extraction with Rich Linguistic Features","abstract":"Compared to the amount of research that has been done on English event extraction, there exists relatively little work on Chinese event extraction. We seek to push the frontiers of supervised Chinese event extraction research by proposing two extensions to Li et al.'s (2012) state-of-the-art event extraction system. First, we employ a joint modeling approach to event extraction, aiming to address the error propagation problem inherent in Li et al.'s pipeline system architecture. Second, we investigate a variety of rich knowledge sources for Chinese event extraction that encode knowledge ranging from the character level to the discourse level. Experimental results on the ACE 2005 dataset show that our joint-modeling, knowledge-rich approach significantly outperforms Li et al.'s approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their invaluable comments on an earlier draft of the paper.
This work was supported in part by NSF Grant IIS-1147644.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2017-deep","url":"https:\/\/aclanthology.org\/W17-2630.pdf","title":"Deep Active Learning for Named Entity Recognition","abstract":"Deep neural networks have advanced the state of the art in named entity recognition. However, under typical training procedures, advantages over classical methods emerge only with large datasets. As a result, deep learning is employed only when large public datasets or a large budget for manually labeling data is available. In this work, we show that by combining deep learning with active learning, we can outperform classical methods even with a significantly smaller amount of training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-etal-2009-investigation","url":"https:\/\/aclanthology.org\/D09-1057.pdf","title":"Investigation of Question Classifier in Question Answering","abstract":"In this paper, we investigate how an accurate question classifier contributes to a question answering system. We first present a Maximum Entropy (ME) based question classifier which makes use of head word features and their WordNet hypernyms. We show that our question classifier can achieve state-of-the-art performance on the standard UIUC question dataset. We then investigate quantitatively the contribution of this question classifier to a feature-driven question answering system. With our accurate question classifier and some standard question answer features, our question answering system performs close to the state of the art using the TREC corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank the three anonymous reviewers for their invaluable comments. This research was supported by British Telecom grant CT1080028046 and BISC Program of UC Berkeley.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2014-recursive","url":"https:\/\/aclanthology.org\/P14-1140.pdf","title":"A Recursive Recurrent Neural Network for Statistical Machine Translation","abstract":"In this paper, we propose a novel recursive recurrent neural network (R²NN) to model the end-to-end decoding process for statistical machine translation. R²NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom-up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly.
Experiments on a Chinese-to-English translation task show that our proposed R²NN can outperform the state-of-the-art baseline by about 1.5 points in BLEU.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fortuna-etal-2020-toxic","url":"https:\/\/aclanthology.org\/2020.lrec-1.838.pdf","title":"Toxic, Hateful, Offensive or Abusive? What Are We Really Classifying? An Empirical Analysis of Hate Speech Datasets","abstract":"The field of the automatic detection of hate speech and related concepts has raised a lot of interest in recent years. Different datasets were annotated and classified by means of applying different machine learning algorithms. However, few efforts have been made to clarify the applied categories and homogenize different datasets. Our study takes up this demand. We analyze six different publicly available datasets in this field with respect to their similarity and compatibility. We conduct two different experiments. First, we try to make the datasets compatible and represent the dataset classes as FastText word vectors, analyzing the similarity between different classes in an intra- and inter-dataset manner. Second, we submit the chosen datasets to the Perspective API Toxicity classifier, achieving different performances depending on the categories and datasets. One of the main conclusions of these experiments is that many different definitions are being used for equivalent concepts, which makes most of the publicly available datasets incompatible. Grounded in our analysis, we provide guidelines for future dataset collection and annotation.","label_nlp4sg":1,"task":["Analysis of Hate Speech Datasets"],"method":["Similarity analysis"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Acknowledgements: We thank the reviewers for their insightful comments. The first author is supported by the research grant SFRH\/BD\/143623\/2019, provided by the Portuguese national funding agency for science, research and technology, Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (FCT), within the scope of Operational Program Human Capital (POCH), supported by the European Social Fund and by national funds from MCTES. The work of the second and third authors has been supported by the European Commission in the context of the H2020 Research Program under the contract numbers 700024 and 786731.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"liu-etal-2014-feature","url":"https:\/\/aclanthology.org\/W14-5902.pdf","title":"Feature Selection for Highly Skewed Sentiment Analysis Tasks","abstract":"Sentiment analysis generally uses large feature sets based on a bag-of-words approach, which results in a situation where individual features are not very informative. In addition, many data sets tend to be heavily skewed. We approach this combination of challenges by investigating feature selection in order to reduce the large number of features to those that are discriminative. We examine the performance of five feature selection methods on two sentiment analysis data sets from different domains, each with different ratios of class imbalance.
Our findings show that feature selection is capable of improving classification accuracy only in balanced or slightly skewed situations. However, it is difficult to mitigate high skewing ratios. We also conclude that there does not exist a single method that performs best across data sets and skewing ratios. However, we found that TF*IDF² can help in identifying the minority class even in highly imbalanced cases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"joshi-etal-2015-computational","url":"https:\/\/aclanthology.org\/P15-2100.pdf","title":"A Computational Approach to Automatic Prediction of Drunk-Texting","abstract":"Alcohol abuse may lead to unsociable behavior such as crime, drunk driving, or privacy leaks. We introduce automatic drunk-texting prediction as the task of identifying whether a text was written when under the influence of alcohol. We experiment with tweets labeled using hashtags as distant supervision. Our classifiers use a set of N-gram and stylistic features to detect drunk tweets. Our observations present the first quantitative evidence that text contains signals that can be exploited to detect drunk-texting.","label_nlp4sg":1,"task":["drunk-texting prediction"],"method":["N-gram","distant supervision"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"razmara-kosseim-2008-answering","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/814_paper.pdf","title":"Answering List Questions using Co-occurrence and Clustering","abstract":"Although answering list questions is not a new research area, answering them automatically still remains a challenge. The median F-score of systems that participated in the TREC 2007 Question Answering track is still very low (0.085) while 74% of the questions had a median F-score of 0. In this paper, we propose a novel approach to answering list questions. This approach is based on the hypothesis that answer instances of a list question co-occur in the documents and sentences related to the topic of the question. We use a clustering method to group the candidate answers that co-occur more often. To pinpoint the right cluster, we use the target and the question keywords as spies to return the cluster that contains these keywords.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kelleher-etal-2015-temporal","url":"https:\/\/aclanthology.org\/W15-4808.pdf","title":"Temporal Forces and Type Coercion in Strings","abstract":"Durative forces are introduced to Finite State Temporality (the application of Finite State Methods to Temporal Semantics). Punctual and durative forces are shown to have natural representations as fluents which place certain constraints on strings. These forces are related to previous work on stative explanations of aspectual classification.
Given this extended ontology, it is shown how type coercion can be handled in this framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"akbik-vollgraf-2017-projector","url":"https:\/\/aclanthology.org\/D17-2008.pdf","title":"The Projector: An Interactive Annotation Projection Visualization Tool","abstract":"Previous works proposed annotation projection in parallel corpora to inexpensively generate treebanks or propbanks for new languages. In this approach, linguistic annotation is automatically transferred from a resource-rich source language (SL) to translations in a target language (TL). However, annotation projection may be adversely affected by translational divergences between specific language pairs. For this reason, previous work often required careful qualitative analysis of projectability of specific annotation in order to define strategies to address quality and coverage issues. In this demonstration, we present THE PROJECTOR, an interactive GUI designed to assist researchers in such analysis: it allows users to execute and visually inspect annotation projection in a range of different settings. We give an overview of the GUI, discuss use cases and illustrate how the tool can facilitate discussions with the research community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no 732328 (\"FashionBrain\").","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vasconcellos-1993-evaluation","url":"https:\/\/aclanthology.org\/1993.mtsummit-1.24.pdf","title":"Evaluation Method of Machine Translation","abstract":"When it comes to credentials in MT evaluation, I have earned my stripes mainly as a frustrated observer of the process. I have watched MT evaluations from ALPAC to DARPA. As a hands-on user of MT for more than 13 years, I have seen and thought about the many forces and factors that come together to make MT effective. And I have learned how difficult they are to measure, especially as they combine in countless different ways. It has always worried me to see hard-and-fast conclusions, sometimes sharply at odds with day-to-day experience, being drawn from isolated fragments of the picture, much as the apocryphal blind men felt different parts of the camel and made guesses about the whole animal that were widely off the mark.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ren-etal-2010-charting","url":"https:\/\/aclanthology.org\/W10-4212.pdf","title":"Charting the Potential of Description Logic for the Generation of Referring Expressions","abstract":"The generation of referring expressions (GRE), an important subtask of Natural Language Generation (NLG), is to generate phrases that uniquely identify domain entities.
Until recently, many GRE algorithms were developed using only simple formalisms, which were tailor-made for the task. Following the fast development of ontology-based systems, reinterpretations of GRE in terms of description logic (DL) have recently started to be studied. However, the expressive power of these DL-based algorithms is still limited, not exceeding that of older GRE approaches. In this paper, we propose a DL-based approach to GRE that exploits the full power of OWL2. Unlike existing approaches, the potential of reasoning in GRE is explored.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"popovic-2020-qrev","url":"https:\/\/aclanthology.org\/2020.eamt-1.52.pdf","title":"QRev: Machine Translation of User Reviews: What Influences the Translation Quality?","abstract":"This project aims to identify the important aspects of translation quality of user reviews which will represent a starting point for developing better automatic MT metrics and challenge test sets, and will also be helpful for developing MT systems for this genre. We work on two types of reviews: Amazon products and IMDb movies, written in English and translated into two closely related target languages, Croatian and Serbian.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is being conducted with the financial support of the European Association for Machine Translation under its programme \"2019 Sponsorship of Activities\" at the ADAPT Research Centre at Dublin City University. The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant 13\/RC\/2106. We would like to thank all the evaluators for providing us with annotations and feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"akba-etal-1995-learning","url":"https:\/\/aclanthology.org\/1995.tmi-1.16.pdf","title":"Learning English Verb Selection Rules from Hand-made Rules and Translation Examples","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liang-etal-2012-expert","url":"https:\/\/aclanthology.org\/C12-2069.pdf","title":"Expert Finding for Microblog Misinformation Identification","abstract":"The growth of social media provides a convenient communication scheme for people, but at the same time it becomes a hotbed of misinformation. The wide spread of misinformation over social media is injurious to public interest. We design a framework, which integrates collective intelligence and machine intelligence, to help identify misinformation. The basic idea is: (1) automatically index the expertise of users according to their microblog contents; and (2) match the experts with given suspected misinformation.
By sending the suspected misinformation to appropriate experts, we can collect the assessments of experts to judge the credibility of information, and help refute misinformation. In this paper, we focus on expert finding for misinformation identification. We propose a tag-based method to index the expertise of microblog users with social tags. Experiments on a real-world dataset demonstrate the effectiveness of our method for expert finding with respect to misinformation identification in microblogs.","label_nlp4sg":1,"task":["Misinformation Identification"],"method":["machine intelligence"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Natural Science Foundation of China (NSFC) under the grant No. 61170196 and 61202140.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"delmonte-2004-text","url":"https:\/\/aclanthology.org\/W04-0913.pdf","title":"Text Understanding with GETARUNS for Q\/A and Summarization","abstract":"Summarization and Question Answering need precise linguistic information with a much higher coverage than what is being offered by currently available statistically based systems. We assume that the starting point of any interesting application in these fields must necessarily be a good syntactic-semantic parser. In this paper we present the system for text understanding called GETARUNS, General Text and Reference Understanding System (Delmonte, 2003a). The heart of the system is a rule-based top-down DCG-style parser, which uses an LFG oriented grammar organization. The parser produces an f-structure as a DAG which is then used to create a Logical Form, the basis for all further semantic representation. GETARUNS has a highly sophisticated linguistically based semantic module which is used to build up the Discourse Model. Semantic processing is strongly modularized and distributed amongst a number of different submodules which take care of Spatio-Temporal Reasoning and Discourse Level Anaphora Resolution.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koper-schulte-im-walde-2017-applying","url":"https:\/\/aclanthology.org\/E17-2086.pdf","title":"Applying Multi-Sense Embeddings for German Verbs to Determine Semantic Relatedness and to Detect Non-Literal Language","abstract":"To date, the majority of computational models still determine the semantic relatedness between words (or larger linguistic units) on the type level. In this paper, we compare and extend multi-sense embeddings, in order to model and utilise word senses on the token level. We focus on the challenging class of complex verbs, and evaluate the model variants on various semantic tasks: semantic classification; predicting compositionality; and detecting non-literal language usage.
While there is no overall best model, all models significantly outperform a word2vec single-sense skip baseline, thus demonstrating the need to distinguish between word senses in a distributional semantic model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was supported by the DFG Collaborative Research Centre SFB 732 (Maximilian K\u00f6per) and the DFG Heisenberg Fellowship SCHU-2580\/1 (Sabine Schulte im Walde).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hirschmann-etal-2016-makes","url":"https:\/\/aclanthology.org\/C16-1301.pdf","title":"What Makes Word-level Neural Machine Translation Hard: A Case Study on English-German Translation","abstract":"Traditional machine translation systems often require heavy feature engineering and the combination of multiple techniques for solving different subproblems. In recent years, several end-to-end learning architectures based on recurrent neural networks have been proposed. Unlike traditional systems, Neural Machine Translation (NMT) systems learn the parameters of the model and require only minimal preprocessing. Memory and time constraints allow only a fixed number of words to be taken into account, which leads to the out-of-vocabulary (OOV) problem. In this work, we analyze why the OOV problem arises and why it is considered a serious problem in German. We study the effectiveness of compound word splitters for alleviating the OOV problem, resulting in an improvement of 2.5+ BLEU points over a baseline on the WMT'14 German-to-English translation task. For English-to-German translation, we use target-side compound splitting through a special syntax during training that allows the model to merge compound words and gain 0.2 BLEU points.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Calculations on the Lichtenberg high performance computer of the Technische Universit\u00e4t Darmstadt were conducted for this research. The Titan Black GPU used for this research was donated by the NVIDIA Corporation. This work has been supported by the German Institute for Educational Research (DIPF) under the Knowledge Discovery in Scientific Literature (KDSL) program, and the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994\/1.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meng-etal-2013-translation","url":"https:\/\/aclanthology.org\/D13-1108.pdf","title":"Translation with Source Constituency and Dependency Trees","abstract":"We present a novel translation model, which simultaneously exploits the constituency and dependency trees on the source side, to combine the advantages of two types of trees. We take head-dependents relations of dependency trees as the backbone and incorporate phrasal nodes of constituency trees as the source side of our translation rules, and the target side as strings. Our rules hold the property of long-distance reorderings and the compatibility with phrases.
Large-scale experimental results show that our model achieves significant improvements over the constituency-to-string (+2.45 BLEU on average) and dependency-to-string (+0.91 BLEU on average) models, which only employ a single type of tree, and significantly outperforms the state-of-the-art hierarchical phrase-based model (+1.12 BLEU on average), on three Chinese-English NIST test sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors were supported by National Natural Science Foundation of China (Contracts 61202216),","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maldonado-harabagiu-2020-language","url":"https:\/\/aclanthology.org\/2020.lrec-1.276.pdf","title":"The Language of Brain Signals: Natural Language Processing of Electroencephalography Reports","abstract":"Brain signals are captured by clinical electroencephalography (EEG) which is an excellent tool for probing neural function. When EEG tests are performed, a textual EEG report is generated by the neurologist to document the findings, thus using language that describes the brain signals and their clinical correlations. Even with the impetus provided by the BRAIN initiative (braininitiative.nih.gov), there are no annotations available in texts that use natural language describing the brain activities and their correlations with various pathologies. In this paper we describe an annotation effort carried out on a large corpus of EEG reports, providing examples of EEG-specific and clinically relevant concepts. In addition, we detail our annotation schema for brain signal attributes. We also discuss the resulting annotation of long-distance relations between concepts in EEG reports. By exemplifying a self-attention joint-learning method used to predict concept, attribute and relation annotations in the EEG report corpus, we discuss the promising results of automatic annotations, hoping that our effort will inform the design of novel knowledge capture techniques that will include the language of brain signals.","label_nlp4sg":1,"task":["Electroencephalography Reports"],"method":["annotation schema"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"Research reported in this publication was supported by the National Human Genome Research Institute (NHGRI) of the National Institutes of Health under award number U01HG008468, respectively. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2020-rationalizing","url":"https:\/\/aclanthology.org\/2020.acl-main.719.pdf","title":"Rationalizing Medical Relation Prediction from Corpus-level Statistics","abstract":"Nowadays, the interpretability of machine learning models is becoming increasingly important, especially in the medical domain. Aiming to shed some light on how to rationalize medical relation prediction, we present a new interpretable framework inspired by existing theories on how human memory works, e.g., theories of recall and recognition.
Given the corpus-level statistics, i.e., a global co-occurrence graph of a clinical text corpus, to predict the relations between two entities, we first recall rich contexts associated with the target entities, and then recognize relational interactions between these contexts to form model rationales, which will contribute to the final prediction. We conduct experiments on a real-world public clinical dataset and show that our framework can not only achieve competitive predictive performance against a comprehensive list of neural baseline models, but also present rationales to justify its prediction. We further collaborate closely with medical experts to verify the usefulness of our model rationales for clinical decision making.","label_nlp4sg":1,"task":["Medical Relation Prediction"],"method":["framework"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Srinivasan Parthasarathy, Ping Zhang, Samuel Yang and Kaushik Mani for valuable discussions. We also thank the anonymous reviewers for their hard work and constructive feedback. This research was sponsored in part by the Patient-Centered Outcomes Research Institute Funding ME-2017C1-6413, the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and Ohio Supercomputer Center (Center, 1987). The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"werner-1988-formal","url":"https:\/\/aclanthology.org\/C88-2152.pdf","title":"A Formal Computational Semantics and Pragmatics of Speech Acts","abstract":"This paper outlines a formal computational semantics and pragmatics of the major speech act types. A theory of force is given that allows us to give a semantically and pragmatically motivated taxonomy of speech acts. The relevance of the communication theory to complex distributed artificial intelligence (DAI) systems is described.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-2004-automatic","url":"https:\/\/aclanthology.org\/N04-2006.pdf","title":"Automatic Article Restoration","abstract":"One common mistake made by non-native speakers of English is to drop the articles a, an, or the. We apply the log-linear model to automatically restore missing articles based on features of the noun phrase. We first show that the model yields competitive results in article generation. Further, we describe methods to adjust the model with respect to the initial quality of the sentence.
Our best results are 20.5% article error rate (insertions, deletions and substitutions) for sentences where 30% of the articles have been dropped, and 38.5% for those where 70% of the articles have been dropped.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank Michael Collins and the four anonymous reviewers for their very helpful comments. This work is in part supported by a fellowship from the National Sciences and Engineering Research Council of Canada, and by the NTT Corporation.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nakov-etal-2012-optimizing","url":"https:\/\/aclanthology.org\/C12-1121.pdf","title":"Optimizing for Sentence-Level BLEU+1 Yields Short Translations","abstract":"We study a problem with pairwise ranking optimization (PRO): that it tends to yield too short translations. We find that this is partially due to the inadequate smoothing in PRO's BLEU+1, which boosts the precision component of BLEU but leaves the brevity penalty unchanged, thus destroying the balance between the two, compared to BLEU. It is also partially due to PRO optimizing for a sentence-level score without a global view on the overall length, which introduces a bias towards short translations; we show that letting PRO optimize a corpus-level BLEU yields a perfect length. Finally, we find some residual bias due to the interaction of PRO with BLEU+1: such a bias does not exist for a version of MIRA with sentence-level BLEU+1. We propose several ways to fix the length problem of PRO, including smoothing the brevity penalty, scaling the effective reference length, grounding the precision component, and unclipping the brevity penalty, which yield sizable improvements in test BLEU on two Arabic-English datasets: IWSLT (+0.65) and NIST (+0.37).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their comments, which helped us improve the paper. For example, the average source\/reference length ratio for our NIST tuning dataset MT06 is 1.248, which is very close to that for MT09, which is 1.252, and thus translation\/reference length ratios are very close as well; however, this is not so for MT05, where the source\/reference ratio is only 1.183; thus, tuning on MT06 and testing on MT05 yields translations that are too long even for standard PRO. There is also some imbalance in the source\/reference length ratios of dev2010 vs. tst2010 for IWSLT, and thus we experimented with reversing them. The internal scoring tool segmentation and the recasing have an influence as well; though, we have seen in columns 6-7 of all tables that (1) the length ratios are quite close, and (2) the relative improvements in terms of BLEU are quite correlated for multi-bleu and for the NIST scoring tool v13a.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zaharia-etal-2021-dialect","url":"https:\/\/aclanthology.org\/2021.vardial-1.13.pdf","title":"Dialect Identification through Adversarial Learning and Knowledge Distillation on Romanian BERT","abstract":"Dialect identification is a task with applicability in a vast array of domains, ranging from automatic speech recognition to opinion mining.
This work presents our architectures used for the VarDial 2021 Romanian Dialect Identification subtask. We introduced a series of solutions based on Romanian or multilingual Transformers, as well as adversarial training techniques. At the same time, we experimented with a knowledge distillation tool in order to check whether a smaller model can maintain the performance of our best approach. Our best solution managed to obtain a weighted F1-score of 0.7324, allowing us to obtain 2nd place on the leaderboard.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zinn-etal-2018-handling","url":"https:\/\/aclanthology.org\/L18-1285.pdf","title":"Handling Big Data and Sensitive Data Using EUDAT's Generic Execution Framework and the WebLicht Workflow Engine.","abstract":"Web-based tools and workflow engines often cannot be applied to data with restrictive property rights and to big data. In both cases, it is better to move the tools to the data rather than having the data travel to the tools. In this paper, we report on progress in bringing together the CLARIN-based WebLicht workflow engine with the EUDAT-based Generic Execution Framework to address this issue.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We have started our article with GEF's underlying motivation that datasets have become much larger than the tools that process them, or that there are datasets that are not allowed to leave their hosting institution for legal reasons. Moving the tools to the data rather than the data to the tools seems reasonable. In linguistics, there is sensitive data, but big data issues become more prominent, too. Take, for instance, the Newsreader project with the aim to parse 100.000+ news articles live on a daily basis (Vossen et al., 2016). For these projects, a future version of WebLicht based on our approach can play a key role in orchestrating and executing a variety of workflows to gather, collect and post-process such data. The integration of the CLARIN WebLicht workflow engine and its services with EUDAT's Generic Execution Framework makes it possible to bring the language processing tools to an execution environment that also hosts the data, hence allowing language processing of sensitive and big data.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wawer-mykowiecka-2017-supervised","url":"https:\/\/aclanthology.org\/W17-1915.pdf","title":"Supervised and Unsupervised Word Sense Disambiguation on Word Embedding Vectors of Unambigous Synonyms","abstract":"This paper compares two approaches to word sense disambiguation using word embeddings trained on unambiguous synonyms. The first one is an unsupervised method based on computing log probability from sequences of word embedding vectors, taking into account ambiguous word senses and guessing the correct sense from context. The second method is supervised. We use a multilayer neural network model to learn a context-sensitive transformation that maps an input vector of an ambiguous word into an output vector representing its sense.
We evaluate both methods on corpora with manual annotations of word senses from the Polish wordnet.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The paper is partially supported by the Polish National Science Centre project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts number 2014\/15\/B\/ST6\/05186.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"portabella-etal-2000-nanitrans","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/345.pdf","title":"NaniTrans: a Speech Labelling Tool","abstract":"This paper deals with a description of NaniTrans, a tool for segmentation and labeling of speech. The tool is programmed to work on the MATLAB application interface, on any of the supported platforms (Unix, Windows, Macintosh). The tool has been designed to annotate large speech databases, which can also be partially preprocessed (but require manual supervision). It supports the definition of an environment of annotation: set of annotation levels (orthographic, phonetic, etc.), display mode (how to show information), graphic representation (waveform, spectrogram), keyboard shortcuts, etc. This configuration is then used on a speech database. A safe file locking system allows many annotators to work concurrently on the same speech database. The tool is very friendly and easy to use by inexperienced annotators, and it is designed to optimize speed using both keyboard and mouse. New options or speech processing tools can be easily added by using any MATLAB or user-defined function.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zeng-etal-2021-realtrans","url":"https:\/\/aclanthology.org\/2021.findings-acl.218.pdf","title":"RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer","abstract":"End-to-end simultaneous speech translation (SST), which directly translates speech in one language into text in another language in real time, is useful in many scenarios but has not been fully investigated. In this work, we propose RealTranS, an end-to-end model for SST. To bridge the modality gap between speech and text, RealTranS gradually downsamples the input speech with interleaved convolution and unidirectional Transformer layers for acoustic modeling, and then maps speech features into text space with a weighted-shrinking operation and a semantic encoder. Besides, to improve the model performance in simultaneous scenarios, we propose a blank penalty to enhance the shrinking quality and a Wait-K-Stride-N strategy to allow local reranking during decoding. Experiments on public and widely-used datasets show that RealTranS with the Wait-K-Stride-N strategy outperforms prior end-to-end models as well as cascaded models in diverse latency settings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank MindSpore, a new deep learning computing framework, for the partial support of this work.
Given the superior performance of Huawei Ascend AI Processor and MindSpore framework, our code will be released based on MindSpore at (https:\/\/gitee.com\/mindspore\/mindspore\/tree\/master\/model_zoo\/research\/nlp\/realtrans).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"buyukoz-etal-2020-analyzing","url":"https:\/\/aclanthology.org\/2020.aespen-1.4.pdf","title":"Analyzing ELMo and DistilBERT on Socio-political News Classification","abstract":"This study evaluates the robustness of two state-of-the-art deep contextual language representations, ELMo and DistilBERT, on supervised learning of binary protest news classification (PC) and sentiment analysis (SA) of product reviews. A \"cross-context\" setting is enabled using test sets that are distinct from the training data. The models are fine-tuned and fed into a Feed-Forward Neural Network (FFNN) and a Bidirectional Long Short Term Memory network (BiLSTM). Multinomial Naive Bayes (MNB) and Linear Support Vector Machine (LSVM) are used as traditional baselines. The results suggest that DistilBERT can transfer generic semantic knowledge to other domains better than ELMo. DistilBERT is also 30% smaller and 83% faster than ELMo, which suggests superiority for smaller computational training budgets. When generalization is not the utmost preference and the test domain is similar to the training domain, the traditional machine learning (ML) algorithms can still be considered as more economical alternatives to deep language representations.","label_nlp4sg":1,"task":["News Classification"],"method":["ELMo","DistilBERT"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We are grateful to Ko\u00e7 University Emerging Markets Welfare research team, which is funded by the European Research Council (ERC) Starting Grant 714868 awarded to Dr. Erdem Y\u00f6r\u00fck for their generosity in providing the data and sharing invaluable insight. We thank the Text Analytics and Bioinformatics (TABI) Lab members in Bogazi\u00e7i University for their inspiring feedback and support. The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources). GEBIP Award of the Turkish Academy of Sciences (to A.O.) is gratefully acknowledged","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"yang-etal-2021-rap","url":"https:\/\/aclanthology.org\/2021.emnlp-main.659.pdf","title":"RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models","abstract":"Backdoor attacks, which maliciously control a well-trained model's outputs on instances with specific triggers, have recently been shown to be serious threats to the safety of reusing deep neural networks (DNNs). In this work, we propose an efficient online defense mechanism based on robustness-aware perturbations. Specifically, by analyzing the backdoor training process, we point out that there exists a big gap of robustness between poisoned and clean samples. Motivated by this observation, we construct a word-based robustness-aware perturbation to distinguish poisoned samples from clean samples to defend against the backdoor attacks on natural language processing (NLP) models.
Moreover, we give a theoretical analysis of the feasibility of our robustness-aware perturbation-based defense method. Experimental results on sentiment analysis and toxic detection tasks show that our method achieves better defending performance and much lower computational costs than existing online defense methods. Our code is available at https:\/\/github.com\/lancopku\/RAP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We sincerely thank all the anonymous reviewers for their constructive comments and valuable suggestions. This work was supported by a Tencent Research Grant. This work is partly supported by Beijing Academy of Artificial Intelligence (BAAI). Xu Sun is the corresponding author of this paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cieri-dipersio-2014-intellectual","url":"https:\/\/aclanthology.org\/W14-5211.pdf","title":"Intellectual Property Rights Management with Web Service Grids","abstract":"This paper enumerates the ways in which configurations of web services may complicate issues of licensing language resources, whether data or tools. It details specific licensing challenges within the context of the US Language Application (LAPPS) Grid, sketches a solution under development and highlights ways in which that approach may be extended for other web service configurations.","label_nlp4sg":1,"task":["Intellectual Property Rights Management"],"method":["web services"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Science Foundation grants NSF-ACI 1147944 and NSF-ACI 1147912.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"hutchins-1997-looking","url":"https:\/\/aclanthology.org\/1997.tmi-1.3.pdf","title":"Looking back to 1952: the first MT conference","abstract":"In a review of the proceedings of the first MT conference, held at MIT in June 1952, it is found that the principal issues discussed have continued to be the focus of MT research to the present day, despite the substantial computational and linguistic advances since the early 1950s.\nJust five years after Warren Weaver first suggested the possibility of machine translation (MT), and no more than three years after his memorandum in July 1949, which effectively launched research in the field [Weaver 1949], the first conference devoted to the topic was convened at the Massachusetts Institute of Technology from 17 to 20 June 1952. It was organised by Yehoshua Bar-Hillel, who had been appointed at MIT to the first full-time post in MT, not as a researcher, as he was later to stress, but in order to review the prospects and to make recommendations.
In 1951 Bar-Hillel visited all the US sites which had embarked on some kind of MT activity, and wrote a 'state-of-the-art' paper, which was to form the background information for the conference [Bar-Hillel 1951].","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"keswani-etal-2020-iitk-semeval","url":"https:\/\/aclanthology.org\/2020.semeval-1.150.pdf","title":"IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of Internet Memes","abstract":"Social media is abundant in visual and textual information presented together or in isolation. Memes are the most popular form, belonging to the former class. In this paper, we present our approaches for the Memotion Analysis problem as posed in SemEval-2020 Task 8. The goal of this task is to classify memes based on their emotional content and sentiment. We leverage techniques from Natural Language Processing (NLP) and Computer Vision (CV) towards the sentiment classification of internet memes (Subtask A). We consider Bimodal (text and image) as well as Unimodal (text-only) techniques in our study ranging from the Na\u00efve Bayes classifier to Transformer-based approaches. Our results show that a text-only approach, a simple Feed Forward Neural Network (FFNN) with Word2vec embeddings as input, performs better than all the others. We stand first in the Sentiment analysis task with a relative improvement of 63% over the baseline macro-F1 score. Our work is relevant to any task concerned with the combination of different modalities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cutting-1994-porting","url":"https:\/\/aclanthology.org\/W93-0405.pdf","title":"Porting a Stochastic Part-of-Speech Tagger to Swedish","abstract":"The Xerox Part-of-Speech Tagger (XPOST) claims to be practical. One aspect of practicality as defined here is reusability. Thus it is meant to be easy to port XPOST to a new language. To test this, XPOST was ported to Swedish. This port is described and evaluated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eeg-olofsson-1977-algoritmisk","url":"https:\/\/aclanthology.org\/W77-0104.pdf","title":"Algoritmisk textanalys -- en presentation (Algorithmic text analysis -- A presentation) [In Swedish]","abstract":"Within the Algoritmisk textanalys (Algorithmic Text Analysis) project, we are developing formalized methods for the grammatical analysis of authentic Swedish text. One of the project's aims is practical: we want to build a working program system that can analyze large volumes of text economically. The existence of such a system is essential to the work of Logoteket, the national service body that provides machine-readable texts and text-processing results.
The work naturally also yields theoretically relevant results. One example is Staffan Hellberg's formal description of Swedish morphology, developed within the project. Other linguistically interesting rule systems underlying the analysis include Jerker J\u00e4rborg's surface-structure syntax. The design of the analysis system is also of theoretical interest as the expression of a perception strategy.\nThe project's concrete intermediate goals at present are to disambiguate all homography in the input text and to provide it with a simple syntactic structure description. A practically significant by-product of the system's operation is thus a lemmatization of the text. The syntactic surface-structure description naturally has value in itself, but it can also serve as a starting point for a deeper syntactic-semantic analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1977,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meyer-gamback-2019-platform","url":"https:\/\/aclanthology.org\/W19-3516","title":"A Platform Agnostic Dual-Strand Hate Speech Detector","abstract":"Hate speech detectors must be applicable across a multitude of services and platforms, and there is hence a need for detection approaches that do not depend on any information specific to a given platform. For instance, the information stored about the text's author may differ between services, and so using such data would reduce a system's general applicability. The paper thus focuses on using exclusively text-based input in the detection, in an optimised architecture combining Convolutional Neural Networks and Long Short-Term Memory networks.
The hate speech detector merges two strands with character n-grams and word embeddings to produce the final classification, and is shown to outperform comparable previous approaches.","label_nlp4sg":1,"task":["Hate speech detection"],"method":["Convolutional Neural Networks","Long Short-Term Memory networks","character n-grams","word embeddings"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Thanks to Zeerak Waseem and Dirk Hovy for providing the data set used here -and to all other researchers and annotators who contribute publicly available data. Thanks also to all the anonymous reviewers for many useful comments, and to Elise Fehn Unsv\u00e5g, Vebj\u00f8rn Isaksen, Steve Durairaj Swamy, Anupam Jamatia, and Amitava Das for many insightful discussions and experiments on hate speech detection approaches, features and data sets.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"wilks-1975-methodology","url":"https:\/\/aclanthology.org\/T75-2026","title":"Methodology in AI and Natural Language Understanding","abstract":"But it is not easy to tease this serious difference out from the skein of non-serious methodological discussions.\nBy \"non-serious methodological etc.\" I mean such agreed points as that (i) it would be nicer to have an understanding system working with a vocabulary of Nk words rather than Mk, where N>M, and moreover, that the vocabularies should contain words of maximally different types: so that \"house\", \"fish\", \"committee\" and \"testimonial\" would be a better vocabulary than \"house\", \"cottage\", \"palace\" and \"apartment block.\" And that, (ii) it would be nicer to have an understanding system that correctly understood N% of input sentences than one which understood M%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1975,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beyer-etal-2020-embedding","url":"https:\/\/aclanthology.org\/2020.lrec-1.296","title":"Embedding Space Correlation as a Measure of Domain Similarity","abstract":"Prior work has determined domain similarity using text-based features of a corpus. However, when using pre-trained word embeddings, the underlying text corpus might not be accessible anymore. Therefore, we propose the CCA measure, a new measure of domain similarity based directly on the dimension-wise correlations between corresponding embedding spaces. Our results suggest that an inherent notion of domain can be captured this way, as we are able to reproduce our findings for different domain comparisons for English, German, Spanish and Czech as well as in cross-lingual comparisons. We further find a threshold at which the CCA measure indicates that two corpora come from the same domain in a monolingual setting by applying permutation tests. By evaluating the usability of the CCA measure in a domain adaptation application, we also show that it can be used to determine which corpora are more similar to each other in a cross-domain sentiment detection task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A.
The authors of this work take full responsibility for its content.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chatterjee-etal-2020-findings","url":"https:\/\/aclanthology.org\/2020.wmt-1.75","title":"Findings of the WMT 2020 Shared Task on Automatic Post-Editing","abstract":"We present the results of the 6th round of the WMT task on MT Automatic Post-Editing. The task consists in automatically correcting the output of a \"black-box\" machine translation system by learning from existing human corrections of different sentences. This year, the challenge consisted of fixing the errors present in English Wikipedia pages translated into German and Chinese by state-of-the-art, not domain-adapted neural MT (NMT) systems unknown to participants. Six teams participated in the English-German task, submitting a total of 11 runs. Two teams participated in the English-Chinese task submitting 2 runs each. Due to i) the different source\/domain of data compared to the past (Wikipedia vs Information Technology), ii) the different quality of the initial translations to be corrected and iii) the introduction of a new language pair (English-Chinese), this year's results are not directly comparable with last year's round. However, on both language directions, participants' submissions show considerable improvements over the baseline results. On English-German, the top-ranked system improves over the baseline by -11.35 TER and +16.68 BLEU points, while on English-Chinese the improvements are respectively up to -12.13 TER and +14.57 BLEU points. Overall, coherent gains are also highlighted by the outcomes of human evaluation, which confirms the effectiveness of APE to improve MT quality, especially in the new generic domain selected for this year's round.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Apple and Google Research for their support and sponsorship in organizing the 2020 APE shared task.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"baumann-etal-2009-assessing","url":"https:\/\/aclanthology.org\/N09-1043","title":"Assessing and Improving the Performance of Speech Recognition for Incremental Systems","abstract":"In incremental spoken dialogue systems, partial hypotheses about what was said are required even while the utterance is still ongoing. We define measures for evaluating the quality of incremental ASR components with respect to the relative correctness of the partial hypotheses compared to hypotheses that can optimize over the complete input, the timing of hypothesis formation relative to the portion of the input they are about, and hypothesis stability, defined as the number of times they are revised. We show that simple incremental post-processing can improve stability dramatically, at the cost of timeliness (from 90% of edits of hypotheses being spurious down to 10% at a lag of 320 ms). The measures are not independent, and we show how system designers can find a desired operating point for their ASR.
To our knowledge, we are the first to suggest and examine a variety of measures for assessing incremental ASR and improve performance on this basis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by a DFG grant in the Emmy Noether programme. We wish to thank the anonymous reviewers for helpful comments.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"viegas-1999-developing","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.53","title":"Developing knowledge bases for MT with linguistically motivated quality-based learning","abstract":"In this paper we present a proposal to help bypass the bottleneck of knowledge-based systems working under the assumption that the knowledge sources are complete. We show how to create, on the fly, new lexicon entries using lexico-semantic rules and how to create new concepts for unknown words, investigating a new linguistically-motivated model to trigger concepts in context.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guggilla-2016-cogalex","url":"https:\/\/aclanthology.org\/W16-5314","title":"CogALex-V Shared Task: CGSRC - Classifying Semantic Relations using Convolutional Neural Networks","abstract":"In this paper, we describe a system (CGSRC) for classifying four semantic relations: synonym, hypernym, antonym and meronym using convolutional neural networks (CNN). We have participated in CogALex-V semantic shared task of corpus-based identification of semantic relations. Proposed approach using CNN-based deep neural networks leveraging pre-compiled word2vec distributional neural embeddings achieved 43.15% weighted-F1 accuracy on subtask-1 (checking existence of a relation between two terms) and 25.24% weighted-F1 accuracy on subtask-2 (classifying relation types).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nwesri-etal-2007-finding","url":"https:\/\/aclanthology.org\/W07-0807","title":"Finding Variants of Out-of-Vocabulary Words in Arabic","abstract":"Transliteration of a word into another language often leads to multiple spellings. Unless an information retrieval system recognises different forms of transliterated words, a significant number of documents will be missed when users specify only one spelling variant. 
Using two different datasets, we evaluate several approaches to finding variants of foreign words in Arabic, and show that the longest common subsequence (LCS) technique is the best overall.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qin-etal-2019-entity","url":"https:\/\/aclanthology.org\/D19-1013","title":"Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever","abstract":"Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system. Previous sequence-to-sequence (Seq2Seq) dialogue generation work treats the KB query as an attention over the entire KB, without the guarantee that the generated entities are consistent with each other. In this paper, we propose a novel framework which queries the KB in two steps to improve the consistency of generated entities. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce a KB retrieval component which explicitly returns the most relevant KB row given a dialogue history. The retrieval result is further used to filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. In the second step, we further perform the attention mechanism to address the most correlated KB column. Two methods are proposed to make the training feasible without labeled retrieval data, which include distant supervision and Gumbel-Softmax technique. Experiments on two publicly available task-oriented dialog datasets show the effectiveness of our model by outperforming the baseline systems and producing entity-consistent responses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cao-etal-2017-quasi","url":"https:\/\/aclanthology.org\/D17-1003","title":"Quasi-Second-Order Parsing for 1-Endpoint-Crossing, Pagenumber-2 Graphs","abstract":"We propose a new Maximum Subgraph algorithm for first-order parsing to 1-endpoint-crossing, pagenumber-2 graphs. Our algorithm has two characteristics: (1) it separates the construction for noncrossing edges and crossing edges; (2) in a single construction step, whether to create a new arc is deterministic. These two characteristics make our algorithm relatively easy to extend to incorporate crossing-sensitive second-order features. We then introduce a new algorithm for quasi-second-order parsing.
Experiments demonstrate that second-order features are helpful for Maximum Subgraph parsing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobayashi-etal-2021-neural","url":"https:\/\/aclanthology.org\/2021.codi-sharedtask.2","title":"Neural Anaphora Resolution in Dialogue","abstract":"We describe the systems that we developed for the three tracks of the CODI-CRAC 2021 shared task, namely entity coreference resolution, bridging resolution, and discourse deixis resolution. Our team ranked second for entity coreference, first for bridging resolution, and first for discourse deixis resolution.\nEntity Coreference Resolution\nBaseline: Xu and Choi's (2020) implementation of Lee et al.'s (2018) span-based model\nLearning framework: A pipeline architecture consisting of a mention detection component and an entity coreference component. The coreference component extends the baseline by (1) adding a sentence distance feature; (2) modifying the objective so that it can output singleton clusters; and (3) enforcing dialogue-specific non-coreference constraints.\nMarkable identification: A mention detection model (adapted from Xu and Choi's coreference model) is trained to identify the entity mentions.\nTraining data: 90% of the official training and dev sets\nDevelopment data: The remaining 10% of the official training and dev sets\nDiscourse Deixis Resolution\nBaseline: Xu and Choi's (2020) implementation of Lee et al.'s (2018) span-based model\nLearning framework: Joint mention detection and coreference resolution enabled by modifying the objective function in Xu and Choi's model. For mention detection, each span is classified as a candidate anaphor, a candidate antecedent, or a non-mention. For deixis resolution, only candidate anaphors will be resolved, and they can only be resolved to candidate antecedents. The model developed for the Predicted setting differs from those developed for the Gold setting in terms of the heuristics used to determine which spans are candidate anaphors.\nMarkable identification: Obtained as part of joint mention detection and deixis resolution\nTraining data: Two setups: (1) use all official training and dev sets, leaving out the official dev set of the target domain; and (2) use 90% of the official training and dev sets.\nDevelopment data: Two setups: (1) use only the dev set for the target domain; and (2) use the remaining 10% of the official training and dev sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by NSF Grant IIS-1528037.
Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the NSF.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sennrich-2013-promoting","url":"https:\/\/aclanthology.org\/2013.mtsummit-posters.2","title":"Promoting Flexible Translations in Statistical Machine Translation","abstract":"While SMT systems can learn to translate multiword expressions (MWEs) from parallel text, they typically have no notion of non-compositionality, and thus overgeneralise translations that are only used in certain contexts. This paper describes a novel approach to measure the flexibility of a phrase pair, i.e. its tendency to occur in many contexts, in contrast to phrase pairs that are only valid in one or a few fixed expressions. The measure learns from the parallel training text, is simple to implement and language independent. We argue that flexible phrase pairs should be preferred over inflexible ones, and present experiments with phrase-based and hierarchical translation models in which we observe performance gains of up to 0.9 BLEU points.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I want to thank Martin Volk, Mark Fishel, and the anonymous reviewers for their valuable feedback. This research was funded by the Swiss National Science Foundation under grant 105215_126999.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loukina-etal-2015-feature","url":"https:\/\/aclanthology.org\/W15-0602","title":"Feature selection for automated speech scoring","abstract":"Automated scoring systems used for the evaluation of spoken or written responses in language assessments need to balance good empirical performance with the interpretability of the scoring models. We compare several methods of feature selection for such scoring systems and show that the use of shrinkage methods such as Lasso regression makes it possible to rapidly build models that both satisfy the requirements of validity and interpretability, crucial in assessment contexts, as well as achieve good empirical performance.","label_nlp4sg":1,"task":["automated speech scoring"],"method":["Feature selection","shrinkage methods","Lasso regression"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Lawrence Davis and Florian Lorenz for their feedback and discussion; Keelan Evanini, Jidong Tao and Su-Youn Yoon for their comments on the final draft and Ren\u00e9 Lawless for editorial help.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moore-lewis-2010-intelligent","url":"https:\/\/aclanthology.org\/P10-2041","title":"Intelligent Selection of Language Model Training Data","abstract":"We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation.
Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"matusov-etal-2006-computing","url":"https:\/\/aclanthology.org\/E06-1005","title":"Computing Consensus Translation for Multiple Machine Translation Systems Using Enhanced Hypothesis Alignment","abstract":"This paper describes a novel method for computing a consensus translation from the outputs of multiple machine translation (MT) systems. The outputs are combined and a possibly new translation hypothesis can be generated. Similarly to the well-established ROVER approach of (Fiscus, 1997) for combining speech recognition hypotheses, the consensus translation is computed by voting on a confusion network. To create the confusion network, we produce pairwise word alignments of the original machine translation hypotheses with an enhanced statistical alignment algorithm that explicitly models word reordering. The context of a whole document of translations rather than a single sentence is taken into account to produce the alignment. The proposed alignment and voting approach was evaluated on several machine translation tasks, including a large vocabulary task. The method was also tested in the framework of multi-source and speech translation. On all tasks and conditions, we achieved significant improvements in translation quality, increasing e.g. the BLEU score by as much as 15% relative.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023. This work was also in part funded by the European Union under the integrated project TC-STAR - Technology and Corpora for Speech to Speech Translation (IST-2002-FP6-506738).","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"campbell-2000-cocosda","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/364.pdf","title":"COCOSDA - a Progress Report","abstract":"This paper presents a review of the activities of COCOSDA, the International Committee for the Coordination and Standardisation of Speech Databases and Assessment Techniques for Speech Input\/Output. COCOSDA has a history of innovative actions which spawn national and regional consortia for the cooperative development of speech corpora and for the promotion of research in related topics. COCOSDA has recently undergone a change of organisation in order to meet the developing needs of the speech- and language-processing technologies and this paper summarises those changes.
We would like to thank the ATR Interpreting Telecommunications Research Laboratories in Japan for the use of their internet facilities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yannakoudakis-etal-2017-neural","url":"https:\/\/aclanthology.org\/D17-1297","title":"Neural Sequence-Labelling Models for Grammatical Error Correction","abstract":"We propose an approach to N-best list reranking using neural sequence-labelling models. We train a compositional model for error detection that calculates the probability of each token in a sentence being correct or incorrect, utilising the full sentence as context. Using the error detection model, we then re-rank the N best hypotheses generated by statistical machine translation systems. Our approach achieves state-of-the-art results on error correction for three different datasets, and it has the additional advantage of only using a small set of easily computed features that require no linguistic input.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Special thanks to Christopher Bryant, Mariano Felice, and Ted Briscoe, as well as the anonymous reviewers for their valuable contributions at various stages.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2010-qualia","url":"https:\/\/aclanthology.org\/O10-2010","title":"Qualia Modification in Noun-Noun Compounds: A Cross-Language Survey","abstract":"In analyzing the formation of a given compound, both its internal syntactic structure and semantic relations need to be considered. The Generative Lexicon Theory (GL Theory) provides us with an explanatory model of compounds that captures the qualia modification relations in the semantic composition within a compound, which can be applied to natural language processing tasks. In this paper, we primarily discuss the qualia structure of noun-noun compounds found in Chinese as well as a couple of other languages like German, Spanish, Japanese and Italian. We briefly review the construction of compounds and focus on the noun-noun construction. While analyzing the semantic relationship between the words that compose a compound, we use the GL Theory to demonstrate that the proposed qualia structure enables compositional interpretation within the compound. Besides, we attempt to examine whether or not for each semantic head, its modifier can fit in one of the four quales. Finally, our analysis reveals the potentials and limits of qualia-based treatment of composition of nominal compounds and suggests a path for future work.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"omori-komachi-2019-multi","url":"https:\/\/aclanthology.org\/N19-1344","title":"Multi-Task Learning for Japanese Predicate Argument Structure Analysis","abstract":"An event-noun is a noun that has an argument structure similar to a predicate. 
Recent works, including those considered state-of-the-art, ignore event-nouns or build a single model for solving both Japanese predicate argument structure analysis (PASA) and event-noun argument structure analysis (ENASA). However, because there are interactions between predicates and event-nouns, it is not sufficient to target only predicates. To address this problem, we present a multi-task learning method for PASA and ENASA. Our multi-task models improved the performance of both tasks compared to a single-task model by sharing knowledge from each task. Moreover, in PASA, our models achieved state-of-the-art results in overall F1 scores on the NAIST Text Corpus. In addition, this is the first work to employ neural networks in ENASA.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mueller-etal-2008-knowledge","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/720_paper.pdf","title":"Knowledge Sources for Bridging Resolution in Multi-Party Dialog","abstract":"In this paper we investigate the coverage of the two knowledge sources WordNet and Wikipedia for the task of bridging resolution. We report on an annotation experiment which yielded pairs of bridging anaphors and their antecedents in spoken multi-party dialog. Manual inspection of the two knowledge sources showed that, with some interesting exceptions, Wikipedia is superior to WordNet when it comes to the coverage of information necessary to resolve the bridging anaphors in our data set. We further describe a simple procedure for the automatic extraction of the required knowledge from Wikipedia by means of an API, and discuss some of the implications of the procedure's performance.","label_nlp4sg":1,"task":["Bridging Resolution in Multi - Party Dialog"],"method":["knowledge sources"],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"Acknowledgements. The work described in this paper was partly funded by Deutsche Forschungsgemeinschaft (DFG), Project DIANA-Summ, STR 545\/2-1,2, and by the Klaus Tschira Foundation. We thank our annotators Ganna Syrota and Alessandra Moschetti. We are also grateful to the anonymous LREC reviewers for valuable comments and suggestions.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"ribarov-2000-un","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/200.pdf","title":"The (Un)Deterministic Nature of Morphological Context","abstract":"The aim of this paper is to contribute to the study of the context within natural language processing and to bring in aspects which, I believe, have a direct influence on the interpretation of the success rates and on a more successful design of language models. This work tries to formalize the (ir)regularities, dynamic characteristics, of context using techniques from the field of chaotic and non-linear systems.
The observations are made on the problem of POS tagging.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcdowell-goodman-2019-learning","url":"https:\/\/aclanthology.org\/P19-1059","title":"Learning from Omission","abstract":"Pragmatic reasoning allows humans to go beyond the literal meaning when interpreting language in context. Previous work has shown that such reasoning can improve the performance of already-trained language understanding systems. Here, we explore whether pragmatic reasoning during training can improve the quality of learned meanings. Our experiments on reference game data show that end-to-end pragmatic training produces more accurate utterance interpretation models, especially when data is sparse and language is complex.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Leon Bergen for guidance in setting up the initial versions of the pragmatic learning models, Katherine Hermann for help with some initial experiments on the color reference task, and Sahil Chopra for collecting a small batch of pilot data for the color grid task.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gordon-etal-2015-corpus","url":"https:\/\/aclanthology.org\/W15-1407","title":"A Corpus of Rich Metaphor Annotation","abstract":"Metaphor is a central phenomenon of language, and thus a central problem for natural language understanding. Previous work on the analysis of metaphors has identified which target concepts are being thought of and described in terms of which source concepts, but this is not adequate to explain what motivates the use of particular metaphors. This work proposes the use of conceptual schemas to represent the underspecified scenarios that motivate a metaphoric mapping. To support the creation of systems that can understand metaphors in this way, we have created and are publicly releasing a corpus of manually validated metaphor annotations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Army Research Laboratory contract number W911NF-12-C-0025. The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD\/ARL, or the US Government.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hwa-etal-2006-corpus","url":"https:\/\/aclanthology.org\/2006.amta-papers.9","title":"Corpus Variations for Translation Lexicon Induction","abstract":"Lexical mappings (word translations) between languages are an invaluable resource for multilingual processing.
While the problem of extracting lexical mappings from parallel corpora is well-studied, the task is more challenging when the language samples are from nonparallel corpora. The goal of this work is to investigate one such scenario: finding lexical mappings between dialects of a diglossic language, in which people conduct their written communications in a prestigious formal dialect, but they communicate verbally in a colloquial dialect. Because the two dialects serve different socio-linguistic functions, parallel corpora do not naturally exist between them. An example of a diglossic dialect pair is Modern Standard Arabic (MSA) and Levantine Arabic. In this paper, we evaluate the applicability of a standard algorithm for inducing lexical mappings between comparable corpora (Rapp, 1999) to such diglossic corpora pairs. The focus of the paper is an in-depth error analysis, exploring the notion of relatedness in diglossic corpora and scrutinizing the effects of various dimensions of relatedness (such as mode, topic, style, and statistics) on the quality of the resulting translation lexicon.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is based on work done at the 2005 Human Language Engineering Workshop at Johns Hopkins University, which was partially supported by the National Science Foundation under Grant No. 0121285. We wish to thank the audiences at JHU for helpful discussions and the anonymous reviewers for their comments on the paper.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"graham-etal-2020-assessing","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.375","title":"Assessing Human-Parity in Machine Translation on the Segment Level","abstract":"Recent machine translation shared tasks have shown top-performing systems to tie or in some cases even outperform human translation. Such conclusions about system and human performance are, however, based on estimates aggregated from scores collected over large test sets of translations and so leave some remaining questions unanswered. For instance, simply because a system significantly outperforms the human translator on average may not necessarily mean that it has done so for every translation in the test set. Furthermore, are there remaining source segments present in evaluation test sets that cause significant challenges for top-performing systems and can such challenging segments go unnoticed due to the opacity of current human evaluation procedures? To provide insight into these issues we carefully inspect the outputs of top-performing systems in the recent WMT19 news translation shared task for all language pairs in which a system either tied or outperformed human translation. Our analysis provides a new method of identifying the remaining segments for which either machine or human perform poorly. For example, in our close inspection of WMT19 English to German and German to English we discover the segments that disjointly proved a challenge for human and machine. 
For English to Russian, there were no segments included in our sample of translations that caused a significant challenge for the human translator, while we again identify the set of segments that caused issues for the top-performing system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This study was supported by the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Trinity College Dublin funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) co-funded under the European Regional Development Fund, and has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825299 (Gourmet). We would also like to thank the anonymous reviewers for their feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marty-albisser-1995-integration","url":"https:\/\/aclanthology.org\/1995.mtsummit-1.30","title":"Integration of MT into the business process","abstract":"The integration of machine translation (MT) into the business process should be viewed from an overall perspective thus requiring several factors to be taken into consideration. These include selecting the right product, appointing the right people, restructuring the work flow, and measuring performance to finally achieve the projected productivity gain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nirenburg-etal-2003-operative","url":"https:\/\/aclanthology.org\/W03-0904","title":"Operative strategies in ontological semantics","abstract":"In this paper, we briefly and informally illustrate, using a few annotated examples, the static and dynamic knowledge resources of ontological semantics. We then present the main motivations and desiderata of our approach and then discuss issues related to making ontological-semantic applications feasible through the judicious stepwise enhancement of static and dynamic knowledge sources while at all times maintaining a working system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weimer-etal-2007-automatically","url":"https:\/\/aclanthology.org\/P07-2032","title":"Automatically Assessing the Post Quality in Online Discussions on Software","abstract":"Assessing the quality of user generated content is an important problem for many web forums. While quality is currently assessed manually, we propose an algorithm to assess the quality of forum posts automatically and test it on data provided by Nabble.com. We use state-of-the-art classification techniques and experiment with five feature classes: Surface, Lexical, Syntactic, Forum specific and Similarity features. We achieve an accuracy of 89% on the task of automatically assessing post quality in the software domain using forum specific features. 
Without forum specific features, we achieve an accuracy of 82%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the German Research Foundation as part of the Research Training Group \"Feedback-Based Quality Management in eLearning\" under the grant 1223. We are thankful to Nabble for providing their data.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pyysalo-etal-2011-overview","url":"https:\/\/aclanthology.org\/W11-1804","title":"Overview of the Infectious Diseases (ID) task of BioNLP Shared Task 2011","abstract":"This paper presents the preparation, resources, results and analysis of the Infectious Diseases (ID) information extraction task, a main task of the BioNLP Shared Task 2011. The ID task represents an application and extension of the BioNLP'09 shared task event extraction approach to full papers on infectious diseases. Seven teams submitted final results to the task, with the highest-performing system achieving 56% F-score in the full task, comparable to state-of-the-art performance in the established BioNLP'09 task. The results indicate that event extraction methods generalize well to new domains and full-text publications and are applicable to the extraction of events relevant to the molecular mechanisms of infectious diseases.","label_nlp4sg":1,"task":["Infectious Diseases ( ID ) information extraction"],"method":["analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work was supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan). This project has been funded in whole or in part with Federal funds from the National Institute of Allergy and Infectious Diseases, National Institutes of Health, Department of Health and Human Services, under Contract No. HHSN272200900040C, awarded to BWS Sobral.","year":2011,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sanchez-cartagena-etal-2016-dealing","url":"https:\/\/aclanthology.org\/W16-3421","title":"Dealing with Data Sparseness in SMT with Factured Models and Morphological Expansion: a Case Study on Croatian","abstract":"This paper describes our experience using available linguistic resources for Croatian in order to address data sparseness when building an English-to-Croatian general domain phrase-based statistical machine translation system.
We report the results obtained with factored translation models and morphological expansion, highlight the impact of the algorithm used for tagging the corpora, and show that the improvement brought by these methods is compatible with the application of data selection on out-of-domain parallel corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Research funded by the European Union Seventh Framework Programme FP7\/2007-2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hamazono-etal-2021-unpredictable","url":"https:\/\/aclanthology.org\/2021.paclic-1.23","title":"Unpredictable Attributes in Market Comment Generation","abstract":"There are two types of datasets for data-to-text: one uses raw data obtained in the real world, and the other is constructed artificially for a controlled task. For a manually constructed dataset, an output text can be generated straightforwardly from its paired input data because the dataset is well constructed, without any excess or deficiencies. However, it may not be possible to generate a correct output text from the input data for a dataset constructed with real-world data and text. In such cases, we have to provide additional data, for example, data or text attribute labels, in order to generate the expected output text from the paired input. This paper discusses the importance of additional input labels in data-to-text for real-world data. The content and style of a market comment change depending on its medium, the market situation, and the time of day. However, as the stock price, which is the input data, does not contain any such information, comments cannot be generated appropriately from the data alone. Therefore, we analyse the dataset and provide additional labels, which are unpredictable from the input data, for the appropriate parts of the model. Thus, the accuracy of sentence generation is greatly improved compared to the case without the labels. The result suggests that unpredictable attributes should be given as a part of the input in the training of the text generating model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is based on results obtained from a project JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This work was also supported by JSPS KAKENHI Grant Number JP21J14335. For experiments, computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by National Institute of Advanced Industrial Science and Technology (AIST) was used.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ning-etal-2019-improved","url":"https:\/\/aclanthology.org\/D19-1642","title":"An Improved Neural Baseline for Temporal Relation Extraction","abstract":"Determining temporal relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty of generating large amounts of high-quality training data. Consequently, neural approaches have not been widely used on it, or showed only moderate improvements.
This paper proposes a new neural system that achieves about 10% absolute improvement in accuracy over the previous best system (25% error reduction) on two benchmark datasets. The proposed system is trained on the state-of-the-art MATRES dataset and applies contextualized word embeddings, a Siamese encoder of a temporal common sense knowledge base, and global inference via integer linear programming (ILP). We suggest that the new approach could serve as a strong baseline for future research in this area.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by a grant from the Allen Institute for Artificial Intelligence (allenai.org), the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network, and contract HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saratxaga-etal-2006-designing","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/19_pdf.pdf","title":"Designing and Recording an Emotional Speech Database for Corpus Based Synthesis in Basque","abstract":"This paper describes an emotional speech database recorded for standard Basque. The database has been designed with the twofold purpose of being used for corpus based synthesis, and also of allowing the study of prosodic models for the emotions. The database is thus large, to get good corpus based synthesis quality, and contains the same texts recorded in the six basic emotions plus the neutral style. The recordings were carried out by two professional dubbing actors, a man and a woman. The paper explains the whole creation process, beginning with the design stage, following with the corpus creation and the recording phases, and finishing with some learned lessons and hints.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This database was developed with the financial help of the Basque Government within the SAIOTEK program (SPE04UN24) and of the MEC (TIC2003-08382-C0503). The authors would also like to thank the University of the Basque Country for allowing the use of its recording studio.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ferguson-etal-2018-identifying","url":"https:\/\/aclanthology.org\/D18-1539","title":"Identifying Domain Adjacent Instances for Semantic Parsers","abstract":"When the semantics of a sentence are not representable in a semantic parser's output schema, parsing will inevitably fail. Detection of these instances is commonly treated as an out-of-domain classification problem. However, there is also a more subtle scenario in which the test data is drawn from the same domain. In addition to formalizing this problem of domain-adjacency, we present a comparison of various baselines that could be used to solve it. We also propose a new simple sentence representation that emphasizes words which are unexpected.
This approach improves the performance of a downstream semantic parser run on in-domain and domain-adjacent instances.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"erk-pado-2008-structured","url":"https:\/\/aclanthology.org\/D08-1094","title":"A Structured Vector Space Model for Word Meaning in Context","abstract":"We address the task of computing vector space representations for the meaning of word occurrences, which can vary widely according to context. This task is a crucial step towards a robust, vector-based compositional account of sentence meaning. We argue that existing models for this task do not take syntactic structure sufficiently into account. We present a novel structured vector space model that addresses these issues by incorporating the selectional preferences for words' argument positions. This makes it possible to integrate syntax into the computation of word meaning in context. In addition, the model performs at and above the state of the art for modeling the contextual adequacy of paraphrases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments. Many thanks for helpful discussion to Jason Baldridge, David Beaver, Dedre Gentner, James Hampton, Dan Jurafsky, Alexander Koller, Brad Love, and Ray Mooney.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-etal-2017-inference","url":"https:\/\/aclanthology.org\/W17-2708","title":"Inference of Fine-Grained Event Causality from Blogs and Films","abstract":"Human understanding of narrative is mainly driven by reasoning about causal relations between events and thus recognizing them is a key capability for computational models of language understanding. Computational work in this area has approached this via two different routes: by focusing on acquiring a knowledge base of common causal relations between events, or by attempting to understand a particular story or macro-event, along with its storyline. In this position paper, we focus on the knowledge acquisition approach and claim that newswire is a relatively poor source for learning fine-grained causal relations between everyday events. We describe experiments using an unsupervised method to learn causal relations between events in the narrative genres of first-person narratives and film scene descriptions. We show that our method learns fine-grained causal relations, judged by humans as likely to be causal over 80% of the time. We also demonstrate that the learned event pairs do not exist in publicly available event-pair datasets extracted from newswire.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tadano-etal-2010-multi","url":"https:\/\/aclanthology.org\/Y10-1079","title":"Multi-aspects Review Summarization Based on Identification of Important Opinions and their Similarity","abstract":"The development of Web services has recently made it easy for many users to provide their opinions.
Automatic summarization of these enormous quantities of sentiment is therefore needed. Intuitively, we can summarize a review with traditional document summarization methods. However, such methods do not adequately address \"aspects\". Basically, a review consists of sentiments with various aspects. We summarize reviews for each aspect so that the summary presents information without bias toward a specific topic. In this paper, we propose a method for multi-aspects review summarization based on evaluative sentence extraction. We handle three features: ratings of aspects, the tf-idf value, and the number of mentions with a similar topic. For estimating the number of mentions, we apply a clustering algorithm. By integrating these features, we generate a more appropriate summary. The experimental results show the effectiveness of our method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dodge-petruck-2014-representing","url":"https:\/\/aclanthology.org\/W14-2408","title":"Representing Caused Motion in Embodied Construction Grammar","abstract":"This paper offers an Embodied Construction Grammar (Feldman et al. 2010) representation of caused motion, thereby also providing (a sample of) the computational infrastructure for implementing the information that FrameNet has characterized as Caused_motion (Ruppenhofer et al. 2010). This work specifies the semantic structure of caused motion in natural language, using an Embodied Construction Grammar analyzer that includes the semantic parsing of linguistically instantiated constructions. Results from this type of analysis can serve as the input to NLP applications that require rich semantic representations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research requires the specification of appropriate constraints on the fillers of the roles that will facilitate distinguishing between the literal and the metaphorical.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beelen-etal-2021-time","url":"https:\/\/aclanthology.org\/2021.findings-acl.243","title":"When Time Makes Sense: A Historically-Aware Approach to Targeted Sense Disambiguation","abstract":"In this paper we will refer to lemmas or tokens in italics, their senses in single quotes and full definitions in double quotes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Work for this paper was produced as part of Living with Machines. This project, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC grant AH\/S01179X\/1), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London.
This work was also supported by The Alan Turing Institute under the EPSRC grant EP\/N510129\/1.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sporleder-lapata-2005-discourse","url":"https:\/\/aclanthology.org\/H05-1033","title":"Discourse Chunking and its Application to Sentence Compression","abstract":"In this paper we consider the problem of analysing sentence-level discourse structure. We introduce discourse chunking (i.e., the identification of intra-sentential nucleus and satellite spans) as an alternative to full-scale discourse parsing. Our experiments show that the proposed modelling approach yields results comparable to state-of-the-art while exploiting knowledge-lean features and small amounts of discourse annotations. We also demonstrate how discourse chunking can be successfully applied to a sentence compression task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of EPSRC (Sporleder, grant GR\/R40036\/01; Lapata, grant GR\/T04540\/01). Thanks to Amit Dubey, Ben Hutchinson, Alex Lascarides, Simone Teufel, and three anonymous reviewers for helpful comments and suggestions.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arslan-etal-2020-modeling","url":"https:\/\/aclanthology.org\/2020.lrec-1.306","title":"Modeling Factual Claims with Semantic Frames","abstract":"In this paper, we introduce an extension of the Berkeley FrameNet for the structured and semantic modeling of factual claims. Modeling is a robust tool that can be leveraged in many different tasks such as matching claims to existing fact-checks and translating claims to structured queries. Our work introduces 11 new manually crafted frames along with 9 existing FrameNet frames, all of which have been selected with fact-checking in mind. Along with these frames, we are also providing 2,540 fully annotated sentences, which can be used to understand how these frames are intended to work and to train machine learning models. Finally, we are also releasing our annotation tool to enable other researchers to make their own local extensions to FrameNet.","label_nlp4sg":1,"task":["Modeling Factual Claims"],"method":["Semantic Frames","annotation tool"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"avramidis-etal-2015-dfkis","url":"https:\/\/aclanthology.org\/W15-3004","title":"DFKI's experimental hybrid MT system for WMT 2015","abstract":"DFKI participated in the shared translation task of WMT 2015 with the German-English language pair in each translation direction. The submissions were generated using an experimental hybrid system based on three systems: a statistical Moses system, a commercial rule-based system, and a serial coupling of the two where the output of the rule-based system is further translated by Moses trained on parallel text consisting of the rule-based output and the original target language.
The outputs of the three systems are combined using two methods: (a) an empirical selection mechanism based on grammatical features (primary submission) and (b) IBM1 models based on POS 4-grams (contrastive submission).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 610516 (QTLeap: Quality Translation by Deep Language Engineering Approaches). We are grateful to the anonymous reviewers for their valuable feedback.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loukas-etal-2022-finer","url":"https:\/\/aclanthology.org\/2022.acl-long.303","title":"FiNER: Financial Numeric Entity Recognition for XBRL Tagging","abstract":"Publicly traded companies are required to submit periodic reports with eXtensive Business Reporting Language (xbrl) word-level tags. Manually tagging the reports is tedious and costly. We, therefore, introduce xbrl tagging as a new entity extraction task for the financial domain and release finer-139, a dataset of 1.1M sentences with gold xbrl tags. Unlike typical entity extraction datasets, finer-139 uses a much larger label set of 139 entity types. Most annotated tokens are numeric, with the correct tag per token depending mostly on context, rather than the token itself. We show that subword fragmentation of numeric expressions harms bert's performance, allowing word-level bilstms to perform better. To improve bert's performance, we propose two simple and effective solutions that replace numeric expressions with pseudotokens reflecting original token shapes and numeric magnitudes. We also experiment with fin-bert, an existing bert model for the financial domain, and release our own bert (sec-bert), pre-trained on financial filings, which performs best. Through data and error analysis, we finally identify possible limitations to inspire future work on xbrl tagging.\nDataset | Domain | Entity Types\nconll-2003 | Generic | 4\nontonotes-v5 | Generic | 18\nace-2005 | Generic | 7\ngenia | Biomedical | 36\nChalkidis et al. (2019) | Legal | 14\nFrancis et al. (2019) | Financial | 9\nfiner-139 (ours) | Financial | 139","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hickl-harabagiu-2010-unsupervised","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/396_Paper.pdf","title":"Unsupervised Discovery of Collective Action Frames for Socio-Cultural Analysis","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hsueh-etal-2006-automatic","url":"https:\/\/aclanthology.org\/E06-1035","title":"Automatic Segmentation of Multiparty Dialogue","abstract":"In this paper, we investigate the problem of automatically predicting segment boundaries in spoken multiparty dialogue. We extend prior work in two ways.
We first apply approaches that have been proposed for predicting top-level topic shifts to the problem of identifying subtopic boundaries. We then explore the impact on performance of using ASR output as opposed to human transcription. Examination of the effect of features shows that predicting top-level and predicting subtopic boundaries are two distinct tasks: (1) for predicting subtopic boundaries, the lexical cohesion-based approach alone can achieve competitive results, (2) for predicting top-level boundaries, the machine learning approach that combines lexical-cohesion and conversational features performs best, and (3) conversational cues, such as cue phrases and overlapping speech, are better indicators for the top-level prediction task. We also find that the transcription errors inevitable in ASR output have a negative impact on models that combine lexical-cohesion and conversational features, but do not change the general preference of approach for the two tasks.","label_nlp4sg":1,"task":["Automatic Segmentation of Multiparty Dialogue"],"method":["ASR"],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"Many thanks to Jean Carletta for her invaluable help in managing the data, and for advice and comments on the work reported in this paper. Thanks also to the AMI ASR group for producing the ASR transcriptions, and to the anonymous reviewers for their helpful comments. This work was supported by the European Union 6th FWP IST Integrated Project AMI (Augmented Multiparty Interaction, FP6-506811).","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"hoshino-nakagawa-2005-webexperimenter","url":"https:\/\/aclanthology.org\/H05-2010","title":"WebExperimenter for Multiple-Choice Question Generation","abstract":"Automatic generation of multiple-choice questions is an emerging topic in application of natural language processing. Particularly, applying it to language testing has been proved to be useful (Sumita et al., 2005).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hayashi-nagata-2017-k","url":"https:\/\/aclanthology.org\/E17-2049","title":"K-best Iterative Viterbi Parsing","abstract":"This paper presents an efficient and optimal parsing algorithm for probabilistic context-free grammars (PCFGs). To achieve faster parsing, our proposal employs a pruning technique to reduce unnecessary edges in the search space. The key is to repetitively conduct Viterbi inside and outside parsing, while gradually expanding the search space to efficiently compute heuristic bounds used for pruning. This paper also shows how to extend this algorithm to extract K-best Viterbi trees. Our experimental results show that the proposed algorithm is faster than the standard CKY parsing algorithm. Moreover, its K-best version is much faster than the Lazy K-best algorithm when K is small.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. 
This work was supported in part by JSPS KAKENHI Grant Number 26730126.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"harper-etal-2021-semeval","url":"https:\/\/aclanthology.org\/2021.semeval-1.38","title":"SemEval-2021 Task 8: MeasEval -- Extracting Counts and Measurements and their Related Contexts","abstract":"We describe MeasEval, a SemEval task of extracting counts, measurements, and related context from scientific documents, which is of significant importance to the creation of Knowledge Graphs that distill information from the scientific literature. This is a new task in 2021, for which over 75 submissions from 25 participants were received. We expect the data developed for this task and the findings reported to be valuable to the scientific knowledge extraction, metrology, and automated knowledge base construction communities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Darin McBeath and Pierre-Yves Vandenbussche for help with annotations and annotation rules. We also thank Elsevier's Discovery Lab team for their feedback on this work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sheremetyeva-2002-mt","url":"https:\/\/aclanthology.org\/2002.eamt-1.9","title":"An MT learning environment for computational linguistics students","abstract":"This paper discusses the issue of suitability of software used for the teaching of Machine Translation. It considers requirements for such software, and describes a set of tools that have initially been created as the developer environment of an APTrans MT system but can easily be included in the learning environment for MT training. The tools are user-friendly and feature modularity and reusability.","label_nlp4sg":1,"task":["teaching of Machine Translation"],"method":["learning environment"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"Acknowledgments. Thanks to Victor Raskin and Katrina Triezenberg for their contribution to the presentation of this paper.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alexa-etal-2018-dabblers","url":"https:\/\/aclanthology.org\/S18-1062","title":"The Dabblers at SemEval-2018 Task 2: Multilingual Emoji Prediction","abstract":"The \"Multilingual Emoji Prediction\" task focuses on the ability of predicting the corresponding emoji for a certain tweet. In this paper, we investigate the relation between words and emojis. In order to do that, we used supervised machine learning (Naive Bayes) and deep learning (Recursive Neural Network).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This survey was published with the support of two grants of the Romanian National Authority for Scientific Research and Innovation, UEFISCDI, project number PN-III-P2-2. 
","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"steen-markert-2021-evaluate","url":"https:\/\/aclanthology.org\/2021.eacl-main.160","title":"How to Evaluate a Summarizer: Study Design and Statistical Analysis for Manual Linguistic Quality Evaluation","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iwamoto-etal-2021-universal","url":"https:\/\/aclanthology.org\/2021.sigtyp-1.3","title":"A Universal Dependencies Corpora Maintenance Methodology Using Downstream Application","abstract":"This paper investigates updates of Universal Dependencies (UD) treebanks in 23 languages and their impact on a downstream application. Numerous people are involved in updating UD's annotation guidelines and treebanks in various languages. However, it is not easy to verify whether the updated resources maintain universality with other language resources. Thus, validity and consistency of multilingual corpora should be tested through application tasks involving syntactic structures with PoS tags, dependency labels, and universal features. We apply the syntactic parsers trained on UD treebanks from multiple versions (2.0 to 2.7) to a clause-level sentiment extractor. We then analyze the relationships between attachment scores of dependency parsers and performance in application tasks. For future UD developments, we show examples of outputs that differ depending on version.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sarkar-2001-applying","url":"https:\/\/aclanthology.org\/N01-1023","title":"Applying Co-Training Methods to Statistical Parsing","abstract":"We propose a novel Co-Training method for statistical parsing. The algorithm takes as input a small corpus (9695 sentences) annotated with parse trees, a dictionary of possible lexicalized structures for each word in the training set and a large pool of unlabeled text. The algorithm iteratively labels the entire data set with parse trees. Using empirical results based on parsing the Wall Street Journal corpus we show that training a statistical parser on the combined labeled and unlabeled data strongly outperforms training only on the labeled data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"palogiannidi-etal-2016-tweester","url":"https:\/\/aclanthology.org\/S16-1023","title":"Tweester at SemEval-2016 Task 4: Sentiment Analysis in Twitter Using Semantic-Affective Model Adaptation","abstract":"We describe our submission to SemEval2016 Task 4: Sentiment Analysis in Twitter. The proposed system ranked first for the subtask B. Our system comprises of multiple independent models such as neural networks, semantic-affective models and topic modeling that are combined in a probabilistic way. 
The novelty of the system is the employment of a topic modeling approach in order to adapt the semantic-affective space for each tweet. In addition, significant enhancements were made in the main system dealing with the data preprocessing and feature extraction including the employment of word embeddings. Each model is used to predict a tweet's sentiment (positive, negative or neutral) and a late fusion scheme is adopted for the final decision.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements: Elisavet Palogiannidi, Elias Iosif and Alexandros Potamianos were partially funded by the SpeDial project supported by the EU Seventh Framework Programme (FP7), grant number 611396 and the BabyRobot project supported by the EU Horizon 2020 Programme, grant number: 687831.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tsai-etal-2022-superb","url":"https:\/\/aclanthology.org\/2022.acl-long.580","title":"SUPERB-SG: Enhanced Speech processing Universal PERformance Benchmark for Semantic and Generative Capabilities","abstract":"Transfer learning has proven to be crucial in advancing the state of speech and natural language processing research in recent years. In speech, a model pre-trained by self-supervised learning transfers remarkably well on multiple tasks. However, the lack of a consistent evaluation methodology is limiting towards a holistic understanding of the efficacy of such models. SUPERB was a step towards introducing a common benchmark to evaluate pretrained models across various speech tasks. In this paper, we introduce SUPERB-SG, a new benchmark focused on evaluating the semantic and generative capabilities of pre-trained models by increasing task diversity and difficulty over SUPERB. We use a lightweight methodology to test the robustness of representations learned by pre-trained models under shifts in data domain and quality across different types of tasks. It entails freezing pretrained model parameters, only using simple task-specific trainable heads. The goal is to be inclusive of all researchers, and encourage efficient use of computational resources. We also show that the task diversity of SUPERB-SG coupled with limited task supervision is an effective recipe for evaluating the generalizability of model representation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cavalin-etal-2020-disjoint","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.61","title":"From Disjoint Sets to Parallel Data to Train Seq2Seq Models for Sentiment Transfer","abstract":"We present a method for creating parallel data to train Seq2Seq neural networks for sentiment transfer. Most systems for this task, which can be viewed as monolingual machine translation (MT), have relied on unsupervised methods, such as Generative Adversarial Networks (GANs)-inspired approaches, for coping with the lack of parallel corpora. 
Given that the literature shows that Seq2Seq methods have been consistently outperforming unsupervised methods in MT-related tasks, in this work we exploit the use of semantic similarity computation for converting non-parallel data into a parallel corpus. That allows us to train a transformer neural network for the sentiment transfer task, and compare its performance against unsupervised approaches. With experiments conducted on two well-known public datasets, i.e. Yelp and Amazon, we demonstrate that the proposed methodology outperforms existing unsupervised methods very consistently in fluency, and presents competitive results in terms of sentiment conversion and content preservation. We believe that this work opens up an opportunity for seq2seq neural networks to be better exploited in problems for which they have not been applied owing to the lack of parallel training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mendez-etal-2018-adapting","url":"https:\/\/aclanthology.org\/W18-6540","title":"Adapting Descriptions of People to the Point of View of a Moving Observer","abstract":"This paper addresses the task of generating descriptions of people for an observer that is moving within a scene. As the observer moves, the descriptions of the people around him also change. A referring expression generation algorithm adapted to this task needs to continuously monitor the changes in the field of view of the observer, his relative position to the people being described, and the relative position of these people to any landmarks around them, and to take these changes into account in the referring expressions generated. This task presents two advantages: many of the mechanisms already available for static contexts may be applied with small adaptations, and it introduces the concept of changing conditions into the task of referring expression generation. In this paper we describe the design of an algorithm that takes these aspects into account in order to create descriptions of people within a 3D virtual environment. The evaluation of this algorithm has shown that, by changing the descriptions in real time according to the observer's point of view, they are able to identify the described person quickly and effectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work presented in this paper has been partially funded by the projects IDiLyCo ","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kruijff-korbayova-kruijff-2004-discourse","url":"https:\/\/aclanthology.org\/W04-0206","title":"Discourse-level Annotation for Investigating Information Structure","abstract":"We present discourse-level annotation of newspaper texts in German and English, as part of an ongoing project aimed at investigating information structure from a cross-linguistic perspective. Rather than annotating some specific notion of information structure, we propose a theory-neutral annotation of basic features at the levels of syntax, prosody and discourse, using treebank data as a starting point. 
Our discourse-level annotation scheme covers properties of discourse referents (e.g., semantic sort, delimitation, quantification, familiarity status) and anaphoric links (coreference and bridging). We illustrate what investigations this data serves and discuss some integration issues involved in combining different levels of stand-off annotations, created by using different tools.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Saarland University for funding the MULI pilot project. Thanks also to Stella Neumann, Erich Steiner, Elke Teich, Stefan Baumann, Caren Brinckmann, Silvia Hansen-Schirra and Hans Uszkoreit for discussions.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"riedel-meza-ruiz-2008-collective","url":"https:\/\/aclanthology.org\/W08-2125","title":"Collective Semantic Role Labelling with Markov Logic","abstract":"This paper presents our system for the Open Track of the CoNLL 2008 Shared Task (Surdeanu et al., 2008) in Joint Dependency Parsing and Semantic Role Labelling. We use Markov Logic to define a joint SRL model and achieve a semantic F-score of 74.59%, the second best in the Open Track.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pimentel-etal-2021-finding","url":"https:\/\/aclanthology.org\/2021.naacl-main.349","title":"Finding Concept-specific Biases in Form--Meaning Associations","abstract":"This work presents an information-theoretic operationalisation of cross-linguistic non-arbitrariness. It is not a new idea that there are small, cross-linguistic associations between the forms and meanings of words. For instance, it has been claimed (Blasi et al., 2016) that the word for TONGUE is more likely than chance to contain the phone [l]. By controlling for the influence of language family and geographic proximity within a very large concept-aligned, cross-lingual lexicon, we extend methods previously used to detect within language non-arbitrariness (Pimentel et al., 2019) to measure cross-linguistic associations. We find that there is a significant effect of non-arbitrariness, but it is unsurprisingly small (less than 0.5% on average according to our information-theoretic estimate). We also provide a concept-level analysis which shows that a quarter of the concepts considered in our work exhibit a significant level of cross-linguistic non-arbitrariness. In sum, the paper provides new methods to detect cross-linguistic associations at scale, and confirms their effects are minor.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"S\u00f8ren Wichmann's research was partly funded by a subsidy from the Russian government to support the Programme of Competitive Development of Kazan Federal University, Russia. Dami\u00e1n E. Blasi acknowledges funding from the Branco Weiss Fellowship, administered by the ETH Z\u00fcrich. Dami\u00e1n E. 
Blasi's research was also executed within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project '5-100'.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"patra-moniz-2019-weakly","url":"https:\/\/aclanthology.org\/D19-1652","title":"Weakly Supervised Attention Networks for Entity Recognition","abstract":"The task of entity recognition has traditionally been modelled as a sequence labelling task. However, this usually requires a large amount of fine-grained data annotated at the token level, which in turn can be expensive and cumbersome to obtain. In this work, we aim to circumvent this requirement of word-level annotated data. To achieve this, we propose a novel architecture for entity recognition from a corpus containing weak binary presence\/absence labels, which are relatively easier to obtain. We show that our proposed weakly supervised model, trained solely on a multi-label classification task, performs reasonably well on the task of entity recognition, despite not having access to any token-level ground truth data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their invaluable feedback, which helped shape the paper into its current form. We would also like to thank Matthew R. Gormley for the helpful discussions on the topic.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gulordava-merlo-2015-diachronic","url":"https:\/\/aclanthology.org\/W15-2115","title":"Diachronic Trends in Word Order Freedom and Dependency Length in Dependency-Annotated Corpora of Latin and Ancient Greek","abstract":"One easily observable aspect of language variation is the order of words. In human and machine natural language processing, it is often claimed that parsing free-order languages is more difficult than parsing fixed-order languages. In this study on Latin and Ancient Greek, two well-known and well-documented free-order languages, we propose syntactic correlates of word order freedom. We apply our indicators to a collection of dependency-annotated texts of different time periods. On the one hand, we confirm a trend towards more fixed-order patterns in time. On the other hand, we show that a dependency-based measure of the flexibility of word order is correlated with the parsing performance on these languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the partial funding of this work by the Swiss National Science Foundation, under grant 144362. We thank Lieven Danckaert and S\u00e9verine Nasel for pointing relevant Latin and Ancient Greek references to us.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vilnis-etal-2018-probabilistic","url":"https:\/\/aclanthology.org\/P18-1025","title":"Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures","abstract":"Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. 
entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Alice Lai for making the code from her original paper public, and for providing the additional unseen pairs and unseen words data. We also thank Haw-Shiuan Chang, Laurent Dinh, and Ben Poole for helpful discussions. We also thank the anonymous reviewers for their constructive feedback. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, and in part by the National Science Foundation under Grant No. IIS-1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nguyen-etal-2009-semi","url":"https:\/\/aclanthology.org\/R09-1057","title":"A Semi-supervised Approach for Generating a Table-of-Contents","abstract":"This paper presents a semi-supervised model for generating a table-of-contents as an indicative summarization. We mainly focus on using word cluster-based information derived from a large amount of unannotated data by an unsupervised algorithm. We integrate word cluster-based features into a discriminative structured learning model, and show that our approach not only increases the quality of the resulting table-of-contents, but also reduces the number of iterations in the training process. In the experiments, our model shows better results than the baseline model in generating a table-of-contents, about 6.5% improvement in terms of averaged ROUGE-L score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"osborne-2000-shallow","url":"https:\/\/aclanthology.org\/W00-0731","title":"Shallow Parsing as Part-of-Speech Tagging","abstract":"Treating shallow parsing as part-of-speech tagging yields results comparable with other, more elaborate approaches. 
Using the CoNLL 2000 training and testing material, our best model had an accuracy of 94.88%, with an overall FB1 score of 91.94%. The individual FB1 scores for NPs were 92.19%, VPs 92.70% and PPs 96.69%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Erik Tjong Kim Sang for supplying the evaluation code, and Donnla Nic Gearailt for dictating over the telephone, and from the top-of-her-head, a Perl program to help extract wrongly labelled sentences from the results.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"crego-etal-2005-talp","url":"https:\/\/aclanthology.org\/2005.iwslt-1.25","title":"The TALP Ngram-based SMT System for IWSLT'05","abstract":"This paper provides a description of TALP-Ngram, the tuple-based statistical machine translation system developed at the TALP Research Center of the UPC (Universitat Polit\u00e8cnica de Catalunya). Briefly, the system performs a log-linear combination of a translation model and additional feature functions. The translation model is estimated as an N-gram of bilingual units called tuples, and the feature functions include a target language model, a word penalty, and lexical features, depending on the language pair and task. The paper describes the participation of the system in the second international workshop on spoken language translation (IWSLT) held in Pittsburgh, October 2005. Results on Chinese-to-English and Arabic-to-English tracks using supplied data are reported.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by the European Union under the integrated project TC-STAR - Technology and Corpora for Speech to Speech Translation - (IST-2002-FP6-506738, http:\/\/www.tc-star.org), by the Spanish Government under grant TIC2002-04447-C02 (ALIADO project), by the Dep. of Universities, Research and Information Society (Generalitat de Catalunya) and by the Universitat Polit\u00e8cnica de Catalunya under grant UPC-RECERCA. The authors want to thank Marta Ruiz Costa-juss\u00e0 (member of the TALP Research Center) for her valuable contribution to the comparison with the TALP-Phrase system.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"darwish-mubarak-2016-farasa","url":"https:\/\/aclanthology.org\/L16-1170","title":"Farasa: A New Fast and Accurate Arabic Word Segmenter","abstract":"In this paper, we present Farasa (meaning insight in Arabic), which is a fast and accurate Arabic segmenter. Segmentation involves breaking Arabic words into their constituent clitics. Our approach is based on SVM rank using linear kernels. The features that we utilized account for: likelihood of stems, prefixes, suffixes, and their combination; presence in lexicons containing valid stems and named entities; and underlying stem templates. Farasa outperforms or equalizes state-of-the-art Arabic segmenters, namely QATARA and MADAMIRA. Meanwhile, Farasa is nearly one order of magnitude faster than QATARA and two orders of magnitude faster than MADAMIRA. The segmenter should be able to process one billion words in less than 5 hours. 
Farasa is written entirely in native Java, with no external dependencies, and is open-source.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"milward-1995-incremental","url":"https:\/\/aclanthology.org\/E95-1017","title":"Incremental Interpretation of Categorial Grammar","abstract":"The paper describes a parser for Categorial Grammar which provides fully word by word incremental interpretation. The parser does not require fragments of sentences to form constituents, and thereby avoids problems of spurious ambiguity. The paper includes a brief discussion of the relationship between basic Categorial Grammar and other formalisms such as HPSG, Dependency Grammar and the Lambek Calculus. It also includes a discussion of some of the issues which arise when parsing lexicalised grammars, and the possibilities for using statistical techniques for tuning to particular languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"he-etal-2016-deep-reinforcement","url":"https:\/\/aclanthology.org\/D16-1189","title":"Deep Reinforcement Learning with a Combinatorial Action Space for Predicting Popular Reddit Threads","abstract":"We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-chen-2008-ranking","url":"https:\/\/aclanthology.org\/D08-1015","title":"Ranking Reader Emotions Using Pairwise Loss Minimization and Emotional Distribution Regression","abstract":"This paper presents two approaches to ranking reader emotions of documents. Past studies assign a document to a single emotion category, so their methods cannot be applied directly to the emotion ranking problem. Furthermore, whereas previous research analyzes emotions from the writer's perspective, this work examines readers' emotional states. The first approach proposed in this paper minimizes pairwise ranking errors. In the second approach, regression is used to model emotional distributions. 
Experiment results show that the regression method is more effective at identifying the most popular emotion, but the pairwise loss minimization method produces ranked lists of emotions that have better correlations with the correct lists.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the Computer and Information Networking Center, National Taiwan University, for the support of high-performance computing facilities. The research in this paper was partially supported by National Science Council, Taiwan, under the contract NSC 96-2628-E-002-240-MY3.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dick-etal-2020-humoraac","url":"https:\/\/aclanthology.org\/2020.semeval-1.133","title":"HumorAAC at SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines through Regression and Trump Mentions","abstract":"In this paper we describe the HumorAAC system, our contribution to the Semeval-2020 Humor Assessment task. We essentially use three different features that are passed into a ridge regression to determine a funniness score for an edited news headline: statistical, count-based features, semantic features and contextual information. For deciding which one of two given edited headlines is funnier, we additionally use scoring information and logistic regression. Our work was mostly concentrated on investigating features, rather than improving prediction based on pre-trained language models. The resulting system is task-specific, lightweight and performs above the majority baseline. Our experiments indicate that features related to socio-cultural context, in our case mentions of Donald Trump, generally perform better than context-independent features like headline length.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zavareh-etal-2013-error","url":"https:\/\/aclanthology.org\/U13-1014","title":"Error Detection in Automatic Speech Recognition","abstract":"We offer a supervised machine learning approach for recognizing erroneous words in the output of a speech recognizer. We have investigated several sets of features combined with two word configurations, and compared the performance of two classifiers: Decision Trees and Na\u00efve Bayes. Evaluation was performed on a corpus of 400 spoken referring expressions, with Decision Trees yielding a high recognition accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by grants DP110100500 and DP120100103 from the Australian Research Council. 
The authors thank Masud Moshtaghi for his help with statistical issues.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"magerman-1995-statistical","url":"https:\/\/aclanthology.org\/P95-1037","title":"Statistical Decision-Tree Models for Parsing","abstract":"Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too complex and detailed to develop manually for most interesting domains; (2) parsing models must rely heavily on lexical and contextual information to analyze sentences accurately; and (3) existing n-gram modeling techniques are inadequate for parsing models. In experiments comparing SPATTER with IBM's computer manuals parser, SPATTER significantly outperforms the grammar-based parser. Evaluating SPATTER against the Penn Treebank Wall Street Journal corpus using the PARSEVAL measures, SPATTER achieves 86% precision, 86% recall, and 1.3 crossing brackets per sentence for sentences of 40 words or less, and 91% precision, 90% recall, and 0.5 crossing brackets for sentences between 10 and 20 words in length.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vilar-etal-2008-analysing","url":"https:\/\/aclanthology.org\/2008.iwslt-papers.7","title":"Analysing soft syntax features and heuristics for hierarchical phrase based machine translation.","abstract":"Similar to phrase-based machine translation, hierarchical systems produce a large proportion of phrases, most of which are supposedly junk and useless for the actual translation. For the hierarchical case, however, the amount of extracted rules is an order of magnitude bigger. In this paper, we investigate several soft constraints in the extraction of hierarchical phrases and whether these help as additional scores in the decoding to prune unneeded phrases. We show the methods that help best.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Niklas Hoppe for his help in conducting the experiments. This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG) under the project \"Statistische Text\u00fcbersetzung\" (NE 572\/5q). This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023. 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"takeno-etal-2016-integrating","url":"https:\/\/aclanthology.org\/W16-4615","title":"Integrating empty category detection into preordering Machine Translation","abstract":"We propose a method for integrating Japanese empty category detection into the preordering process of Japanese-to-English statistical machine translation. First, we apply machine-learning-based empty category detection to estimate the position and the type of empty categories in the constituent tree of the source sentence. Then, we apply discriminative preordering to the augmented constituent tree in which empty categories are treated as if they are normal lexical symbols. We find that it is effective to filter empty categories based on the confidence of estimation. Our experiments show that, for the IWSLT dataset consisting of short travel conversations, the insertion of empty categories alone improves the BLEU score from 33.2 to 34.3 and the RIBES score from 76.3 to 78.7, which implies that reordering has improved. For the KFTT dataset consisting of Wikipedia sentences, the proposed preordering method considering empty categories improves the BLEU score from 19.9 to 20.2 and the RIBES score from 66.2 to 66.3, which shows both translation and reordering have improved slightly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chan-etal-2017-semi","url":"https:\/\/aclanthology.org\/W17-1726","title":"Semi-Automated Resolution of Inconsistency for a Harmonized Multiword Expression and Dependency Parse Annotation","abstract":"This paper presents a methodology for identifying and resolving various kinds of inconsistency in the context of merging dependency and multiword expression (MWE) annotations, to generate a dependency treebank with comprehensive MWE annotations. Candidates for correction are identified using a variety of heuristics, including an entirely novel one which identifies violations of MWE constituency in the dependency tree, and resolved by arbitration with minimal human intervention. Using this technique, we identified and corrected several hundred errors across both parse and MWE annotations, representing changes to a significant percentage (well over 10%) of the MWE instances in the joint corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roemmele-2019-identifying","url":"https:\/\/aclanthology.org\/W19-2406","title":"Identifying Sensible Lexical Relations in Generated Stories","abstract":"As with many text generation tasks, the focus of recent progress on story generation has been in producing texts that are perceived to \"make sense\" as a whole. There are few automated metrics that address this dimension of story quality even on a shallow lexical level. 
To initiate investigation into such metrics, we apply a simple approach to identifying word relations that contribute to the 'narrative sense' of a story. We use this approach to comparatively analyze the output of a few notable story generation systems in terms of these relations. We characterize differences in the distributions of relations according to their strength within each story.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"voss-ehlen-2007-calo","url":"https:\/\/aclanthology.org\/N07-4009","title":"The CALO Meeting Assistant","abstract":"The CALO Meeting Assistant is an integrated, multimodal meeting assistant technology that captures speech, gestures, and multimodal data from multiparty interactions during meetings, and uses machine learning and robust discourse processing to provide a rich, browsable record of a meeting.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"west-etal-2019-bottlesum","url":"https:\/\/aclanthology.org\/D19-1389","title":"BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle","abstract":"The principle of the Information Bottleneck (Tishby et al., 1999) is to produce a summary of information X optimized to predict some other relevant information Y. In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, our approach seeks a compressed sentence that can best predict the next sentence. Our iterative algorithm under the Information Bottleneck objective searches gradually shorter subsequences of the given sentence while maximizing the probability of the next sentence conditioned on the summary. Using only pretrained language models with no direct supervision, our approach can efficiently perform extractive sentence summarization over a large corpus. Building on our unsupervised extractive summarization (BottleSum Ex), we then present a new approach to self-supervised abstractive summarization (BottleSum Self), where a transformer-based language model is trained on the output summaries of our unsupervised method. Empirical results demonstrate that our extractive method outperforms other unsupervised models on multiple automatic metrics. In addition, we find that our self-supervised abstractive model outperforms unsupervised baselines (including our own) by human evaluation along multiple attributes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank anonymous reviewers for many helpful comments. 
This research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) (funding reference number 401233309), NSF (IIS-1524371), DARPA CwC through ARO (W911NF15-1-0543), Darpa MCS program N66001-19-2-4031 through NIWC Pacific (N66001-19-2-4031), Samsung AI Research, and Allen Institute for AI.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ananiadou-etal-2010-evaluating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/121_Paper.pdf","title":"Evaluating a Text Mining Based Educational Search Portal","abstract":"In this paper, we present the main features of a text mining based search engine for the UK Educational Evidence Portal available at the UK National Centre for Text Mining (NaCTeM), together with a user-centred framework for the evaluation of the search engine. The framework is adapted from an existing proposal by the ISLE (EAGLES) Evaluation Working group. We introduce the metrics employed for the evaluation, and explain how these relate to the text mining based search engine. Following this, we describe how we applied the framework to the evaluation of a number of key text mining features of the search engine, namely the automatic clustering of search results, classification of search results according to a taxonomy, and identification of topics and other documents that are related to a chosen document. Finally, we present the results of the evaluation in terms of the strengths, weaknesses and improvements identified for each of these features.","label_nlp4sg":1,"task":["evaluation of the search engine"],"method":["metrics"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"The work described has been carried out as part of the ASSIST project, which was funded by JISC. The project and its evaluation was also supported by the Educational Evidence Portal Development Group. We would like to thank Davy Weissenbacher, Paul Thompson, Brian Rea, Yutaka Sasaki, Bill Black (NaCTeM) and Ruth Stewart and Claire Stansfield (EPPI-Centre) for their valuable contributions to the work described in this paper.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eisner-1996-efficient","url":"https:\/\/aclanthology.org\/P96-1011","title":"Efficient Normal-Form Parsing for Combinatory Categorial Grammar","abstract":"Under categorial grammars that have powerful rules like composition, a simple n-word sentence can have exponentially many parses. Generating all parses is inefficient and obscures whatever true semantic ambiguities are in the input. This paper addresses the problem for a fairly general form of Combinatory Categorial Grammar, by means of an efficient, correct, and easy to implement normal-form parsing technique. 
The parser is proved to find exactly one parse in each semantic equivalence class of allowable parses; that is, spurious ambiguity (as carefully defined) is shown to be both safely and completely eliminated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oliver-mikelenic-2020-resipc","url":"https:\/\/aclanthology.org\/2020.lrec-1.869","title":"ReSiPC: a Tool for Complex Searches in Parallel Corpora","abstract":"In this paper, a tool specifically designed to allow for complex searches in large parallel corpora is presented. The formalism for the queries is very powerful as it uses standard regular expressions that allow for complex queries combining word forms, lemmata and POS-tags. As queries are performed over POS-tags, at least one of the languages in the parallel corpus should be POS-tagged. Searches can be performed in one of the languages or in both languages at the same time. The program is able to POS-tag the corpora using the Freeling analyzer through its Python API. ReSiPC is developed in Python version 3 and it is distributed under a free license (GNU GPL). The tool can be used to provide data for contrastive linguistics research and an example of use in a Spanish-Croatian parallel corpus is presented. ReSiPC is designed for queries in POS-tagged corpora, but it can be easily adapted for querying corpora containing other kinds of information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bedrick-etal-2012-robust","url":"https:\/\/aclanthology.org\/W12-2107","title":"Robust kaomoji detection in Twitter","abstract":"In this paper, we look at the problem of robust detection of a very productive class of Asian style emoticons, known as facemarks or kaomoji. We demonstrate the frequency and productivity of these sequences in social media such as Twitter. Previous approaches to detection and analysis of kaomoji have placed limits on the range of phenomena that could be detected with their method, and have looked at largely monolingual evaluation sets (e.g., Japanese blogs). We find that these emoticons occur broadly in many languages, hence our approach is language agnostic. Rather than relying on regular expressions over a predefined set of likely tokens, we build weighted context-free grammars that reward graphical affinity and symmetry within whatever symbols are used to construct the emoticon.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rozovskaya-roth-2019-grammar","url":"https:\/\/aclanthology.org\/Q19-1001","title":"Grammar Error Correction in Morphologically Rich Languages: The Case of Russian","abstract":"Until now, most of the research in grammar error correction focused on English, and the problem has hardly been explored for other languages. We address the task of correcting writing mistakes in morphologically rich languages, with a focus on Russian. 
We present a corrected and error-tagged corpus of Russian learner writing and develop models that make use of existing state-of-the-art methods that have been well studied for English. Although impressive results have recently been achieved for grammar error correction of non-native English writing, these results are limited to domains where plentiful training data are available. Because annotation is extremely costly, these approaches are not suitable for the majority of domains and languages. We thus focus on methods that use \"minimal supervision\"; that is, those that do not rely on large amounts of annotated training data, and show how existing minimal-supervision approaches extend to a highly inflectional language such as Russian. The results demonstrate that these methods are particularly useful for correcting mistakes in grammatical phenomena that involve rich morphology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Olesya Kisselev for her help with obtaining the RULEC corpus, and Elmira Mustakimova for sharing the error categories developed at the Russian National Corpus. The authors thank Mark Sammons and the anonymous reviewers for their comments. This work was partially supported by contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the US Government.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ward-1991-evaluation","url":"https:\/\/aclanthology.org\/H91-1016","title":"Evaluation of the CMU ATIS System","abstract":"The CMU Phoenix system is an experiment in understanding spontaneous speech. It has been implemented for the Air Travel Information Service task. In this task, casual users are asked to obtain information from a database of air travel information. Users are not given a vocabulary, grammar or set of sentences to read. They compose queries themselves in a spontaneous manner. This task presents speech recognizers with many new problems compared to the Resource Management task. Not only is the speech not fluent, but the vocabulary and grammar are open. Also, the task is not just to produce a transcription, but to produce an action, retrieve data from the database. Taking such actions requires parsing and \"understanding\" the utterance. Word error rate is not as important as utterance understanding rate. Phoenix attempts to deal with phenomena that occur in spontaneous speech. Unknown words, restarts, repeats, and poorly formed or unusual grammar are common in spontaneous speech and are very disruptive to standard recognizers. These events lead to misrecognitions which often cause a total parse failure. Our strategy is to apply grammatical constraints at the phrase level and to use semantic rather than lexical grammars. Semantics provide more constraint than parts of speech and must ultimately be dealt with in order to take actions. Applying constraints at the phrase level is more flexible than recognizing sentences as a whole while providing much more constraint than word-spotting. Restarts and repeats are most often between phrase occurrences, so individual phrases can still be recognized correctly. Poorly constructed grammar often consists of well-formed phrases, and is often semantically well-formed. 
It is only syntactically incorrect. We associate phrases by frame-based semantics. Phrases represent word strings that can fill slots in frames. The slots represent information which the frame is able to act on. The current Phoenix system uses a bigram language model with the Sphinx speech recognition system. The top-scoring word string is passed to a flexible frame-based parser. The parser assigns phrases (word strings) from the input to slots in frames. The slots represent information content needed for the frame. A beam of frame hypotheses is produced and the best-scoring one is used to produce an SQL query.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2019-complex","url":"https:\/\/aclanthology.org\/P19-1440","title":"Complex Question Decomposition for Semantic Parsing","abstract":"In this work, we focus on complex question semantic parsing and propose a novel Hierarchical Semantic Parsing (HSP) method, which utilizes the decompositionality of complex questions for semantic parsing. Our model is designed within a three-stage parsing architecture based on the idea of decomposition-integration. In the first stage, we propose a question decomposer which decomposes a complex question into a sequence of subquestions. In the second stage, we design an information extractor to derive the type and predicate information of these questions. In the last stage, we integrate the generated information from previous stages and generate a logical form for the complex question. We conduct experiments on COMPLEXWEBQUESTIONS, which is a large-scale complex question semantic parsing dataset; results show that our model achieves significant improvement compared to state-of-the-art methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the National Key Research and Development Program of China (2018YFB1004502) and the National Natural Science Foundation of China (61690203, 61532001). We thank the anonymous reviewers for their helpful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"irvine-etal-2010-transliterating","url":"https:\/\/aclanthology.org\/2010.amta-papers.12","title":"Transliterating From All Languages","abstract":"Much of the previous work on transliteration has depended on resources and attributes specific to particular language pairs. In this work, rather than focus on a single language pair, we create robust models for transliterating from all languages in a large, diverse set to English. We create training data for 150 languages by mining name pairs from Wikipedia. We train 13 systems and analyze the effects of the amount of training data on transliteration performance. We also present an analysis of the types of errors that the systems make.
Our analyses are particularly valuable for building machine translation systems for low resource languages, where creating and integrating a transliteration module for a language with few NLP resources may provide substantial gains in translation performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"benus-etal-2011-adapting","url":"https:\/\/aclanthology.org\/W11-2607","title":"Adapting Slovak ASR for native Germans speaking Slovak","abstract":"We explore variability involved in speech with a non-native accent. We first employ a combination of knowledge-based and data-driven approaches for the analysis of pronunciation variants between L1 (German) and target L2 (Slovak). Knowledge gained in this two-step process is then used in adapting acoustic models and the lexicon. We focus on modifications in the pronunciation dictionary and speech rate. Our results show that the recognition of German-accented Slovak is significantly improved with techniques modeling slow L2 speech, and that the adaptation of the pronunciation dictionary yields only insignificant gains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the European Project of Structural Funds, ITMS: 26240220064.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"becker-riaz-2002-study","url":"https:\/\/aclanthology.org\/W02-1201","title":"A Study in Urdu Corpus Construction","abstract":"We are interested in contributing a small, publicly available Urdu corpus of written text to the natural language processing community. The Urdu text is stored in the Unicode character set, in its native Arabic script, and marked up according to the Corpus Encoding Standard (CES) XML Document Type Definition (DTD). All the tags and metadata are in English. To date, the corpus is made entirely of data from British Broadcasting Company's (BBC) Urdu Web site, although we plan to add data from other Urdu newspapers. Upon completion, the corpus will consist mostly of raw Urdu text marked up only to the paragraph level so it can be used as input for natural language processing (NLP) tasks. In addition, it will be hand-tagged for parts of speech so the data can be used to train and test NLP tools.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"caines-etal-2018-aggressive","url":"https:\/\/aclanthology.org\/W18-5109","title":"Aggressive language in an online hacking forum","abstract":"We probe the heterogeneity in levels of abusive language in different sections of the Internet, using an annotated corpus of Wikipedia page edit comments to train a binary classifier for abuse detection.
Our test data come from the CrimeBB Corpus of hacking-related forum posts and we find that (a) forum interactions are rarely abusive, (b) the abusive language which does exist tends to be relatively mild compared to that found in the Wikipedia comments domain, and tends to involve aggressive posturing rather than hate speech or threats of violence. We observe that the purpose of conversations in online forums tends to be more constructive and informative than that of Wikipedia page edit comments, which are geared more towards adversarial interactions, and that this may explain the lower levels of abuse found in our forum data than in Wikipedia comments. Further work remains to be done to compare these results with other inter-domain classification experiments, and to understand the impact of aggressive language in forum conversations.","label_nlp4sg":1,"task":["abuse detection"],"method":["binary classifier"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was supported by The Alan Turing Institute's Defence & Security Programme, and the U.K. Engineering & Physical Sciences Research Council. We thank Emma Lenton, Dr Alastair Beresford, and the anonymous reviewers for their support and advice.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"fang-etal-2020-hybrid","url":"https:\/\/aclanthology.org\/2020.nlptea-1.9","title":"A Hybrid System for NLPTEA-2020 CGED Shared Task","abstract":"This paper introduces our system at the NLPTEA-2020 shared task for CGED, which is able to detect, locate, identify and correct grammatical errors in Chinese writings. The system consists of three components: GED, GEC, and post-processing. GED is an ensemble of multiple BERT-based sequence labeling models for handling GED tasks. GEC performs error correction. We exploit a collection of heterogeneous models, including Seq2Seq, GECToR and a candidate generation module to obtain correction candidates. Finally, in the post-processing stage, results from GED and GEC are fused to form the final outputs. We tune our models to lean towards optimizing precision, which we believe is more crucial in practice. As a result, among the six tracks in the shared task, our system performs well in the correction tracks: measured in F1 score, we rank first, with the highest precision, in the TOP3 correction track and third in the TOP1 correction track, also with the highest precision. Ours are among the top 4 to 6 in other tracks, except for FPR where we rank 12. And our system achieves the highest precisions among the top 10 submissions at IDENTIFICATION and POSITION tracks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-etal-2021-mutual","url":"https:\/\/aclanthology.org\/2021.emnlp-main.325","title":"Mutual-Learning Improves End-to-End Speech Translation","abstract":"A currently popular research area in end-to-end speech translation is the use of knowledge distillation from a machine translation (MT) task to improve the speech translation (ST) task.
However, such a scenario obviously allows only a one-way transfer, limiting the overall effectiveness of the approach by the performance of the pre-trained teacher model. Therefore, we posit that in this respect knowledge distillation-based approaches are sub-optimal. We propose an alternative: a trainable mutual-learning scenario, where the MT and ST models are collaboratively trained and are considered as peers, rather than teacher\/student. This allows us to improve the performance of end-to-end ST more effectively than with a teacher-student paradigm. As a side benefit, performance of the MT model also improves. Experimental results show that in our mutual-learning scenario, models can effectively utilise the auxiliary information from peer models and achieve compelling results on MuST-C datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"whittemore-etal-1991-event","url":"https:\/\/aclanthology.org\/P91-1003","title":"Event-building through Role-filling and Anaphora Resolution","abstract":"In this study we map out a way to build event representations incrementally, using information which may be widely distributed across a discourse. An enhanced Discourse Representation (Kamp, 1981) provides the vehicle both for carrying open event roles through the discourse until they can be instantiated by NPs, and for resolving the reference of these otherwise problematic NPs by binding them to the event roles.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to note that the idea of using DRs as a means for building events across clauses came from a comment by Rich Thomason, cited in Dowty (1986:32): \"Rich Thomason (p.c.) has suggested to me that a very natural way to construct a theory of event anaphora would be via Discourse Representation Theory.\" Thomason was addressing (we think) the notion of referring to events via nominalizations. We just extended the idea of using DRT to construct events across clauses to also include those denoted by verbs.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ostling-2016-bayesian","url":"https:\/\/aclanthology.org\/C16-1060","title":"A Bayesian model for joint word alignment and part-of-speech transfer","abstract":"Current methods for word alignment require considerable amounts of parallel text to deliver accurate results, a requirement which is met only for a small minority of the world's approximately 7,000 languages. We show that by jointly performing word alignment and annotation transfer in a novel Bayesian model, alignment accuracy can be improved for language pairs where annotations are available for only one of the languages, a finding which could facilitate the study and processing of a vast number of low-resource languages. We also present an evaluation where our method is used to perform single-source and multi-source part-of-speech transfer with 22 translations of the same text in four different languages.
This allows us to quantify the considerable variation in accuracy depending on the specific source text(s) used, even with different translations into the same language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to the anonymous reviewers, Mats Wir\u00e9n, J\u00f6rg Tiedemann and Joakim Nivre for advice.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-etal-2013-automatic","url":"https:\/\/aclanthology.org\/W13-4007","title":"Automatic Prediction of Friendship via Multi-model Dyadic Features","abstract":"In this paper we focus on modeling friendships between humans as a way of working towards technology that can initiate and sustain a lifelong relationship with users. We do this by predicting friendship status in a dyad using a set of automatically harvested verbal and nonverbal features from videos of the interaction of students in a peer tutoring study. We propose a new computational model used to model friendship status in our data, based on a group sparse model (GSM) with L2,1 norm which is designed to accommodate the sparse and noisy properties of the multi-channel features. Our GSM model achieved the best overall performance compared to a non-sparse linear model (NLM) and a regular sparse linear model (SLM), as well as outperforming human raters. Dyadic features, such as number and length of conversational turns and mutual gaze, in addition to low level features such as F0 and gaze at task, were found to be good predictors of friendship status.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Angela Ng, Rachel Marino and Marissa Cross for data collection, Giota Stratou for visual feature extraction, Yi Yang, Louis-Philippe Morency, Shoou-I Yu, William Wang, and Eric Xing for valuable discussions, and the NSF IIS for generous funding.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luuk-2019-type","url":"https:\/\/aclanthology.org\/R19-1078","title":"A type-theoretical reduction of morphological, syntactic and semantic compositionality to a single level of description","abstract":"The paper presents NLC, a new formalism for modeling natural language (NL) compositionality. NLC is a functional type system (i.e. one based on mathematical functions and their types). Its main features include a close correspondence with NL and an integrated modeling of morphological, syntactic and semantic compositionality. The paper also presents an implementation of NLC in Coq. The implementation formalizes a diverse fragment of NL, with NLC expressions type checking and failing to type check in exactly the same ways that NL expressions pass and fail their acceptability tests. Among other things, this demonstrates the possibility of reducing morphological, syntactic and semantic compositionality to a single level of description. The level is tentatively identified with semantic compositionality, an interpretation which, besides being supported by results from language processing, has interesting implications on NL structure and modeling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I thank Jason Gross, Hendrik Luuk, Erik Palmgren and Enrico Tassi for their advice.
This work has been supported by IUT20-56 and European Regional Development Fund through CEES.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"reichart-etal-2010-improved","url":"https:\/\/aclanthology.org\/W10-2909","title":"Improved Unsupervised POS Induction Using Intrinsic Clustering Quality and a Zipfian Constraint","abstract":"Modern unsupervised POS taggers usually apply an optimization procedure to a nonconvex function, and tend to converge to local maxima that are sensitive to starting conditions. The quality of the tagging induced by such algorithms is thus highly variable, and researchers report average results over several random initializations. Consequently, applications are not guaranteed to use an induced tagging of the quality reported for the algorithm. In this paper we address this issue using an unsupervised test for intrinsic clustering quality. We run a base tagger with different random initializations, and select the best tagging using the quality test. As a base tagger, we modify a leading unsupervised POS tagger (Clark, 2003) to constrain the distributions of word types across clusters to be Zipfian, allowing us to utilize a perplexity-based quality test. We show that the correlation between our quality test and gold standard-based tagging quality measures is high. Our results are better in most evaluation measures than all results reported in the literature for this task, and are always better than the Clark average results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"budanitsky-hirst-2006-evaluating","url":"https:\/\/aclanthology.org\/J06-1003","title":"Evaluating WordNet-based Measures of Lexical Semantic Relatedness","abstract":"The quantification of lexical semantic relatedness has many applications in NLP, and many different measures have been proposed. We evaluate five of these measures, all of which use WordNet as their central resource, by comparing their performance in detecting and correcting real-word spelling errors. An information-content-based measure proposed by Jiang and Conrath is found superior to those proposed by Hirst and St-Onge, Leacock and Chodorow, Lin, and Resnik. In addition, we explain why distributional similarity is not an adequate proxy for lexical semantic relatedness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2004-analysis","url":"https:\/\/aclanthology.org\/C04-1182","title":"Analysis and Detection of Reading Miscues for Interactive Literacy Tutors","abstract":"The Colorado Literacy Tutor (CLT) is a technology-based literacy program, designed on the basis of cognitive theory and scientifically motivated reading research, which aims to improve literacy and student achievement in public schools. One of the critical components of the CLT is a speech recognition system which is used to track the child's progress during oral reading and to provide sufficient information to detect reading miscues. 
In this paper, we extend on prior work by examining a novel labeling of children's oral reading audio data in order to better understand the factors that contribute most significantly to speech recognition errors. While these events make up nearly 8% of the data, they are shown to account for approximately 30% of the word errors in a state-of-the-art speech recognizer. Next, we consider the problem of detecting miscues during oral reading. Using features derived from the speech recognizer, we demonstrate that 67% of reading miscues can be detected at a false alarm rate of 3%.","label_nlp4sg":1,"task":["Detection of Reading Miscues"],"method":["speech recognition system"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This work was supported by grants from the National Science Foundation's ITR and IERI Programs under grants NSF\/ITR: REC-0115419, NSF\/IERI: EIA-0121201, NSF\/ITR: IIS-0086107, NSF\/IERI: ","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meng-etal-2005-lexicon","url":"https:\/\/aclanthology.org\/I05-1048","title":"A Lexicon-Constrained Character Model for Chinese Morphological Analysis","abstract":"This paper proposes a lexicon-constrained character model that combines both word and character features to solve complicated issues in Chinese morphological analysis. A Chinese character-based model constrained by a lexicon is built to acquire word building rules. Each character in a Chinese sentence is assigned a tag by the proposed model. The word segmentation and part-of-speech tagging results are then generated based on the character tags. The proposed method solves such problems as unknown word identification, data sparseness, and estimation bias in an integrated, unified framework. Preliminary experiments indicate that the proposed method outperforms the best SIGHAN word segmentation systems in the open track on 3 out of the 4 test corpora. Additionally, our method can be conveniently integrated with any other Chinese morphological systems as a post-processing module leading to significant improvement in performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"levitan-etal-2012-acoustic","url":"https:\/\/aclanthology.org\/N12-1002","title":"Acoustic-Prosodic Entrainment and Social Behavior","abstract":"In conversation, speakers have been shown to entrain, or become more similar to each other, in various ways. We measure entrainment on eight acoustic features extracted from the speech of subjects playing a cooperative computer game and associate the degree of entrainment with a number of manually-labeled social variables acquired using Amazon Mechanical Turk, as well as objective measures of dialogue success. We find that male-female pairs entrain on all features, while male-male pairs entrain only on particular acoustic features (intensity mean, intensity maximum and syllables per second). We further determine that entrainment is more important to the perception of female-male social behavior than it is for same-gender pairs, and it is more important to the smoothness and flow of male-male dialogue than it is for female-female or mixed-gender pairs.
Finally, we find that entrainment is more pronounced when intensity or speaking rate is especially high or low.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported in part by NSF IIS-0307905, NSF IIS-0803148, UBACYT 20020090300087, ANPCYT PICT-2009-0026, CONICET, VEGA No. 2\/0202\/11; and the EUSF (ITMS 26240220060).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hahn-etal-2012-iterative","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/755_Paper.pdf","title":"Iterative Refinement and Quality Checking of Annotation Guidelines --- How to Deal Effectively with Semantically Sloppy Named Entity Types, such as Pathological Phenomena","abstract":"We here discuss a methodology for dealing with the annotation of semantically hard to delineate, i.e., sloppy, named entity types. To illustrate sloppiness of entities, we treat an example from the medical domain, namely pathological phenomena. Based on our experience with iterative guideline refinement we propose to carefully characterize the thematic scope of the annotation by positive and negative coding lists and allow for alternative, short vs. long mention span annotations. Short spans account for canonical entity mentions (e.g., standardized disease names), while long spans cover descriptive text snippets which contain entity-specific elaborations (e.g., anatomical locations, observational details, etc.). Using this stratified approach, evidence for increasing annotation performance is provided by \u03ba-based inter-annotator agreement measurements over several, iterative annotation rounds using continuously refined guidelines. The latter reflects the increasing understanding of the sloppy entity class both from the perspective of guideline writers and users (annotators). Given our data, we have gathered evidence that we can deal with sloppiness in a controlled manner and expect inter-annotator agreement values around 80% for PATHOJEN, the pathological phenomena corpus currently under development in our lab.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements. This work is partially funded by a grant from the German Ministry of Education and Research (BMBF) for the Jena Centre of Systems Biology of Ageing (JENAGE) (grant no. 0315581D).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"briggs-scheutz-2014-modeling","url":"https:\/\/aclanthology.org\/W14-5001","title":"Modeling Blame to Avoid Positive Face Threats in Natural Language Generation","abstract":"Prior approaches to politeness modulation in natural language generation (NLG) often focus on manipulating factors such as the directness of requests that pertain to preserving the autonomy of the addressee (negative face threats), but do not have a systematic way of understanding potential impoliteness from inadvertently critical or blame-oriented communications (positive face threats). 
In this paper, we discuss ongoing work to integrate a computational model of blame to prevent inappropriate threats to positive face.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their helpful feedback. This work was supported by NSF grant #111323.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-gu-2014-reducing","url":"https:\/\/aclanthology.org\/C14-1125","title":"Reducing Over-Weighting in Supervised Term Weighting for Sentiment Analysis","abstract":"Recently the research on supervised term weighting has attracted growing attention in the field of Traditional Text Categorization (TTC) and Sentiment Analysis (SA). Despite their impressive achievements, we show that existing methods more or less suffer from the problem of over-weighting. Overlooked by prior studies, over-weighting is a new concept proposed in this paper. To address this problem, two regularization techniques, singular term cutting and bias term, are integrated into our framework of supervised term weighting schemes. Using the concepts of over-weighting and regularization, we provide new insights into existing methods and present their regularized versions. Moreover, under the guidance of our framework, we develop a novel supervised term weighting scheme, regularized entropy (re). The proposed framework is evaluated on three datasets widely used in SA. The experimental results indicate that our re enjoys the best results in comparisons with existing methods, and regularization techniques can significantly improve the performances of existing supervised weighting methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by National Natural Science Foundation of China under grant 61371148 and Shanghai National Natural Science Foundation under grant 12ZR1402500.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2016-evaluate","url":"https:\/\/aclanthology.org\/D16-1230","title":"How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation","abstract":"We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-etal-2021-syntax","url":"https:\/\/aclanthology.org\/2021.ranlp-1.64","title":"Syntax Matters! 
Syntax-Controlled in Text Style Transfer","abstract":"Existing text style transfer (TST) methods rely on style classifiers to disentangle the text's content and style attributes for text style transfer. While the style classifier plays a critical role in existing TST methods, there is no known investigation on its effect on the TST methods. In this paper, we conduct an empirical study on the limitations of the style classifiers used in existing TST methods. We demonstrate that the existing style classifiers cannot learn sentence syntax effectively and ultimately worsen existing TST models' performance. To address this issue, we propose a novel Syntax-Aware Controllable Generation (SACG) model, which includes a syntax-aware style classifier that ensures learned style latent representations effectively capture the syntax information for TST. Through extensive experiments on two popular TST tasks, we show that our proposed method significantly outperforms the state-of-the-art methods. Our case studies have also demonstrated SACG's ability to generate fluent target-style sentences that preserved the original content.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by Living Sky Technologies Ltd, Canada under its research exploratory funding initiatives. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of Living Sky Technologies Ltd, Canada.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sedoc-etal-2017-predicting","url":"https:\/\/aclanthology.org\/E17-2090","title":"Predicting Emotional Word Ratings using Distributional Representations and Signed Clustering","abstract":"Inferring the emotional content of words is important for text-based sentiment analysis, dialogue systems and psycholinguistics, but word ratings are expensive to collect at scale and across languages or domains. We develop a method that automatically extends word-level ratings to unrated words using signed clustering of vector space word representations along with affect ratings. We use our method to determine a word's valence and arousal, which determine its position on the circumplex model of affect, the most popular dimensional model of emotion. Our method achieves superior out-of-sample word rating prediction on both affective dimensions across three different languages when compared to state-of-the-art word similarity based methods. Our method can assist building word ratings for new languages and improve downstream tasks such as sentiment analysis and emotion detection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the Templeton Religion Trust, grant TRT-0048.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heeman-1998-pos","url":"https:\/\/aclanthology.org\/W98-1121","title":"POS Tagging versus Classes in Language Modeling","abstract":"Language models for speech recognition concentrate solely on recognizing the words that were spoken.
In this paper, we advocate redefining the speech recognition problem so that its goal is to find both the best sequence of words and their POS tags, and thus incorporate POS tagging. The use of POS tags allows more sophisticated generalizations than are afforded by using a class-based approach. Furthermore, if we want to incorporate speech repair and intonational phrase modeling into the language model, using POS tags rather than classes gives better performance in this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank Geraldine Damnati. The research involved in this paper was done while the first author was visiting at CNET, France T\u00e9l\u00e9com.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nishino-etal-2016-phrase","url":"https:\/\/aclanthology.org\/P16-2066","title":"Phrase Table Pruning via Submodular Function Maximization","abstract":"Phrase table pruning is the act of removing phrase pairs from a phrase table to make it smaller, ideally removing the least useful phrases first. We propose a phrase table pruning method that formulates the task as a submodular function maximization problem, and solves it by using a greedy heuristic algorithm. The proposed method can scale with input size and long phrases, and experiments show that it achieves higher BLEU scores than state-of-the-art pruning methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chikka-2016-cde","url":"https:\/\/aclanthology.org\/S16-1192","title":"CDE-IIITH at SemEval-2016 Task 12: Extraction of Temporal Information from Clinical documents using Machine Learning techniques","abstract":"In this paper, we demonstrate our approach for identification of events, time expressions and temporal relations among them. This work was carried out as part of SemEval-2016 Challenge Task 12: Clinical TempEval. The task comprises six sub-tasks: identification of event spans, time spans and their attributes, document time relation and the narrative container relations among events and time expressions. We have participated in all six subtasks. We were provided with a manually annotated dataset which comprises a training dataset (293 documents), a development dataset (147 documents) and 151 documents as a test dataset. We have submitted our work as two systems for the challenge. One system is developed using machine learning techniques, Conditional Random Fields (CRF) and Support Vector machines (SVM) and the other system is developed using deep neural network (DNN) techniques.
The results show that both systems gave relatively similar performance on these tasks.","label_nlp4sg":1,"task":["Extraction of Temporal Information"],"method":["Conditional Random Fields","Support Vector machines","deep neural network"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"paetzold-specia-2017-ultimate","url":"https:\/\/aclanthology.org\/I17-5005","title":"The Ultimate Presentation Makeup Tutorial: How to Polish your Posters, Slides and Presentations Skills","abstract":"There is no question that our research community has been, and still is, producing an insurmountable amount of interesting strategies, models and tools for a wide array of problems and challenges in diverse areas of knowledge. But for as long as interesting work has existed, we've been plagued by a great unsolved mystery: how come there is so much interesting work being published in conferences, but not as many interesting and engaging posters and presentations being featured in them? After extensive research and investigation, we think we have finally found the cause.\nWe believe this problem is not being caused directly by our undoubtedly competent researchers themselves, but rather by three organisms which have seemingly infected a great deal of our community:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ammanabrolu-riedl-2019-transfer","url":"https:\/\/aclanthology.org\/D19-5301","title":"Transfer in Deep Reinforcement Learning Using Knowledge Graphs","abstract":"Text adventure games, in which players must make sense of the world through text descriptions and declare actions through text descriptions, provide a stepping stone toward grounding action in language. Prior work has demonstrated that using a knowledge graph as a state representation and question-answering to pre-train a deep Q-network facilitates faster control policy learning. In this paper, we explore the use of knowledge graphs as a representation for domain knowledge transfer for training text-adventure playing reinforcement learning agents. Our methods are tested across multiple computer generated and human authored games, varying in domain and complexity, and demonstrate that our transfer learning methods let us learn a higher-quality control policy faster.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the National Science Foundation under Grant No. IIS-1350339.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"postma-etal-2015-vua","url":"https:\/\/aclanthology.org\/S15-2058","title":"VUA-background: When to Use Background Information to Perform Word Sense Disambiguation","abstract":"We present in this paper our submission to task 13 of SemEval-2015, which makes use of background information and external resources (DBpedia and Wikipedia) to automatically disambiguate texts. Our approach follows two routes for disambiguation: one route is proposed by a state-of-the-art WSD system, and the other one by the predominant sense information extracted in an unsupervised way from an automatically built background corpus. We reached 4th position in terms of F1-score in task number 13 of SemEval-2015: \"Multilingual All-Words Sense Disambiguation and Entity Linking\" (Moro and Navigli, 2015). All the software and code created for this approach are publicly available on GitHub.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research for this paper was supported by the Netherlands Organisation for Scientific Research (NWO) via the Spinoza-prize Vossen projects (SPI 30-673, 2014-2019).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"quattoni-carreras-2020-comparison","url":"https:\/\/aclanthology.org\/2020.sustainlp-1.21","title":"A comparison between CNNs and WFAs for Sequence Classification","abstract":"We compare a classical CNN architecture for sequence classification involving several convolutional and max-pooling layers against a simple model based on weighted finite state automata (WFA). Each model has its advantages and disadvantages and it is possible that they could be combined. However, we believe that the first research goal should be to investigate and understand how these two apparently dissimilar models compare in the context of specific natural language processing tasks. This paper is the first step towards that goal. Our experiments with five sequence classification datasets suggest that, despite the apparent simplicity of WFA models and training algorithms, the performance of WFAs is comparable to that of the CNNs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the European Research Council (ERC StG INTERACT 853459).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ding-etal-2013-detecting","url":"https:\/\/aclanthology.org\/I13-1014","title":"Detecting Spammers in Community Question Answering","abstract":"As the popularity of Community Question Answering (CQA) increases, spamming activities have also picked up in numbers and variety. On CQA sites, spammers often pretend to ask questions, and select answers which were published by their partners or themselves as the best answers. These fake best answers cannot be easily detected by either existing methods or common users.
In this paper, we address the issue of detecting spammers on CQA sites. We formulate the task as an optimization problem. Social information is incorporated by adding graph regularization constraints to the text-based predictor. To evaluate the proposed approach, we crawled a data set from a CQA portal. Experimental results demonstrate that the proposed method can achieve better performance than some state-of-the-art methods.","label_nlp4sg":1,"task":["Detecting Spammers","detecting spammers"],"method":["graph regularization"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (61003092, 61073069), National Major Science and Technology Special Project of China (2014ZX03006005), Shanghai Municipal Science and Technology Commission (No.12511504500) and \"Chen Guang\" project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation (11CG05).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"ma-etal-2017-group","url":"https:\/\/aclanthology.org\/P17-2053","title":"Group Sparse CNNs for Question Classification with Answer Sets","abstract":"Question classification is an important task with wide applications. However, traditional techniques treat questions as general sentences, ignoring the corresponding answer data. In order to incorporate answer information into question modeling, we first introduce novel group sparse autoencoders which refine question representation by utilizing group information in the answer set. We then propose novel group sparse CNNs which naturally learn question representation with respect to their answers by implanting group sparse autoencoders into traditional CNNs. The proposed model significantly outperforms strong baselines on four datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their suggestions. This work is supported in part by NSF IIS-1656051, DARPA FA8750-13-2-0041 (DEFT), DARPA XAI, a Google Faculty Research Award, and an HP Gift.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kwok-1998-improving","url":"https:\/\/aclanthology.org\/X98-1019","title":"Improving English and Chinese Ad-Hoc Retrieval: TIPSTER Text Phase 3 Final Report","abstract":"We investigated both English and Chinese ad-hoc information retrieval (IR). Part of our objectives is to study the use of term, phrasal and topical concept level evidence, either individually or in combination, to improve retrieval accuracy. For short queries, we studied five term level techniques that together lead to improvements of some 20% to 40% over standard ad-hoc 2-stage retrieval for TREC5 & 6 experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by a contract from the U.S. Department of Defense MDA904-96-C-1481. I would like to express my appreciation to R. Weischedel for use of the BBN POS tagger; L. Hirschman for the Mitre POS tagger and W.B.
Croft for the UMASS Chinese segmenter.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grimes-2016-sentiment","url":"https:\/\/aclanthology.org\/W16-0402","title":"Sentiment, Subjectivity, and Social Analysis Go To Work: An Industry View - Invited Talk","abstract":"Affective computing has a commercial side. Numerous products and projects provide sentiment, emotion, and intent extraction capabilities, applied in consumer and financial markets, for healthcare and customer care, and for media, policy, and politics. Academic and industry researchers are naturally interested in how sentiment and social technologies are being applied and in commercial market opportunities and trends, in what's being funded, what's falling flat, and what's on business's roadmap. Analyst Seth Grimes will provide an industry overview, surveying companies and applications in the sentiment and social analytics spaces as well as work at the tech giants. He will discuss commercialization strategy and the affective market outlook.","label_nlp4sg":1,"task":["Social Analysis"],"method":["surveying","industry overview"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ponti-passarotti-2016-differentia","url":"https:\/\/aclanthology.org\/L16-1108","title":"Differentia compositionem facit. A Slower-Paced and Reliable Parser for Latin","abstract":"The Index Thomisticus Treebank is the largest available treebank for Latin; it contains Medieval Latin texts by Thomas Aquinas. After experimenting on its data with a number of dependency parsers based on different supervised machine learning techniques, we found that DeSR with a multilayer perceptron algorithm, a right-to-left transition, and a tailor-made feature model is the parser providing the highest accuracy rates. We improved the results further by using a technique that combines the output parses of DeSR with those provided by other parsers, outperforming the previous state of the art in parsing the Index Thomisticus Treebank. The key idea behind such improvement is to ensure a sufficient diversity and accuracy of the outputs to be combined; for this reason, we performed an in-depth evaluation of the results provided by the different parsers that we combined. Finally, we assessed that, although the general architecture of the parser is portable to Classical Latin, the model trained on Medieval Latin is inadequate for such purpose.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Felice Dell'Orletta and Bernd Bohnet for their suggestions on DeSR and MATE-tools parsers, respectively.
Many thanks also to Marco Piastra for providing free access to the facilities of the Artificial Vision Laboratory (University of Pavia, Italy).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hoang-etal-2016-incorporating","url":"https:\/\/aclanthology.org\/N16-1149","title":"Incorporating Side Information into Recurrent Neural Network Language Models","abstract":"Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords, description, document title or topic headline. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the reviewers for valuable comments and feedback. Cong Duy Vu Hoang was supported by research scholarships from the University of Melbourne, Australia. Dr Trevor Cohn was supported by the ARC (Future Fellowship).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barton-1961-application","url":"https:\/\/aclanthology.org\/1961.earlymt-1.7","title":"The application of the article in English","abstract":"The fact that many languages are alike in using two alternative forms of the article suggests that they probably express a certain basic antithesis in every case. The purpose of this paper is to discuss this antithesis as it has been formulated, tentatively and as a working hypothesis for English, by members of the Centro di Cibernetica e di Attivita Linguistiche dell' Universita di Milano. In outline, our analysis is this. Among the many possible ways of regarding a thing, we find two which are opposed and complementary. One (A) regards the thing in isolation, the other (B) regards it as a thing among other things; at least together with one other thing. We regard a thing in isolation, when we are interested in presenting it in its own temporal continuity, its history past or future. The function of the definite article is to present a thing in this way. The indefinite article, on the other hand, presents the thing as one among others, not singled out by its particular history. Clearly this opposition will only be relevant with things which do not admit the alternative singular\/plural; not, for instance, with things designated by abstracts, nouns of material, or proper names, in their ordinary uses. Abstracts are not pluralisable, because they consider a thing in respect of the internal relations which hold between its constitutive elements. Materials are not pluralisable, because they are not single objects with limits in time or space. And proper names are not pluralisable, because they carry with them their own spatio-temporal situation.
In order for a thing to be capable of singularity and plurality, it must stand in relation to a frame of reference, to something outside itself. The choice of article for things which do not admit the singular\/plural alternative is made on a different basis, as I shall show, from the choice for things which do admit the alternative. A language without article, as Russian is, refrains from semantising this distinction except in special cases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1961,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"reiter-2011-task","url":"https:\/\/aclanthology.org\/W11-2704","title":"Task-Based Evaluation of NLG Systems: Control vs Real-World Context","abstract":"Currently there is little agreement about, or even discussion of, methodologies for task-based evaluation of NLG systems. I discuss one specific issue in this area, namely the importance of control vs the importance of ecological validity (real-world context), and suggest that perhaps we need to put more emphasis on ecological validity in NLG evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"suter-etal-2021-grounding","url":"https:\/\/aclanthology.org\/2021.alvr-1.4","title":"Grounding Plural Phrases: Countering Evaluation Biases by Individuation","abstract":"Phrase grounding (PG) is a multimodal task that grounds language in images. PG systems are evaluated on well-known benchmarks, using Intersection over Union (IoU) as the evaluation metric. This work highlights a disconcerting bias in the evaluation of grounded plural phrases, which arises from representing sets of objects as a union box covering all component bounding boxes, in conjunction with the IoU metric. We detect, analyze and quantify an evaluation bias in the grounding of plural phrases and define a novel metric, c-IoU, based on a union box's component boxes. We experimentally show that our new metric greatly alleviates this bias and recommend using it for fairer evaluation of plural phrases in PG tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dickinson-jochim-2008-simple","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/210_paper.pdf","title":"A Simple Method for Tagset Comparison","abstract":"Based on the idea that local contexts predict the same basic category across a language, we develop a simple method for comparing tagsets across corpora. The principal differences between tagsets are evidenced by variation in categories in one corpus in the same contexts where another corpus exhibits only a single tag.
Such mismatches highlight differences in the definitions of tags which are crucial when porting technology from one annotation scheme to another.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kulkarni-etal-2012-semantic","url":"https:\/\/aclanthology.org\/C12-1091","title":"Semantic Processing of Compounds in Indian Languages","abstract":"Compounds occur very frequently in Indian Languages. There are no strict orthographic conventions for compounds in modern Indian Languages. In this paper, the Sanskrit compounding system is examined thoroughly and the insight gained from the Sanskrit grammar is applied for the analysis of compounds in Hindi and Marathi. It is interesting to note that compounding in Hindi deviates from that in Sanskrit in two aspects. The data analysed for Hindi does not contain any instance of Bahuvr\u012bhi (exo-centric) compound. Second, Hindi data presents many cases where quite a lot of compounds require a verb as well as vibhakti (a case marker) for its paraphrasing. Compounds requiring a verb for paraphrasing are termed as madhyama-pada-lop\u012b in Sanskrit, and they are found to be rare in Sanskrit.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"busemann-horacek-1998-flexible","url":"https:\/\/aclanthology.org\/W98-1425","title":"A Flexible Shallow Approach to Text Generation","abstract":"In order to support the efficient development of NL generation systems, two orthogonal methods are currently pursued with emphasis: (1) reusable, general, and linguistically motivated surface realization components, and (2) simple, task-oriented template-based techniques. In this paper we argue that, from an application-oriented perspective, the benefits of both are still limited. In order to improve this situation, we suggest and evaluate shallow generation methods associated with increased flexibility. We advise a close connection between domain-motivated and linguistic ontologies that supports the quick adaptation to new tasks and domains, rather than the reuse of general resources. Our method is especially designed for generating reports with limited linguistic variations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We consider it a scientific challenge to combine shallow and in-depth approaches to analysis and generation in such a way that more theoretically motivated research finds its way into real applications.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"steininger-etal-2002-user","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/352.pdf","title":"User-State Labeling Procedures For The Multimodal Data Collection Of SmartKom","abstract":"This contribution deals with the user-state labeling procedures of a multimodal data corpus that is created in the SmartKom project.
The goal of the SmartKom project is the development of an intelligent computer-user interface that allows almost natural communication with an adaptive and self-explanatory machine. The system allows input not only in the form of natural speech but also in the form of gestures. Additionally, facial expressions are analyzed. For the training of recognizers and the exploration of how users interact with the system, data is collected. The data comprises video and audio recordings from which the speech is transliterated and gestures and user-states are labeled. This paper gives an in-depth description of the different annotation procedures for user-states. Some preliminary results will be presented, particularly a description of the homogeneity of the different user-states and their most important features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is being supported by the German Federal Ministry of Education and Research, grant no. 01 IL 905. We give our thanks to the SmartKom group of the Institute of Phonetics in Munich that provided the Wizard-of-Oz data. Many thanks to Alexander Borkowski for the help with analyzing the data and Bernd Lindemann for finding the examples.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hockenmaier-bisk-2010-normal","url":"https:\/\/aclanthology.org\/C10-1053","title":"Normal-form parsing for Combinatory Categorial Grammars with generalized composition and type-raising","abstract":"We propose and implement a modification of the Eisner (1996) normal form to account for generalized composition of bounded degree, and an extension to deal with grammatical type-raising.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Mark Steedman for helpful discussions, and Jason Eisner for his very generous feedback which helped to greatly improve this paper. All remaining errors and omissions are our own responsibility. J.H. is supported by NSF grant IIS 08-03603 INT2-Medium.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2019-gazetteer","url":"https:\/\/aclanthology.org\/D19-1646","title":"Gazetteer-Enhanced Attentive Neural Networks for Named Entity Recognition","abstract":"Current region-based NER models only rely on fully-annotated training data to learn an effective region encoder, which often faces the training data bottleneck. To alleviate this problem, this paper proposes Gazetteer-Enhanced Attentive Neural Networks, which can enhance region-based NER by learning name knowledge of entity mentions from easily obtainable gazetteers, rather than only from fully-annotated data.
Specifically, we first propose an attentive neural network (ANN), which explicitly models the mention-context association and therefore is convenient for integrating externally-learned knowledge. Then we design an auxiliary gazetteer network, which can effectively encode name regularity of mentions only using gazetteers. Finally, the learned gazetteer network is incorporated into ANN for better NER. Experiments show that our ANN can achieve the state-of-the-art performance on the ACE2005 named entity recognition benchmark. Besides, incorporating the gazetteer network can further improve the performance and significantly reduce the requirement of training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We sincerely thank the reviewers for their insightful comments and valuable suggestions. Moreover, this work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61572477 and 61772505; and the Young Elite Scientists Sponsorship Program no. YESS20160177.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iyyer-etal-2014-political","url":"https:\/\/aclanthology.org\/P14-1105","title":"Political Ideology Detection Using Recursive Neural Networks","abstract":"An individual's words often reveal their political ideology. Existing automated techniques to identify ideology from text focus on bags of words or wordlists, ignoring syntax. Taking inspiration from recent work in sentiment analysis that successfully models the compositional aspect of language, we apply a recursive neural network (RNN) framework to the task of identifying the political position evinced by a sentence. To show the importance of modeling subsentential elements, we crowdsource political annotations at a phrase and sentence level. Our model outperforms existing models on our newly annotated dataset and an existing dataset.","label_nlp4sg":1,"task":["Political Ideology Detection"],"method":["Recursive Neural Networks"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, Hal Daum\u00e9, Yuening Hu, Yasuhiro Takayama, and Jyothi Vinjumur for their insightful comments. We also want to thank Justin Gross for providing the IBC and Asad Sayeed for help with the Crowdflower task design, as well as Richard Socher and Karl Moritz Hermann for assisting us with our model implementations. This work was supported by NSF Grant CCF-1018625. Boyd-Graber is also supported by NSF Grant IIS-1320538. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"voorhees-2005-using","url":"https:\/\/aclanthology.org\/H05-1038","title":"Using Question Series to Evaluate Question Answering System Effectiveness","abstract":"The original motivation for using question series in the TREC 2004 question answering track was the desire to model aspects of dialogue processing in an evaluation task that included different question types.
The structure introduced by the series also proved to have an important additional benefit: the series is at an appropriate level of granularity for aggregating scores for an effective evaluation. The series is small enough to be meaningful at the task level since it represents a single user interaction, yet it is large enough to avoid the highly skewed score distributions exhibited by single questions. An analysis of the reliability of the per-series evaluation shows the evaluation is stable for differences in scores seen in the track.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"litman-2012-cohesion","url":"https:\/\/aclanthology.org\/W12-1627","title":"Cohesion, Entrainment and Task Success in Educational Dialog","abstract":"Researchers often study dialog corpora to better understand what makes some dialogs more successful than others. In this talk I will examine the relationship between coherence\/entrainment and task success, in several types of educational dialog corpora: 1) one-on-one tutoring, where students use dialog to interact with a human tutor in the physics domain, 2) one-on-one tutoring, where students instead interact with a spoken dialog system, and 3) engineering design, where student teams engage in multi-party dialog to complete a group project. I will first introduce several corpus-based measures of both lexical and acoustic-prosodic dialog cohesion and entrainment, and extend them to handle multi-party conversations. I will then show that the amount of cohesion and\/or entrainment positively correlates with measures of educational task success in all of our corpora. Finally, I will discuss how we are using our findings to build better tutorial dialog systems.","label_nlp4sg":1,"task":["Educational Dialog"],"method":["corpus - based measures"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delucia-etal-2021-decoding","url":"https:\/\/aclanthology.org\/2021.gem-1.16","title":"Decoding Methods for Neural Narrative Generation","abstract":"Narrative generation is an open-ended NLP task in which a model generates a story given a prompt. The task is similar to neural response generation for chatbots; however, innovations in response generation are often not applied to narrative generation, despite the similarity between these tasks. We aim to bridge this gap by applying and evaluating advances in decoding methods for neural response generation to neural narrative generation. In particular, we employ GPT-2 and perform ablations across nucleus sampling thresholds and diverse decoding hyperparameters-specifically, maximum mutual information-analyzing results over multiple criteria with automatic and human evaluation.
We find that (1) nucleus sampling is generally best with thresholds between 0.7 and 0.9; (2) a maximum mutual information objective can improve the quality of generated stories; and (3) established automatic metrics do not correlate well with human judgments of narrative quality on any qualitative metric.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Daphne Ippolito, Nathaniel Weir, Carlos Aguirre, Rachel Wicks, Arya McCarthy, and the anonymous reviewers for their helpful feedback. We also wish to thank the anonymous Mechanical Turkers who provided invaluable suggestions for improving our human evaluation setup during earlier iterations of this study.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2015-sampling","url":"https:\/\/aclanthology.org\/W15-5011","title":"Sampling-based Alignment and Hierarchical Sub-sentential Alignment in Chinese--Japanese Translation of Patents","abstract":"This paper describes Chinese-Japanese translation systems based on different alignment methods using the JPO corpus and our submission (ID: WASUIPS) to the subtask of the 2015 Workshop on Asian Translation. One of the alignment methods used is bilingual hierarchical sub-sentential alignment combined with sampling-based multilingual alignment. We also accelerated this method and in this paper, we evaluate the translation results and time spent on several machine translation tasks. The training time is much faster than the standard baseline pipeline (GIZA++\/Moses) and MGIZA\/Moses.","label_nlp4sg":1,"task":["translation"],"method":["bilingual hierarchical sub - sentential alignment","sampling - based multilingual alignment"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"The paper is part of the outcome of research performed under a Waseda University Grant for Special Research Project (Project number: 2015A-063).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tabak-purver-2020-temporal","url":"https:\/\/aclanthology.org\/2020.nlpcovid19-2.7","title":"Temporal Mental Health Dynamics on Social Media","abstract":"We describe a set of experiments for building a temporal mental health dynamics system. We utilise a pre-existing methodology for distant supervision of mental health data mining from social media platforms and deploy the system during the global COVID-19 pandemic as a case study. Despite the challenging nature of the task, we produce encouraging results, both explicit to the global pandemic and implicit to a global phenomenon, Christmas Depression, supported by the literature. We propose a methodology for providing insight into temporal mental health dynamics to be utilised for strategic decision-making.","label_nlp4sg":1,"task":["Temporal Mental Health Dynamics"],"method":["distantsupervision"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"Purver is partially supported by the EPSRC under grant EP\/S033564\/1, and by the European Union's Horizon 2020 program under grant agreements 769661 (SAAM, Supporting Active Ageing through Multimodal coaching) and 825153 (EMBEDDIA, Cross-Lingual Embeddings for Less-Represented Languages in European News Media).
The results of this publication reflect only the authors' views and the Commission is not responsible for any use that may be made of the information it contains. We express our thanks to all of our data annotators: L. Achour, L. Del Zompo, N. Fiore, M. Hechler and R. Medivil Zamudio.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roh-etal-2008-recognizing","url":"https:\/\/aclanthology.org\/Y08-1049","title":"Recognizing Coordinate Structures for Machine Translation of English Patent Documents","abstract":"Patent machine translation is one of the main target areas of current practical MT systems. Patent documents have their own peculiar description style. Especially, abstracts or claims in patent documents are characterized by their long and complex syntactic structures, which are often caused by coordination. So, syntactic analysis of patent documents requires special treatment for coordination. This paper describes a method to deal with long sentences in patent documents by recognizing coordinate structures. Coordinate structures are recognized using a similarity table which reflects parallelism between conjuncts. Our method is applied to a practical MT system and improves its quality and efficiency.","label_nlp4sg":1,"task":["Patent machine translation"],"method":["similarity table"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"passban-etal-2016-enriching","url":"https:\/\/aclanthology.org\/C16-1243","title":"Enriching Phrase Tables for Statistical Machine Translation Using Mixed Embeddings","abstract":"The phrase table is considered to be the main bilingual resource for the phrase-based statistical machine translation (PBSMT) model. During translation, a source sentence is decomposed into several phrases. The best match of each source phrase is selected among several target-side counterparts within the phrase table, and processed by the decoder to generate a sentence-level translation. The best match is chosen according to several factors, including a set of bilingual features. PBSMT engines by default provide four probability scores in phrase tables which are considered as the main set of bilingual features. Our goal is to enrich that set of features, as a better feature set should yield better translations. We propose new scores generated by a Convolutional Neural Network (CNN) which indicate the semantic relatedness of phrase pairs. We evaluate our model in different experimental settings with different language pairs. We observe significant improvements when the proposed features are incorporated into the PBSMT pipeline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the three anonymous reviewers for their valuable and constructive comments and the Irish Center for High-End Computing (www.ichec.ie) for providing computational infrastructures.
This research is supported by Science Foundation Ireland at ADAPT: Centre for Digital Content Platform Research (Grant 13\/RC\/2106).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"satria-tokunaga-2017-evaluation","url":"https:\/\/aclanthology.org\/W17-5008","title":"Evaluation of Automatically Generated Pronoun Reference Questions","abstract":"This study provides a detailed analysis of evaluation of English pronoun reference questions which are created automatically by machine. Pronoun reference questions are multiple choice questions that ask test takers to choose an antecedent of a target pronoun in a reading passage from four options. The evaluation was performed from two perspectives: the perspective of English teachers and that of English learners. Item analysis suggests that machine-generated questions achieve comparable quality with human-made questions. Correlation analysis revealed a strong correlation between the scores of machine-generated questions and that of human-made questions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hartung-frank-2011-assessing","url":"https:\/\/aclanthology.org\/W11-2506","title":"Assessing Interpretable, Attribute-related Meaning Representations for Adjective-Noun Phrases in a Similarity Prediction Task","abstract":"We present a distributional vector space model that incorporates Latent Dirichlet Allocation in order to capture the semantic relation holding between adjectives and nouns along interpretable dimensions of meaning: The meaning of adjective-noun phrases is characterized in terms of ontological attributes that are prominent in their compositional semantics. The model is evaluated in a similarity prediction task based on paired adjective-noun phrases from the Mitchell and Lapata (2010) benchmark data. Comparing our model against a high-dimensional latent word space, we observe qualitative differences that shed light on different aspects of similarity conveyed by both models and suggest integrating their complementary strengths.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"higashinaka-etal-2010-modeling","url":"https:\/\/aclanthology.org\/W10-4304","title":"Modeling User Satisfaction Transitions in Dialogues from Overall Ratings","abstract":"This paper proposes a novel approach for predicting user satisfaction transitions during a dialogue only from the ratings given to entire dialogues, with the aim of reducing the cost of creating reference ratings for utterances\/dialogue-acts that have been necessary in conventional approaches. In our approach, we first train hidden Markov models (HMMs) of dialogue-act sequences associated with each overall rating. Then, we combine such rating-related HMMs into a single HMM to decode a sequence of dialogue-acts into state sequences representing to which overall rating each dialogue-act is most related, which leads to our rating predictions.
Experimental results in two dialogue domains show that our approach can make reasonable predictions; it significantly outperforms a baseline and nears the upper bound of a supervised approach in some evaluation criteria. We also show that introducing states that represent dialogue-act sequences that occur commonly in all ratings into an HMM significantly improves prediction accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oinam-etal-2018-treebank","url":"https:\/\/aclanthology.org\/W18-4916","title":"A Treebank for the Healthcare Domain","abstract":"This paper presents a treebank for the healthcare domain developed at ezDI. The treebank is created from a wide array of clinical health record documents across hospitals. The data has been de-identified and annotated for constituent syntactic structure. The treebank contains a total of 52053 sentences that have been sampled for subdomains as well as linguistic variations. The paper outlines the sampling process followed to ensure a better domain representation in the corpus, the annotation process and challenges, and corpus statistics. The Penn Treebank tagset and guidelines were largely followed, but there were many syntactic contexts that warranted adaptation of the guidelines. The treebank created was used to retrain the Berkeley parser and the Stanford parser. These parsers were also trained with the GENIA treebank for comparative quality assessment. Our treebank yielded greater accuracy on both parsers. Berkeley parser performed better on our treebank with an average F1 measure of 91 across 5 folds. This was a significant jump from the out-of-the-box F1 score of 70 on Berkeley parser's default grammar.","label_nlp4sg":1,"task":["Healthcare"],"method":["Treebank"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We acknowledge the contribution of the linguists at JNU, New Delhi, namely, Srishti Singh, Arushi Uniyal, Sakshi Kalra and Azzam Obaid. We also acknowledge the help of Dr. Binni Shah and Disha Dave in understanding domain specific concepts and expressions. We would also like to thank Prof. Pushpak Bhattacharya and Prof. Girish Nath Jha for their advice.","year":2018,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iida-etal-2019-attention","url":"https:\/\/aclanthology.org\/P19-2030","title":"Attention over Heads: A Multi-Hop Attention for Neural Machine Translation","abstract":"In this paper, we propose a multi-hop attention for the Transformer. It refines the attention for an output symbol by integrating that of each head, and consists of two hops. The first hop attention is the scaled dot-product attention which is the same attention mechanism used in the original Transformer. The second hop attention is a combination of multi-layer perceptron (MLP) attention and head gate, which efficiently increases the complexity of the model by adding dependencies between heads. We demonstrate that the translation accuracy of the proposed multi-hop attention outperforms the baseline Transformer significantly, +0.85 BLEU point for the IWSLT-2017 German-to-English task and +2.58 BLEU point for the WMT-2017 German-to-English task.
We also find that the number of parameters required for a multi-hop attention is smaller than that for stacking another self-attention layer and the proposed model converges significantly faster than the original Transformer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-zhao-2012-using","url":"https:\/\/aclanthology.org\/C12-2131","title":"Using Deep Linguistic Features for Finding Deceptive Opinion Spam","abstract":"While most recent work has focused on instances of opinion spam which are manually identifiable or deceptive opinion spam which are written by paid writers separately, in this work we study both of these interesting topics and propose an effective framework which has good performance on both datasets. Based on the golden-standard opinion spam dataset, we propose a novel model which integrates some deep linguistic features derived from a syntactic dependency parsing tree to discriminate deceptive opinions from normal ones. On a background of multiple language tasks, our model is evaluated on both English (gold-standard) and Chinese (non-gold) datasets. The experimental results show that our model produces state-of-the-art results on both of the topics.","label_nlp4sg":1,"task":["Finding Deceptive Opinion Spam"],"method":["Deep Linguistic Features","syntactic dependency parsing tree"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"monson-etal-2004-unsupervised","url":"https:\/\/aclanthology.org\/W04-0107","title":"Unsupervised Induction of Natural Language Morphology Inflection Classes","abstract":"We propose a novel language-independent framework for inducing a collection of morphological inflection classes from a monolingual corpus of full form words. Our approach involves two main stages. In the first stage, we generate a large data structure of candidate inflection classes and their interrelationships. In the second stage, search and filtering techniques are applied to this data structure, to identify a select collection of \"true\" inflection classes of the language. We describe the basic methodology involved in both stages of our approach and present an evaluation of our baseline techniques applied to induction of major inflection classes of Spanish. The preliminary results on an initial training corpus already surpass an F1 of 0.5 against ideal Spanish inflectional morphology classes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was funded in part by NSF grant number IIS-0121631.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"levi-2018-connecting","url":"https:\/\/aclanthology.org\/W18-3010","title":"Connecting Supervised and Unsupervised Sentence Embeddings","abstract":"Representing sentences as numerical vectors while capturing their semantic context is an important and useful intermediate step in natural language processing.
Representations that are both general and discriminative can serve as a tool for tackling various NLP tasks. While common sentence representation methods are unsupervised in nature, recently, an approach for learning universal sentence representation in a supervised setting was presented in (Conneau et al., 2017). We argue that although promising results were obtained, an improvement can be reached by adding various unsupervised constraints that are motivated by auto-encoders and by language models. We show that by adding such constraints, superior sentence embeddings can be achieved. We compare our method with the original implementation and show improvements in several tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Shimi Salant and Ofir Press for their helpful comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2017-top","url":"https:\/\/aclanthology.org\/K17-1011","title":"Top-Rank Enhanced Listwise Optimization for Statistical Machine Translation","abstract":"Pairwise ranking methods are the basis of many widely used discriminative training approaches for structure prediction problems in natural language processing (NLP). Decomposing the problem of ranking hypotheses into pairwise comparisons enables simple and efficient solutions. However, neglecting the global ordering of the hypothesis list may hinder learning. We propose a listwise learning framework for structure prediction problems such as machine translation. Our framework directly models the entire translation list's ordering to learn parameters which may better fit the given listwise samples. Furthermore, we propose top-rank enhanced loss functions, which are more sensitive to ranking errors at higher positions. Experiments on a large-scale Chinese-English translation task show that both our listwise learning framework and top-rank enhanced listwise losses lead to significant improvements in translation quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their valuable comments. This work is supported by the National Science Foundation of China (No. 61672277, 61300158 and 61472183). Part of Huadong Chen's contribution was made while visiting University of Notre Dame. His visit was supported by the joint PhD program of China Scholarship Council.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"popa-stefanescu-2020-applying","url":"https:\/\/aclanthology.org\/2020.vardial-1.18","title":"Applying Multilingual and Monolingual Transformer-Based Models for Dialect Identification","abstract":"We study the ability of large fine-tuned transformer models to solve a binary classification task of dialect identification, with a special interest in comparing the performance of multilingual to monolingual ones. The corpus analyzed contains Romanian and Moldavian samples from the news domain, as well as tweets for assessing the performance. We find that the monolingual models are superior to the multilingual ones and the best results are obtained using an SVM ensemble of 5 different transformer-based models. 
We provide our experimental results and an analysis of the attention mechanisms of the best-performing individual classifiers to explain their decisions. The code we used was released under an open-source license.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hazem-etal-2020-hierarchical","url":"https:\/\/aclanthology.org\/2020.coling-main.549","title":"Hierarchical Text Segmentation for Medieval Manuscripts","abstract":"In this paper, we address the segmentation of books of hours, Latin devotional manuscripts of the late Middle Ages, that exhibit challenging issues: a complex hierarchical entangled structure, variable content, noisy transcriptions with no sentence markers, and strong correlations between sections for which topical information is no longer sufficient to draw segmentation boundaries. We show that the main state-of-the-art segmentation methods are either inefficient or inapplicable for books of hours and propose a bottom-up greedy approach that considerably enhances the segmentation results. We stress the importance of such hierarchical segmentation of books of hours for historians to explore their overarching differences underlying conception about Church.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of the HORAE project (Hours - Recognition, Analysis, Editions) and is supported by the French National Research Agency under grant ANR-17-CE38-0008.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martinez-etal-2007-melb","url":"https:\/\/aclanthology.org\/S07-1050","title":"MELB-MKB: Lexical Substitution system based on Relatives in Context","abstract":"In this paper we describe the MELB-MKB system, as entered in the SemEval-2007 lexical substitution task. The core of our system was the \"Relatives in Context\" unsupervised approach, which ranked the candidate substitutes by web-lookup of the word sequences built combining the target context and each substitute. Our system ranked third in the final evaluation, performing close to the top-ranked system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out with support from Australian Research Council grant no.
DP0663879.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"galarreta-etal-2017-corpus","url":"https:\/\/doi.org\/10.26615\/978-954-452-049-6_033","title":"Corpus Creation and Initial SMT Experiments between Spanish and Shipibo-konibo","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-ng-2016-chinese","url":"https:\/\/aclanthology.org\/P16-1074","title":"Chinese Zero Pronoun Resolution with Deep Neural Networks","abstract":"While unsupervised anaphoric zero pronoun (AZP) resolvers have recently been shown to rival their supervised counterparts in performance, it is relatively difficult to scale them up to reach the next level of performance due to the large amount of feature engineering efforts involved and their ineffectiveness in exploiting lexical features. To address these weaknesses, we propose a supervised approach to AZP resolution based on deep neural networks, taking advantage of their ability to learn useful task-specific representations and effectively exploit lexical features via word embeddings. Our approach achieves state-of-the-art performance when resolving the Chinese AZPs in the OntoNotes corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"suri-etal-1999-methodology","url":"https:\/\/aclanthology.org\/J99-2001","title":"A Methodology for Extending Focusing Frameworks","abstract":"We address the problem of how to develop and assess algorithms for tracking local focus and for proposing referents of pronouns. Previous focusing research has not adequately addressed the processing of complex sentences. We discuss issues involved in processing complex sentences and review a methodology used by other researchers to develop their focusing frameworks. We identify difficulties with that methodology and difficulties with using a corpus analysis to extend focusing frameworks to handle complex sentences. We introduce a new methodology for extending focusing frameworks, which involves two steps. In the first step, a set of systematically constructed texts are used to identify an extension of the focusing framework to handle a particular kind of complex sentence. In the second step, a corpus analysis is used to confirm the extension. We explain how our methodology overcomes the difficulties faced by other approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Jeff Lidz, John Hughes, and the anonymous reviewers for their many helpful comments on this research.
We thank our informants for their help and time in providing us with judgments.","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rayner-etal-1993-speech","url":"https:\/\/aclanthology.org\/H93-1042","title":"A Speech to Speech Translation System Built From Standard Components","abstract":"This paper describes a speech to speech translation system using standard components and a suite of generalizable customization techniques. The system currently translates air travel planning queries from English to Swedish. The modular architecture is designed to be easy to port to new domains and languages, and consists of a pipelined series of processing phases. The output of each phase consists of multiple hypotheses; statistical preference mechanisms, the data for which is derived from automatic processing of domain corpora, are used between each pair of phases to filter hypotheses. Linguistic knowledge is represented throughout the system in declarative form. We summarize the architectures of the component systems and the interfaces between them, and present initial performance results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kirschnick-etal-2014-freepal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/764_Paper.pdf","title":"Freepal: A Large Collection of Deep Lexico-Syntactic Patterns for Relation Extraction","abstract":"The increasing availability and maturity of both scalable computing architectures and deep syntactic parsers is opening up new possibilities for Relation Extraction (RE) on large corpora of natural language text. In this paper, we present FREEPAL, a resource designed to assist with the creation of relation extractors for more than 5,000 relations defined in the FREEBASE knowledge base (KB). The resource consists of over 10 million distinct lexico-syntactic patterns extracted from dependency trees, each of which is assigned to one or more FREEBASE relations with different confidence strengths. We generate the resource by executing a large-scale distant supervision approach on the CLUEWEB09 corpus to extract and parse over 260 million sentences labeled with FREEBASE entities and relations. We make FREEPAL freely available to the research community, and present a web demonstrator to the dataset, accessible from free-pal.appspot.com.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. Johannes Kirschnick and Holmer Hemsen received funding from the German Federal Ministry of Economics and Technology (BMWi) under grant agreement A01MD11018, 'A cloud-based Marketplace for Information and Analytics on the German Web' (MIA).
Alan Akbik received funding from the European Union's Seventh Framework Programme (FP7\/2007-2013) under grant agreement ICT-2009-4-1 270137 'Scalable Preservation Environments' (SCAPE).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rotman-reichart-2019-deep","url":"https:\/\/aclanthology.org\/Q19-1044","title":"Deep Contextualized Self-training for Low Resource Dependency Parsing","abstract":"Neural dependency parsing has proven very effective, achieving state-of-the-art results on numerous domains and languages. Unfortunately, it requires large amounts of labeled data, which is costly and laborious to create. In this paper we propose a self-training algorithm that alleviates this annotation bottleneck by training a parser on its own output. Our Deep Contextualized Self-training (DCST) algorithm utilizes representation models trained on sequence labeling tasks that are derived from the parser's output when applied to unlabeled data, and integrates these models with the base parser through a gating mechanism. We conduct experiments across multiple languages, both in low resource in-domain and in cross-domain setups, and demonstrate that DCST substantially outperforms traditional self-training as well as recent semi-supervised training methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group for their valuable feedback and advice. This research was partially funded by an ISF personal grant no. 1625\/18.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"inagaki-nakagawa-1992-abstraction","url":"https:\/\/aclanthology.org\/C92-3130","title":"An Abstraction Method Using a Semantic Engine Based on Language Information Structure","abstract":"This paper describes the framework for a new abstraction method that utilizes event-units written in sentences. Event-units are expressed in Language Information Structure (LIS) form and the projection of LIS from a sentence is performed by a semantic engine. ABEX (ABstraction EXtraction system) utilizes the LIS output of the semantic engine. ABEX can extract events from sentences and classify them. Since ABEX and the LIS form use only limited knowledge, the system need not construct or maintain a large amount of knowledge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cotterell-etal-2015-labeled","url":"https:\/\/aclanthology.org\/K15-1017","title":"Labeled Morphological Segmentation with Semi-Markov Models","abstract":"We present labeled morphological segmentation-an alternative view of morphological processing that unifies several tasks. We introduce a new hierarchy of morphotactic tagsets and CHIPMUNK, a discriminative morphological segmentation system that, contrary to previous work, explicitly models morphotactics. We show improved performance on three tasks for all six languages: (i) morphological segmentation, (ii) stemming and (iii) morphological tag classification.
For morphological segmentation our method shows absolute improvements of 2-6 points F1 over a strong baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Jason Eisner, Helmut Schmid, \u00d6zlem \u00c7etinoglu and the anonymous reviewers for their comments. This material is based upon work supported by a Fulbright fellowship awarded to the first author by the German-American Fulbright Commission and the National Science Foundation under Grant No. 1423276. The second author is a recipient of the Google Europe Fellowship in Natural Language Processing, and this research is supported by this Google Fellowship. The fourth author was partially supported by Deutsche Forschungsgemeinschaft (grant SCHU 2246\/10-1). This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644402 (HimL) and the DFG grant Models of Morphosyntax for Statistical Machine Translation.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"karanikolos-refanidis-2019-encoding","url":"https:\/\/aclanthology.org\/W19-7420","title":"Encoding Position Improves Recurrent Neural Text Summarizers","abstract":"Modern text summarizers are big neural networks (recurrent, convolutional, or transformers) trained end-to-end under an encoder-decoder framework. These networks equipped with an attention mechanism, that maintains a memory of their source hidden states, are able to generalize well to long text sequences. In this paper, we explore how the different modules involved in an encoder-decoder structure affect the produced summary quality as measured by ROUGE score in the widely used CNN\/Daily Mail and Gigaword summarization datasets. We find that encoding the position of the text tokens before feeding them to a recurrent text summarizer gives a significant, in terms of ROUGE, gain to its performance on the former but not the latter dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by the University of Macedonia Research Committee as part of the \"Principal Research 2019\" funding program. We thank the anonymous reviewers for helpful comments. The Titan Xp used for this work was donated by the NVIDIA Corporation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cheng-zhulyn-2012-system","url":"https:\/\/aclanthology.org\/C12-1036","title":"A System for Multilingual Sentiment Learning On Large Data Sets","abstract":"Classifying documents according to the sentiment they convey (whether positive or negative) is an important problem in computational linguistics. There has not been much work done in this area on general techniques that can be applied effectively to multiple languages, nor have very large data sets been used in empirical studies of sentiment classifiers. We present an empirical study of the effectiveness of several sentiment classification algorithms when applied to nine languages (including Germanic, Romance, and East Asian languages). The algorithms are implemented as part of a system that can be applied to multilingual data.
We trained and tested the system on a data set that is substantially larger than that typically encountered in the literature. We also consider a generalization of the n-gram model and a variant that reduces memory consumption, and evaluate their effectiveness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sharma-mittal-2015-dependency","url":"https:\/\/aclanthology.org\/W15-5933","title":"Dependency Extraction for Knowledge-based Domain Classification","abstract":"Question classification is an important part of Question Answering. It refers to classifying a given question into a category. This paper presents a learning-based question classifier. The previous works in this field have used the UIUC questions dataset for the classification purpose. In contrast to this, we use the Web-Questions dataset to build the classifier. The dataset consists of questions with the links to the Freebase pages on which the answers will be found. To extract the exact answer of a question from a Freebase page, it is very essential to know the domain of the answer as it narrows down the number of possible answer candidates. The proposed classifier will be very helpful in extracting answers from the Freebase. The classifier uses the questions' features to classify a question into the domain of the answer, given the link to the freebase page on which the answer can be found.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"singh-li-2021-exploiting","url":"https:\/\/aclanthology.org\/2021.woah-1.1","title":"Exploiting Auxiliary Data for Offensive Language Detection with Bidirectional Transformers","abstract":"Offensive language detection (OLD) has received increasing attention due to its societal impact. Recent work shows that bidirectional transformer based methods obtain impressive performance on OLD. However, such methods usually rely on large-scale well-labeled OLD datasets for model training. To address the issue of data\/label scarcity in OLD, in this paper, we propose a simple yet effective domain adaptation approach to train bidirectional transformers. Our approach introduces domain adaptation (DA) training procedures to ALBERT, such that it can effectively exploit auxiliary data from source domains to improve the OLD performance in a target domain. Experimental results on benchmark datasets show that our approach, ALBERT (DA), obtains the state-of-the-art performance in most cases. Particularly, our approach significantly benefits underrepresented and under-performing classes, with a significant improvement over ALBERT.","label_nlp4sg":1,"task":["Offensive Language Detection"],"method":["Auxiliary Data","ALBERT"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their insightful comments. This research is supported in part by the U.S.
Army Research Office Award under Grant Number W911NF-21-1-0109.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"boitet-etal-1982-implementation","url":"https:\/\/aclanthology.org\/C82-1004","title":"Implementation and Conversational Environment of ARIANE 78.4, An Integrated System for Automated Translation and Human Revision","abstract":"ARIANE-78.4 is a computer system designed to offer an adequate environment for constructing machine translation programs, for running them, and for (humanly) revising the rough translations produced by the computer. ARIANE-78 has been operational at GETA for more than 4 years now. This paper refers to version 4. It has been used for a number of applications (Russian and Japanese, English to French and Malay, Portuguese to English) and has constantly been amended to meet the needs of the users. Parts of this system have been presented before [2, 3, 7, 8], but its whole has only been described in internal technical documents.\nThis paper tries to give such a global presentation. Given the space constraint, it centers on the conversational environment under which users may manipulate the data bases of texts (with their partial transforms, their translations and their revision) and of linguistic programs (e.g. grammars and dictionaries), test and debug their programs, run complete translations and revise them. Part I gives the necessary introduction to the system and its components. Each component (e.g. morphological analysis, structural analysis, etc.) is written in one of the four languages specialized for linguistic programming (LSLP) supported by the system (ATEF, ROBRA, TRANSF and SYGMOR). Those languages have been presented elsewhere.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fayyaz-etal-2021-models","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.29","title":"Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations","abstract":"Most of the recent works on probing representations have focused on BERT, with the presumption that the findings might be similar to the other models. In this work, we extend the probing studies to two other models in the family, namely ELECTRA and XLNet, showing that variations in the pre-training objectives or architectural choices can result in different behaviors in encoding linguistic information in the representations. Most notably, we observe that ELECTRA tends to encode linguistic knowledge in the deeper layers, whereas XLNet instead concentrates that in the earlier layers. Also, the former model undergoes a slight change during fine-tuning, whereas the latter experiences significant adjustments. Moreover, we show that drawing conclusions based on the weight mixing evaluation strategy-which is widely used in the context of layer-wise probing-can be misleading given the norm disparity of the representations across different layers.
Instead, we adopt an alternative information-theoretic probing with minimum description length, which has recently been proven to provide more reliable and informative results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Our work is in part supported by Tehran Institute for Advanced Studies (TeIAS), Khatam University.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"varshney-etal-2022-unsupervised","url":"https:\/\/aclanthology.org\/2022.findings-acl.159","title":"Unsupervised Natural Language Inference Using PHL Triplet Generation","abstract":"Transformer-based models achieve impressive performance on numerous Natural Language Inference (NLI) benchmarks when trained on respective training datasets. However, in certain cases, training samples may not be available or collecting them could be time-consuming and resource-intensive. In this work, we address the above challenge and present an explorative study on unsupervised NLI, a paradigm in which no human-annotated training samples are available. We investigate it under three settings: PH, P, and NPH that differ in the extent of unlabeled data available for learning. As a solution, we propose a procedural data generation approach that leverages a set of sentence transformations to collect PHL (Premise, Hypothesis, Label) triplets for training NLI models, bypassing the need for human-annotated training data. Comprehensive experiments with several NLI datasets show that the proposed approach results in accuracies of up to 66.75%, 65.9%, 65.39% in PH, P, and NPH settings respectively, outperforming all existing unsupervised baselines. Furthermore, finetuning our model with as little as \u223c0.1% of the human-annotated training dataset (500 instances) leads to 12.2% higher accuracy than the model trained from scratch on the same 500 instances. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful feedback. This research was supported by DARPA SAIL-ON and DARPA CHESS programs. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hodosh-etal-2010-cross","url":"https:\/\/aclanthology.org\/W10-2920","title":"Cross-Caption Coreference Resolution for Automatic Image Understanding","abstract":"Recent work in computer vision has aimed to associate image regions with keywords describing the depicted entities, but actual image 'understanding' would also require identifying their attributes, relations and activities. Since this information cannot be conveyed by simple keywords, we have collected a corpus of \"action\" photos each associated with five descriptive captions. In order to obtain a consistent semantic representation for each image, we need to first identify which NPs refer to the same entities. We present three hierarchical Bayesian models for cross-caption coreference resolution.
We have also created a simple ontology of entity classes that appear in images and evaluate how well these can be recovered.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by NSF grant IIS 08-03603 INT2-Medium: Understanding the Meaning of Images. We are grateful for David Forsyth and Dan Roth's advice, and for Alex Sorokins support with MTurk.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chuang-etal-2015-topiccheck","url":"https:\/\/aclanthology.org\/N15-1018","title":"TopicCheck: Interactive Alignment for Assessing Topic Model Stability","abstract":"Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis. In response, we introduce TopicCheck, an interactive tool for assessing topic model stability. Our contributions are threefold. First, from established guidelines on reproducible content analysis, we distill a set of design requirements on how to computationally assess the stability of an automated coding process. Second, we devise an interactive alignment algorithm for matching latent topics from multiple models, and enable sensitivity evaluation across a large number of models. Finally, we demonstrate that our tool enables social scientists to gain novel insights into three active research questions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by a grant from the Brown Institute for Media Innovation.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jones-etal-2020-robust","url":"https:\/\/aclanthology.org\/2020.acl-main.245","title":"Robust Encodings: A Framework for Combating Adversarial Typos","abstract":"Despite excellent performance on many tasks, NLP systems are easily fooled by small adversarial perturbations of inputs. Existing procedures to defend against such perturbations are either (i) heuristic in nature and susceptible to stronger attacks or (ii) provide guaranteed robustness to worst-case attacks, but are incompatible with state-of-the-art models like BERT. In this work, we introduce robust encodings (RobEn): a simple framework that confers guaranteed robustness, without making compromises on model architecture. The core component of RobEn is an encoding function, which maps sentences to a smaller, discrete space of encodings. Systems using these encodings as a bottleneck confer guaranteed robustness with standard training, and the same encodings can be used across multiple tasks. We identify two desiderata to construct robust encoding functions: perturbations of a sentence should map to a small set of encodings (stability), and models using encodings should still perform well (fidelity). We instantiate RobEn to defend against a large family of adversarial typos. 
Across six tasks from GLUE, our instantiation of RobEn paired with BERT achieves an average robust accuracy of 71.3% against all adversarial typos in the family considered, while previous work using a typo-corrector achieves only 35.3% accuracy against a simple greedy attack.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by NSF Award Grant no. 1805310 and the DARPA ASED program under FA8650-18-2-7882. A.R. is supported by a Google PhD Fellowship and the Open Philanthropy Project AI Fellowship. We thank Pang Wei Koh, Reid Pryzant, Ethan Chi, Daniel Kang, and the anonymous reviewers for their helpful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pouliquen-etal-2012-statistical","url":"https:\/\/aclanthology.org\/2012.eamt-1.4","title":"Statistical Machine Translation prototype using UN parallel documents","abstract":"This paper presents a machine translation prototype developed with the United Nations (UN) corpus for automatic translation of UN documents from English to Spanish. The tool is based on open source Moses technology and has been developed by the World Intellectual Property Organization (WIPO). The two organizations pooled resources to create a model trained on an extensive corpus of manually translated UN documents. The performance of the SMT system as a translation assistant was shown to be very satisfactory (using both automatic and human evaluation). The use of the system in production within the UN is now under discussion.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the following managers and staff members of the Department for General Assembly and Conference Management of the United Nations Headquarters in New York for supporting the project: Mr. Shaaban M. Shaaban, Under-Secretary-General, Mr. Franz Baumann, Assistant Secretary-General, the members of the Departmental Management Group, the Information and Communications Technology Committee and the Technology Advisory Group of the Documentation Division, Ms. Maria Nobrega, Chief of the Spanish Translation Service, Ms. Ana Larrea, Training Officer of the Spanish Translation Service, and very specially, Igor Shpinov, who supported this project enthusiastically since its very beginning and facilitated its authorization. The project would not be possible without the dedicated collaboration and open-mindedness of Ms. Maria Barros, Ms. Rosario Fernandez and Ms. Carla Raffo, who served as human evaluators of the system and the staff of the Spanish Translation Service who provided constant feedback. Thank you to Laurent Gottardo for his idea in presenting bars per year in the concordancer. Special thanks to Paul Halfpenny for his valuable proof-reading.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rojas-aikawa-2006-predicting","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/472_pdf.pdf","title":"Predicting MT Quality as a Function of the Source Language","abstract":"This paper describes one phase of a large-scale machine translation (MT) quality assurance project. 
We explore a novel approach to discriminating MT-unsuitable source sentences by predicting the expected quality of the output. The resources required include a set of source\/MT sentence pairs, human judgments on the output, a source parser, and an MT system. We extract a number of syntactic, semantic, and lexical features from the source sentences only and train a classifier that we call the \"","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"viegas-etal-1999-using","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.77","title":"Using computational semantics for Chinese translations","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cristea-etal-2002-ar","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/205.pdf","title":"AR-Engine - a framework for unrestricted co-reference resolution","abstract":"The paper presents a framework that allows the design, realisation and validation of different anaphora resolution models on real texts. The type of processing implemented by the engine is an incremental one, simulating the reading of texts by humans. Advanced behaviour like postponed resolution and accumulation of values for features of the discourse entities during reading is implemented. Four models are defined, plugged in the framework and tested on a small corpus. The approach is open to any type of anaphora resolution. However, the models reported deal only with co-reference anaphora, independent of the type of the anaphor. It is shown that the setting on of more and more features generally results in an improvement of the analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments. The authors would like to thank Constantin Orasan for providing us with the FDG analysis and NP extraction and Vlad Ciubotariu for the program that helped to perform the manual co-reference annotation.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zobel-2007-measures","url":"https:\/\/aclanthology.org\/U07-1003","title":"Measures of Measurements: Robust Evaluation of Search Systems","abstract":"A good search system is one that helps a user to find useful documents. When building a new system, we hope, or hypothesise, that it will be more effective than existing alternatives. We apply a measure, which is often a drastic simplification, to establish whether the system is effective. Thus the ability of the system to help users and the measurement of this ability are only weakly connected, by assumptions that the researcher may not even be aware of. But how robust are these assumptions? If they are poor, is the research invalid? Such concerns apply not just to search, but to many other data-processing tasks. 
In this talk I introduce some of the recent developments in evaluation of search systems, and use these developments to examine some of the assumptions that underlie much of the research in this field.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"paroubek-etal-2010-second","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/430_Paper.pdf","title":"The Second Evaluation Campaign of PASSAGE on Parsing of French","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brown-1996-example","url":"https:\/\/aclanthology.org\/C96-1030","title":"Example-Based Machine Translation in the Pangloss System","abstract":"The Pangloss Example-Based Machine Translation engine (PanEBMT) is a translation system requiring essentially no knowledge of the structure of a language, merely a large parallel corpus of example sentences and a bilingual dictionary. Input texts are segmented into sequences of words occurring in the corpus, for which translations are determined by subsentential alignment of the sentence pairs containing those sequences. These partial translations are then combined with the results of other translation engines to form the final translation produced by the Pangloss system. In an internal evaluation, PanEBMT achieved 70.2% coverage of unrestricted Spanish news-wire text, despite a simplistic subsentential alignment algorithm, a suboptimal dictionary, and a corpus from a different domain than the evaluation texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koeling-2000-chunking","url":"https:\/\/aclanthology.org\/W00-0729","title":"Chunking with Maximum Entropy Models","abstract":"In this paper I discuss a first attempt to create a text chunker using a Maximum Entropy model. The first experiments, implementing classifiers that tag every word in a sentence with a phrase tag using very local lexical information, part-of-speech tags and phrase tags of surrounding words, give encouraging results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koit-oim-2000-dialogue","url":"https:\/\/aclanthology.org\/W00-1012","title":"Dialogue Management in the Agreement Negotiation Process: A Model that Involves Natural Reasoning","abstract":"In the paper we describe an approach to dialogue management in the agreement negotiation where one of the central roles is attributed to the model of natural human reasoning. The reasoning model consists of the model of human motivational sphere, and of reasoning algorithms. The reasoning model is interacting with the model of communication process. 
The latter is considered as rational activity where the central role is played by the concepts of communicative strategies and tactics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by Science Foundation (grant No 4467).","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"akiba-etal-2002-using","url":"https:\/\/aclanthology.org\/C02-1076","title":"Using Language and Translation Models to Select the Best among Outputs from Multiple MT Systems","abstract":"This paper addresses the problem of automatically selecting the best among outputs from multiple machine translation (MT) systems. Existing approaches select the output assigned the highest score according to a target language model. In some cases, the existing approaches do not work well. This paper proposes two methods to improve performance. The first method is based on a multiple comparison test and checks whether a score from language and translation models is significantly higher than the others. The second method is based on probability that a translation is not inferior to the others, which is predicted from the above scores. Experimental results show that the proposed methods achieve an improvement of 2 to 6% in performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bunt-etal-2012-using","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/1107_Paper.pdf","title":"Using DiAML and ANVIL for multimodal dialogue annotations","abstract":"This paper shows how interoperable annotations of multimodal dialogue, which apply the annotation scheme and the markup language (DiAML, Dialogue Act Markup Language) defined in ISO standard 24617-2, can conveniently be obtained using the newly implemented facility in the ANVIL annotation tool to produce XML-based output directly in the DiAML format. ANVIL offers the use of multiple user-defined 'tiers' for annotating various kinds of information. This is shown to be convenient not only for multimodal information but also for dialogue act annotation according to ISO standard 24617-2 because of the latter's multidimensionality: functional dialogue segments are viewed as expressing one or more dialogue acts, and every dialogue act belongs to one of a number of dimensions of communication, defined in the standard, for each of which a different ANVIL tier can conveniently be used. 
Annotations made in the multi-tier interface can be exported in the ISO 24617-2 format, thus supporting the creation of interoperable annotated corpora of multimodal dialogue.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-2014-keyword","url":"https:\/\/aclanthology.org\/Y14-1062","title":"A Keyword-based Monolingual Sentence Aligner in Text Simplification","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"michel-neubig-2018-extreme","url":"https:\/\/aclanthology.org\/P18-2050","title":"Extreme Adaptation for Personalized Neural Machine Translation","abstract":"Every person speaks or writes their own flavor of their native language, influenced by a number of factors: the content they tend to talk about, their gender, their social status, or their geographical origin. When attempting to perform Machine Translation (MT), these variations have a significant effect on how the system should perform translation, but this is not captured well by standard one-size-fits-all models. In this paper, we propose a simple and parameter-efficient adaptation technique that only requires adapting the bias of the output softmax to each particular user of the MT system, either directly or through a factored approximation. Experiments on TED talks in three languages demonstrate improvements in translation accuracy, and better reflection of speaker traits in the target text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors give their thanks to the anonymous reviewers for their useful feedback which helped make this paper what it is, as well as the members of Neulab who helped proofread this paper and provided constructive criticism. This work was supported by a Google Faculty Research Award 2016 on Machine Translation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vanroy-etal-2020-lt3","url":"https:\/\/aclanthology.org\/2020.semeval-1.135","title":"LT3 at SemEval-2020 Task 7: Comparing Feature-Based and Transformer-Based Approaches to Detect Funny Headlines","abstract":"This paper presents two different systems for the SemEval shared task 7 on Assessing Humor in Edited News Headlines, sub-task 1, where the aim was to estimate the intensity of humor generated in edited headlines. Our first system is a feature-based machine learning system that combines different types of information (e.g. word embeddings, string similarity, part-of-speech tags, perplexity scores, named entity recognition) in a Nu Support Vector Regressor (NuSVR). The second system is a deep learning-based approach that uses the pre-trained language model RoBERTa to learn latent features in the news headlines that are useful to predict the funniness of each headline. 
The latter system was also our final submission to the competition and is ranked seventh among the 49 participating teams, with a root-mean-square error (RMSE) of 0.5253.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poesio-etal-2004-learning","url":"https:\/\/aclanthology.org\/P04-1019","title":"Learning to Resolve Bridging References","abstract":"We use machine learning techniques to find the best combination of local focus and lexical distance features for identifying the anchor of mereological bridging references. We find that using first mention, utterance distance, and lexical distance computed using either Google or WordNet results in an accuracy significantly higher than obtained in previous experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The creation of the GNOME corpus was supported by the EPSRC project GNOME, GR\/L51126\/01.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koeva-2010-lexicon","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/705_Paper.pdf","title":"Lexicon and Grammar in Bulgarian FrameNet","abstract":"In this paper, we report on our attempt at assigning semantic information from the English FrameNet to lexical units in the Bulgarian valency lexicon. The paper briefly presents the model underlying the Bulgarian FrameNet (BulFrameNet): each lexical entry consists of a lexical unit; a semantic frame from the English FrameNet, expressing abstract semantic structure; a grammatical class, defining the inflexional paradigm; a valency frame describing (some of) the syntactic and lexical-semantic combinatory properties (an optional component); and (semantically and syntactically) annotated examples. The target is a corpus-based lexicon giving an exhaustive account of the semantic and syntactic combinatory properties of an extensive number of Bulgarian lexical units. The Bulgarian FrameNet database so far contains unique descriptions of over 3 000 Bulgarian lexical units, approx. one tenth of them aligned with appropriate semantic frames, supports XML import and export and will be accessible, i.e., displayed and queried via the web.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Bulgarian FrameNet is a research project funded by the Scientific Research Fund of the Bulgarian Ministry of Education, Youth and Science (Grant No. \u0414\u0422\u041a 02 \/ 53).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shigeto-etal-2020-video","url":"https:\/\/aclanthology.org\/2020.lrec-1.574","title":"Video Caption Dataset for Describing Human Actions in Japanese","abstract":"In recent years, automatic video caption generation has attracted considerable attention. This paper focuses on the generation of Japanese captions for describing human actions. While most currently available video caption datasets have been constructed for English, there is no equivalent Japanese dataset. 
To address this, we constructed a large-scale Japanese video caption dataset consisting of 79,822 videos and 399,233 captions. Each caption in our dataset describes a video in the form of \"who does what and where.\" To describe human actions, it is important to identify the details of a person, place, and action. Indeed, when we describe human actions, we usually mention the scene, person, and action. In our experiments, we evaluated two caption generation methods to obtain benchmark results. Further, we investigated whether those generation methods could specify \"who does what and where.\"","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank anonymous reviewers for their valuable comments and suggestions. This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"utsuro-matsumoto-1997-learning","url":"https:\/\/aclanthology.org\/A97-1053","title":"Learning Probabilistic Subcategorization Preference by Identifying Case Dependencies and Optimal Noun Class Generalization Level","abstract":"This paper proposes a novel method of learning probabilistic subcategorization preference. In the method, for the purpose of coping with the ambiguities of case dependencies and noun class generalization of argument\/adjunct nouns, we introduce a data structure which represents a tuple of independent partial subcategorization frames. Each collocation of a verb and argument\/adjunct nouns is assumed to be generated from one of the possible tuples of independent partial subcategorization frames. Parameters of subcategorization preference are then estimated so as to maximize the subcategorization preference function for each collocation of a verb and argument\/adjunct nouns in the training corpus. We also describe the results of the experiments on learning probabilistic subcategorization preference from the EDR Japanese bracketed corpus, as well as those on evaluating the performance of subcategorization preference.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"banko-etal-2000-headline","url":"https:\/\/aclanthology.org\/P00-1041","title":"Headline Generation Based on Statistical Translation","abstract":"Extractive summarization techniques cannot generate document summaries shorter than a single sentence, something that is often required. An ideal summarization system would understand each document and generate an appropriate summary directly from the results of that understanding. A more practical approach to this problem results in the use of an approximation: viewing summarization as a problem analogous to statistical machine translation. The issue then becomes one of generating a target document in a more concise language from a source document in a more verbose language. 
This paper presents results on experiments using this approach, in which statistical models of the term selection and term ordering are jointly applied to produce summaries in a style learned from a training corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"papay-etal-2020-dissecting","url":"https:\/\/aclanthology.org\/2020.emnlp-main.396","title":"Dissecting Span Identification Tasks with Performance Prediction","abstract":"Span identification (in short, span ID) tasks such as chunking, NER, or code-switching detection, ask models to identify and classify relevant spans in a text. Despite being a staple of NLP, and sharing a common structure, there is little insight on how these tasks' properties influence their difficulty, and thus little guidance on what model families work well on span ID tasks, and why. We analyze span ID tasks via performance prediction, estimating how well neural architectures do on different tasks. Our contributions are: (a) we identify key properties of span ID tasks that can inform performance prediction; (b) we carry out a large-scale experiment on English data, building a model to predict performance for unseen span ID tasks that can support architecture choices; (c), we investigate the parameters of the meta model, yielding new insights on how model and task properties interact to affect span ID performance. We find, e.g., that span frequency is especially important for LSTMs, and that CRFs help when spans are infrequent and boundaries non-distinctive.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by IBM Research AI through the IBM AI Horizons Network. We also acknowledge funding from Deutsche Forschungsgemeinschaft (project PA 1956\/4). We thank Laura Ana Maria Oberl\u00e4nder and Heike Adel for fruitful discussions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cassidy-2013-interoperable","url":"https:\/\/aclanthology.org\/W13-0504","title":"Interoperable Annotation in the Australian National Corpus","abstract":"The Australian National Corpus (AusNC) provides a technical infrastructure for collecting and publishing language resources representing Australian language use. As part of the project we have ingested a wide range of resource types into the system, bringing together the different meta-data and annotations into a single interoperable database. 
This paper describes the initial collections in AusNC and the procedures used to parse a variety of data types into a single unified annotation store.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"winnemoller-2004-constructing","url":"https:\/\/aclanthology.org\/W04-0903","title":"Constructing text sense representations","abstract":"In this paper we present a novel approach to map textual entities such as words, phrases, sentences, paragraphs or arbitrary text fragments onto artificial structures which we call \"Text Sense Representation Trees\" (TSR trees). These TSR trees represent an abstract notion of the meaning of the respective text, subjective to an abstract \"common\" understanding within the World Wide Web. TSR Trees can be used to support text and language processing systems such as text categorizers, classifiers, automatic summarizers and applications of the Semantic Web. We will explain how to construct the TSR tree structures and how to use them properly; furthermore we describe some preliminary evaluation results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"traum-habash-2000-generation","url":"https:\/\/aclanthology.org\/W00-0207","title":"Generation from Lexical Conceptual Structures","abstract":"This paper describes a system for generating natural language sentences from an interlingual representation, Lexical Conceptual Structure (LCS). This system has been developed as part of a Chinese-English Machine Translation system; however, it promises to be useful for many other MT language pairs. The generation system has also been used in Cross-Language information retrieval research (Levow et al., 2000).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the US Department of Defense through contract MDA904-96-I:t-0738. The Nitrogen system used in the realization process was provided by USC\/ISI, we would like to thank Keven Knight and Irene Langkilde for help and advice in using it. The adjective classifications described in Section 3 were devised by Carol Van Ess-Dykema. David Clark and Noah Smith worked on previous versions of the system, and we are indebted to some of their ideas for the current implementation. We would also like to thank the CLIP group at Uni-","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chu-etal-2018-joint","url":"https:\/\/aclanthology.org\/C18-1045","title":"Joint Modeling of Structure Identification and Nuclearity Recognition in Macro Chinese Discourse Treebank","abstract":"Discourse parsing is a challenging task and plays a critical role in discourse analysis. This paper focuses on macro-level discourse structure analysis, which has been less studied in previous research. We explore a macro discourse structure presentation schema to present the macro level discourse structure, and propose a corresponding corpus, named Macro Chinese Discourse Treebank. 
On these bases, we concentrate on two tasks of macro discourse structure analysis, including structure identification and nuclearity recognition. In order to reduce the error transmission between the associated tasks, we adopt a joint model of the two tasks, and an Integer Linear Programming approach is proposed to achieve global optimization with various kinds of constraints.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful for the help of Jingjing Wang for his initial discussion. We thank our anonymous reviewers for their constructive comments, which helped to improve the paper. This work is supported by the National Natural Science Foundation of China (61773276, 61751206, 61673290) and Jiangsu Provincial Science and Technology Plan (No. BK20151222).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhou-etal-2020-sentix","url":"https:\/\/aclanthology.org\/2020.coling-main.49","title":"SentiX: A Sentiment-Aware Pre-Trained Model for Cross-Domain Sentiment Analysis","abstract":"Pre-trained language models have been widely applied to cross-domain NLP tasks like sentiment analysis, achieving state-of-the-art performance. However, due to the variety of users' emotional expressions across domains, fine-tuning the pre-trained models on the source domain tends to overfit, leading to inferior results on the target domain. In this paper, we pre-train a sentiment-aware language model (SENTIX) via domain-invariant sentiment knowledge from large-scale review datasets, and utilize it for the cross-domain sentiment analysis task without fine-tuning. We propose several pre-training tasks based on existing lexicons and annotations at both token and sentence levels, such as emoticons, sentiment words, and ratings, without human interference. A series of experiments are conducted and the results indicate the great advantages of our model. We obtain new state-of-the-art results in all the cross-domain sentiment analysis tasks, and our proposed SENTIX can be trained with only 1% samples (18 samples) and it achieves better performance than BERT with 90% samples.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank the reviewers for their helpful comments and suggestions. This work was supported by National Key R&D Program of China (No. 2018AAA0100503&2018AAA0100500), and by the Science and Technology Commission of Shanghai Municipality (19511120200), Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China. The computation is performed in ECNU Multifunctional Platform for Innovation(001). The corresponding authors are Yuanbin Wu and Liang He.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobus-etal-2008-normalizing","url":"https:\/\/aclanthology.org\/C08-1056","title":"Normalizing SMS: are Two Metaphors Better than One?","abstract":"Electronic written texts used in computer-mediated interactions (e-mails, blogs, chats, etc.) present major deviations from the norm of the language. 
This paper presents a comparative study of systems aiming at normalizing the orthography of French SMS messages: after discussing the linguistic peculiarities of these messages, and possible approaches to their automatic normalization, we present, evaluate and contrast two systems, one drawing inspiration from the Machine Translation task; the other using techniques that are commonly used in automatic speech recognition devices. Combining both approaches, our best normalization system achieves about 11% Word Error Rate on a test set of about 3000 unseen messages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank E. Guimier de Neef for providing us with one of the databases and other useful resources. Many thanks to our anonymous reviewers for helpful comments.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tesprasit-etal-2003-context","url":"https:\/\/aclanthology.org\/N03-2035","title":"A Context-Sensitive Homograph Disambiguation in Thai Text-to-Speech Synthesis","abstract":"Homograph ambiguity is an original issue in Text-to-Speech (TTS). To disambiguate homographs, several efficient approaches have been proposed such as part-of-speech (POS) n-gram, Bayesian classifier, decision tree, and Bayesian-hybrid approaches. These methods need words or\/and POS tags surrounding the question homographs in disambiguation. Some languages such as Thai, Chinese, and Japanese have no word-boundary delimiter. Therefore, before solving homograph ambiguity, we need to identify word boundaries. In this paper, we propose a unique framework that solves both word segmentation and homograph ambiguity problems altogether. Our model employs both local and long-distance contexts, which are automatically extracted by a machine learning technique called Winnow.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rahimtoroghi-etal-2016-learning","url":"https:\/\/aclanthology.org\/W16-3644","title":"Learning Fine-Grained Knowledge about Contingent Relations between Everyday Events","abstract":"Much of the user-generated content on social media is provided by ordinary people telling stories about their daily lives. We develop and test a novel method for learning fine-grained common-sense knowledge from these stories about contingent (causal and conditional) relationships between everyday events. This type of knowledge is useful for text and story understanding, information extraction, question answering, and text summarization. We test and compare different methods for learning contingency relations, and compare what is learned from topic-sorted story collections vs. general-domain stories. Our experiments show that using topic-specific datasets enables learning finer-grained knowledge about events and results in significant improvement over the baselines. 
An evaluation on Amazon Mechanical Turk shows 82% of the relations between events that we learn from topic-sorted stories are judged as contingent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"amaya-benedi-2001-improvement","url":"https:\/\/aclanthology.org\/P01-1003","title":"Improvement of a Whole Sentence Maximum Entropy Language Model Using Grammatical Features","abstract":"In this paper, we propose adding long-term grammatical information in a Whole Sentence Maximum Entropy Language Model (WSME) in order to improve the performance of the model. The grammatical information was added to the WSME model as features and was obtained from a Stochastic Context-Free grammar. Finally, experiments using a part of the Penn Treebank corpus were carried out and significant improvements were achieved.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"budhkar-etal-2019-generative","url":"https:\/\/aclanthology.org\/W19-4303","title":"Generative Adversarial Networks for Text Using Word2vec Intermediaries","abstract":"Generative adversarial networks (GANs) have shown considerable success, especially in the realistic generation of images. In this work, we apply similar techniques for the generation of text. We propose a novel approach to handle the discrete nature of text, during training, using word embeddings. Our method is agnostic to vocabulary size and achieves competitive results relative to methods with various discrete gradient estimators.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This study was partially funded by the Vector Institute for Artificial Intelligence. Rudzicz is an Inaugural CIFAR Chair in artificial intelligence.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goba-vasiljevs-2007-development","url":"https:\/\/aclanthology.org\/W07-2411","title":"Development of Text-To-Speech system for Latvian","abstract":"This paper describes the development of the first text-to-speech (TTS) synthesizer for the Latvian language. It provides an overview of the project background and describes the general approach, the choices and particular implementation aspects of the principal TTS components: NLP, prosody and waveform generation. A novelty for waveform synthesis is the combination of corpus-based unit selection methods with traditional diphone synthesis. 
We conclude that the proposed combination of rather simple language models and synthesis methods yields a cost-effective TTS synthesizer of adequate quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jehl-riezler-2018-document","url":"https:\/\/aclanthology.org\/W18-1802","title":"Document-Level Information as Side Constraints for Improved Neural Patent Translation","abstract":"We investigate the usefulness of document information as side constraints for machine translation. We adapt two approaches to encoding this information as features for neural patent translation: As special tokens which are attached to the source sentence, and as tags which are attached to the source words. We found that sentence-attached features produced the same or better results as word-attached features. Both approaches produced significant gains of over 1% BLEU over the baseline on a German-English translation task, while sentence-attached features also produced significant gains of 0.7% BLEU on a Japanese-English task. We also describe a method to encode document information as additional phrase features for phrase-based translation, but did not find any improvements.","label_nlp4sg":1,"task":["Neural Patent Translation"],"method":["Document - Level Information","sentence - attached features"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"This work was supported by DFG Research Grant RI 2221\/1-2 \"Weakly Supervised Learning of Cross-Lingual Systems\". We thank the anonymous reviewers for their insightful comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-etal-2021-multimodal","url":"https:\/\/aclanthology.org\/2021.findings-acl.226","title":"Multimodal Fusion with Co-Attention Networks for Fake News Detection","abstract":"Fake news with textual and visual contents has a better story-telling ability than text-only contents, and can be spread quickly with social media. People can be easily deceived by such fake news, and traditional expert identification is labor-intensive. Therefore, automatic detection of multimodal fake news has become a new hot-spot issue. A shortcoming of existing approaches is their inability to fuse multimodality features effectively. They simply concatenate unimodal features without considering inter-modality relations. Inspired by the way people read news with image and text, we propose a novel Multimodal Co-Attention Networks (MCAN) to better fuse textual and visual features for fake news detection. 
Extensive experiments conducted on two real-world datasets demonstrate that MCAN can learn inter-dependencies among multimodal features and outperforms state-of-the-art methods.","label_nlp4sg":1,"task":["Fake News Detection"],"method":["Co - Attention Networks"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research was supported by National Research and Development Program of China (No.2017YFB1010004).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"garain-etal-2012-leveraging","url":"https:\/\/aclanthology.org\/C12-2034","title":"Leveraging Statistical Transliteration for Dictionary-Based English-Bengali CLIR of OCR'd Text","abstract":"This paper describes experiments with transliteration of out-of-vocabulary English terms into Bengali to improve the effectiveness of English-Bengali Cross-Language Information Retrieval. We use a statistical translation model as a basis for transliteration, and present evaluation results on the FIRE 2011 RISOT Bengali test collection. Incorporating transliteration is shown to substantially and statistically significantly improve Mean Average Precision for both the text and OCR conditions. Learning a distortion model for OCR errors and then using that model to improve recall is also shown to yield a further substantial and statistically significant improvement for the OCR condition.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"One of the authors thanks the Indo-US Science and Technology Forum for providing him with a support to conduct a part of this research at the University of Maryland. Thanks to Ann Irvine of John Hopkins University and Jiaul Paik of ISI, Kolkata for their kind help.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stevenson-gaizauskas-2000-using","url":"https:\/\/aclanthology.org\/A00-1040","title":"Using Corpus-derived Name Lists for Named Entity Recognition","abstract":"This paper describes experiments to establish the performance of a named entity recognition system which builds categorized lists of names from manually annotated training data. Names in text are then identified using only these lists. This approach does not perform as well as state-of-the-art named entity recognition systems. However, we then show that by using simple filtering techniques for improving the automatically acquired lists, substantial performance benefits can be achieved, with resulting F-measure scores of 87% on a standard test set. These results provide a baseline against which the contribution of more sophisticated supervised learning techniques for NE recognition should be measured.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"warren-1982-issues","url":"https:\/\/aclanthology.org\/P82-1012","title":"Issues in Natural Language Access to Databases From a Logic Programming Perspective","abstract":"Chat processes a NL question in three main stages: translation (English -> logic), planning (logic -> Prolog), and execution (Prolog -> answer),
corresponding roughly to: \"What does the question mean?\", \"How shall I answer it?\", \"What is the answer?\".\nThe meaning of a NL question, and the database of information about the application domain, are both represented as statements in an extension of a subset of first-order logic, which we call \"definite closed world\" (DCW) logic. This logic is a subset of first-order logic, in that it admits only \"definite\" statements; uncertain information (\"Either this or that\") is not allowed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barriere-2009-web","url":"https:\/\/aclanthology.org\/2009.mtsummit-btm.2","title":"The Web as a Source of Informative Background Knowledge","abstract":"In this paper, we present how a tool called TerminoWeb can be used to help translators find background information on the Web about a domain, or more specifically about terms found in a text to be translated. TerminoWeb contains different modules working together to achieve such a goal: (1) a Web search module specifically tuned for informative texts and glossaries where background knowledge is likely to be found, (2) a term extractor module to automatically discover important terms of a source text, (3) a query generator module to automatically launch multiple queries on the Web from a set of extracted terms. The result of these first three steps is a background knowledge corpus which can then be explored by (4) a corpus exploration module in search of definitional sentences and concordances. In this article, an in-depth example is used to provide a proof of concept of TerminoWeb's background information search and exploration capability.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"singla-etal-2017-automatic","url":"https:\/\/aclanthology.org\/W17-4506","title":"Automatic Community Creation for Abstractive Spoken Conversations Summarization","abstract":"Summarization of spoken conversations is a challenging task, since it requires deep understanding of dialogs. Abstractive summarization techniques rely on linking the summary sentences to sets of original conversation sentences, i.e. communities. Unfortunately, such linking information is rarely available or requires trained annotators. We propose and experiment with automatic community creation using cosine similarity on different levels of representation: raw text, WordNet SynSet IDs, and word embeddings. 
We show that the abstractive summarization systems with automatic communities significantly outperform previously published results on both English and Italian corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kruengkrai-etal-2006-conditional","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/137_pdf.pdf","title":"A Conditional Random Field Framework for Thai Morphological Analysis","abstract":"This paper presents a framework for Thai morphological analysis based on the theoretical background of conditional random fields. We formulate morphological analysis of an unsegmented language as the sequential supervised learning problem. Given a sequence of characters, all possibilities of word\/tag segmentation are generated, and then the optimal path is selected with some criterion. We examine two different techniques, including the Viterbi score and the confidence estimation. Preliminary results are given to show the feasibility of our proposed framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-yarowsky-2020-wiktionary","url":"https:\/\/aclanthology.org\/2020.coling-main.413","title":"Wiktionary Normalization of Translations and Morphological Information","abstract":"We extend the Yawipa Wiktionary Parser (Wu and Yarowsky, 2020) to extract and normalize translations from etymology glosses, and morphological form-of relations, resulting in 300K unique translations and over 4 million instances of 168 annotated morphological relations. We propose a method to identify typos in translation annotations. Using the extracted morphological data, we develop multilingual neural models for predicting three types of word formation-clipping, contraction, and eye dialect-and improve upon a standard attention baseline by using copy attention.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2020-hw","url":"https:\/\/aclanthology.org\/2020.iwslt-1.23","title":"The HW-TSC Video Speech Translation System at IWSLT 2020","abstract":"In this paper, we present details of our system in the IWSLT Video Speech Translation evaluation. The system works in a cascade form, which contains three modules: 1) A proprietary ASR system. 2) A disfluency correction system that aims to remove interregnums or other disfluent expressions with a fine-tuned BERT and a series of rule-based algorithms. 
3) An NMT system based on the Transformer and trained with a massive publicly available corpus is used for translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tambouratzis-pouli-2016-linguistically","url":"https:\/\/aclanthology.org\/L16-1091","title":"Linguistically Inspired Language Model Augmentation for MT","abstract":"The present article reports on efforts to improve the translation accuracy of a corpus-based Machine Translation (MT) system. In order to achieve that, an error analysis performed on past translation outputs has indicated the likelihood of improving the translation accuracy by augmenting the coverage of the Target-Language (TL) side language model. The method adopted for improving the language model is initially presented, based on the concatenation of consecutive phrases. The algorithmic steps are then described that form the process for augmenting the language model. The key idea is to only augment the language model to cover the most frequent cases of phrase sequences, as counted over a TL-side corpus, in order to maximize the cases covered by the new language model entries. Experiments presented in the article show that substantial improvements in translation accuracy are achieved via the proposed method, when integrating the grown language model to the corpus-based MT system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this article has been funded partly by a number of projects including the PRESEMT project (ICT-FP7-Call4\/248307) and the POLYTROPON project (KRIPIS-GSRT, MIS: 448306). The authors wish to acknowledge the assistance of Ms. M. Vassiliou of ILSP\/Athena R.C. on the setting up of experiments and of Dr. S. Sofianopoulos of ILSP\/Athena R.C., in integrating the new structure selection algorithm to the PRESEMT prototype and on providing the translation results for the MOSES-based SMT system.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"siddharthan-2003-resolving","url":"https:\/\/aclanthology.org\/W03-2602","title":"Resolving Pronouns Robustly: Plumbing the Depths of Shallowness","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"caparas-2017-stylistic","url":"https:\/\/aclanthology.org\/Y17-1030","title":"A Stylistic Analysis of a Philippine Essay, ``The Will of the River''","abstract":"The continuous study of stylistics has been regarded as significant in identifying the border between language and literature. Hence, the study presents a stylistic analysis of Alfredo Q. 
Gonzales's essay \"The Will of the River.\" The lexis-grammar complementary analysis on the personal narrative of the author focused on the vocabulary of the essay and the grammatical structure of the sentence primarily the use of sentence-initial adjuncts that leads to the unraveling of the essay's general theme of man and nature.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"filali-bilmes-2007-generalized","url":"https:\/\/aclanthology.org\/N07-2009","title":"Generalized Graphical Abstractions for Statistical Machine Translation","abstract":"We introduce a novel framework for the expression, rapid-prototyping, and evaluation of statistical machine-translation (MT) systems using graphical models. The framework extends dynamic Bayesian networks with multiple connected different-length streams, switching variable existence and dependence mechanisms, and constraint factors. We have implemented a new general-purpose MT training\/decoding system in this framework, and have tested this on a variety of existing MT models (including the 4 IBM models), and some novel ones as well, all using Europarl as a test corpus. We describe the semantics of our representation, and present preliminary evaluations, showing that it is possible to prototype novel MT ideas in a short amount of time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2018-pre","url":"https:\/\/aclanthology.org\/P18-1250","title":"Pre- and In-Parsing Models for Neural Empty Category Detection","abstract":"Motivated by the positive impact of empty categories on syntactic parsing, we study neural models for pre-and in-parsing detection of empty categories, which has not previously been investigated. We find several non-obvious facts: (a) BiLSTM can capture non-local contextual information which is essential for detecting empty categories, (b) even with a BiLSTM, syntactic information is still able to enhance the detection, and (c) automatic detection of empty categories improves parsing quality for overt words. Our neural ECD models outperform the prior state-of-the-art by significant margins.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Weiwei Sun is the corresponding author.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sigurbergsson-derczynski-2020-offensive","url":"https:\/\/aclanthology.org\/2020.lrec-1.430","title":"Offensive Language and Hate Speech Detection for Danish","abstract":"The presence of offensive language on social media platforms and the implications this poses is becoming a major concern in modern society. 
Given the enormous amount of content created every day, automatic methods are required to detect and deal with this type of content. Until now, most of the research has focused on solving the problem for the English language, while the problem is multilingual. We construct a Danish dataset DKHATE containing user-generated comments from various social media platforms, and to our knowledge, the first of its kind, annotated for various types and target of offensive language. We develop four automatic classification systems, each designed to work for both the English and the Danish language. In the detection of offensive language in English, the best performing system achieves a macro averaged F1-score of 0.74, and the best performing system for Danish achieves a macro averaged F1-score of 0.70. In the detection of whether or not an offensive post is targeted, the best performing system for English achieves a macro averaged F1-score of 0.62, while the best performing system for Danish achieves a macro averaged F1-score of 0.73. Finally, in the detection of the target type in a targeted offensive post, the best performing system for English achieves a macro averaged F1-score of 0.56, and the best performing system for Danish achieves a macro averaged F1-score of 0.63. Our work for both the English and the Danish language captures the type and targets of offensive language, and present automatic methods for detecting different kinds of offensive language such as hate speech and cyberbullying.","label_nlp4sg":1,"task":["Offensive Language and Hate Speech Detection"],"method":["classification systems","Danish dataset"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Digitalt Ansvar for helpful conversations in the formation of this research, and Pushshift.io for the availability of Reddit archive data.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"tziafas-etal-2021-fighting","url":"https:\/\/aclanthology.org\/2021.nlp4if-1.18","title":"Fighting the COVID-19 Infodemic with a Holistic BERT Ensemble","abstract":"This paper describes the TOKOFOU system, an ensemble model for misinformation detection tasks based on six different transformer-based pre-trained encoders, implemented in the context of the COVID-19 Infodemic Shared Task for English. We fine tune each model on each of the task's questions and aggregate their prediction scores using a majority voting approach. TOKOFOU obtains an overall F1 score of 89.7%, ranking first.","label_nlp4sg":1,"task":["misinformation detection"],"method":["BERT","transformer"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors would like to acknowledge the RUG university computer cluster, Peregrine, for providing the computational infrastructure which allowed the implementation of the current work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"rieser-etal-2011-adaptive","url":"https:\/\/aclanthology.org\/W11-2813","title":"Adaptive Information Presentation for Spoken Dialogue Systems: Evaluation with real users","abstract":"We present evaluation results with human subjects for a novel data-driven approach to Natural Language Generation in spoken dialogue systems. 
We evaluate a trained Information Presentation (IP) strategy in a deployed tourist-information spoken dialogue system. The IP problem is formulated as statistical decision making under uncertainty using Reinforcement Learning, where both content planning and attribute selection are jointly optimised based on data collected in a Wizard-of-Oz study. After earlier work testing and training this model in simulation, we now present results from an extensive online user study, involving 131 users and more than 800 test dialogues, which explores its contribution to overall 'global' task success. We find that the trained Information Presentation strategy significantly improves dialogue task completion, with up to a 9.7% increase (30% relative) compared to the deployed dialogue system which uses conventional, hand-coded presentation prompts. We also present subjective evaluation results and discuss the implications of these results for future work in dialogue management and NLG.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research leading to these results has received funding from the EC's 7th Framework Programme (FP7\/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project. org), and (FP7\/2011-2014) under grant agreement no. 270019 (SpaceBook project).","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ogura-etal-2003-building","url":"https:\/\/aclanthology.org\/W03-2108","title":"Building a New Internet Chat System for Sharing Timing Information","abstract":"Chat system has gained popularity as a tool for real-time conversation. However, standard chat systems have problems due to lack of timing information. To tackle this problem, we have built a system which has the following functions: 1) function of making typing state visible; 2) floor holding function at the start of typing. The evaluation results show that the system with each new function significantly increases the number of turns, which indicates the effectiveness of the new functions for smooth communication. The survey results showed that the system with the function of making typing state visible significantly different from that without them concerning 1) easiness of adjusting the timing of utterances and smoothness of conversations, and 2) easiness of using the system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"power-evans-2004-wysiwym","url":"https:\/\/aclanthology.org\/P04-3030","title":"Wysiwym with wider coverage","abstract":"We describe an extension of the Wysiwym technology for knowledge editing through natural language feedback. Previous applications have addressed relatively simple tasks requiring a very limited range of nominal and clause patterns. We show that by adding a further editing operation called reconfiguration, the technology can achieve a far wider coverage more in line with other general-purpose generators. 
The extension will be included in a Java-based library package for producing Wysiwym applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lyu-etal-2004-toward","url":"https:\/\/aclanthology.org\/O04-3001","title":"Toward Constructing A Multilingual Speech Corpus for Taiwanese (Min-nan), Hakka, and Mandarin","abstract":"The Formosa speech database (ForSDat) is a multilingual speech corpus collected at Chang Gung University and sponsored by the National Science Council of Taiwan. It is expected that a multilingual speech corpus will be collected, covering the three most frequently used languages in Taiwan: Taiwanese (Min-nan), Hakka, and Mandarin. This 3-year project has the goal of collecting a phonetically abundant speech corpus of more than 1,800 speakers and hundreds of hours of speech. Recently, the first version of this corpus containing speech of 600 speakers of Taiwanese and Mandarin was finished and is ready to be released. It contains about 49 hours of speech and 247,000 utterances.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zampieri-vela-2014-quantifying","url":"https:\/\/aclanthology.org\/W14-0314","title":"Quantifying the Influence of MT Output in the Translators' Performance: A Case Study in Technical Translation","abstract":"This paper presents experiments on the use of machine translation output for technical translation. MT output was used to produced translation memories that were used with a commercial CAT tool. Our experiments investigate the impact of the use of different translation memories containing MT output in translations' quality and speed compared to the same task without the use of translation memory. We evaluated the performance of 15 novice translators translating technical English texts into German. Results suggest that translators are on average over 28% faster when using TM.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the students who participated in these experiments for their time. We would also like to thank the detailed feedback provided by the anonymous reviewers who helped us to increase the quality of this paper.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hensman-dunnion-2005-representing","url":"https:\/\/aclanthology.org\/I05-2036","title":"Representing Semantics of Texts - a Non-Statistical Approach","abstract":"This paper describes a non-statistical approach for semantic annotation of documents by analysing their syntax and by using semantic\/syntactic behaviour patterns described in VerbNet. We use a two-stage approach, firstly identifying the semantic roles in a sentence, and then using these roles to represent some of the relations between the concepts in the sentence and a list of noun behaviour patterns to resolve some of the unknown (generic) relations between concepts. 
All outlined algorithms were tested on two corpora which differ in size, type, style and genre, and the performance does not vary significantly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hughes-etal-2005-distributed","url":"https:\/\/aclanthology.org\/U05-1029","title":"A Distributed Architecture for Interactive Parse Annotation","abstract":"In this paper we describe a modular system architecture for distributed parse annotation using interactive correction. This involves interactively adding constraints to an existing parse until the returned parse is correct. Using a mixed initiative approach, human annotators interact live with distributed ccg parser servers through an annotation gui. The examples presented to each annotator are selected by an active learning framework to maximise the value of the annotated corpus for machine learners. We report on an initial implementation based on a distributed workflow architecture.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful feedback, and to David Vadas and Toby Hawker for testing the ccg gui. This work has been supported by the Australian Research Council under Discovery Project DP0453131.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"smrz-kouril-2014-semantic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/1058_Paper.pdf","title":"Semantic Search in Documents Enriched by LOD-based Annotations","abstract":"This paper deals with information retrieval on semantically enriched web-scale document collections. It particularly focuses on web-crawled content in which mentions of entities appearing in Freebase, DBpedia and other Linked Open Data resources have been identified. Special attention is paid to indexing structures and advanced query mechanisms that have been employed in a new semantic retrieval system. Scalability features are discussed together with performance statistics and results of experimental evaluation of presented approaches. Examples given to demonstrate key features of the developed solution correspond to the cultural heritage domain in which the results of our work have been primarily applied.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qin-etal-2020-using","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.101","title":"Using the Past Knowledge to Improve Sentiment Classification","abstract":"This paper studies sentiment classification in the lifelong learning setting that incrementally learns a sequence of sentiment classification tasks. It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task. A key innovation of this proposed model is a novel parameter-gate (p-gate) mechanism that regulates the flow or transfer of the previously learned knowledge to the new task.
Specifically, it can selectively use the network parameters (which represent the retained knowledge gained from the previous tasks) to assist the learning of the new task t. Knowledge distillation is also employed in the process to preserve the past knowledge by approximating the network output at the state when task t \u2212 1 was learned. Experimental results show that L2PG outperforms strong baselines, including even multiple task learning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marujo-etal-2011-bp2ep","url":"https:\/\/aclanthology.org\/2011.eamt-1.19","title":"BP2EP - Adaptation of Brazilian Portuguese texts to European Portuguese","abstract":"This paper describes a method to efficiently leverage Brazilian Portuguese resources as European Portuguese resources. Brazilian Portuguese and European Portuguese are two Portuguese varieties very close and usually mutually intelligible, but with several known differences, which are studied in this work. Based on this study, we derived a rule based system to translate Brazilian Portuguese resources. Some resources were enriched with multiword units retrieved semi-automatically from phrase tables created using statistical machine translation tools. Our experiments suggest that applying our translation step improves the translation quality between English and Portuguese, relatively to the same process using the same resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Nuno Mamede, Am\u00e1lia Mendes, and the anonymous reviewers for many helpful comments. Support for this research by FCT through the Carnegie Mellon Portugal Program under FCT grant SFRH\/BD\/33769\/ 2009, FCT grant SFRH\/BD\/51157\/2010, FCT grant SFRH\/BD\/62151\/2009, and also through projects CMU-PT\/HuMach\/0039\/2008, CMU-","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2017-leveraging","url":"https:\/\/aclanthology.org\/D17-1282","title":"Leveraging Linguistic Structures for Named Entity Recognition with Bidirectional Recursive Neural Networks","abstract":"In this paper, we utilize the linguistic structures of texts to improve named entity recognition by BRNN-CNN, a special bidirectional recursive network attached with a convolutional network. Motivated by the observation that named entities are highly related to linguistic constituents, we propose a constituent-based BRNN-CNN for named entity recognition. In contrast to classical sequential labeling methods, the system first identifies which text chunks are possible named entities by whether they are linguistic constituents. Then it classifies these chunks with a constituency tree structure by recursively propagating syntactic and semantic information to each constituent node. 
This method surpasses current state-of-the-art on OntoNotes 5.0 with automatically generated parses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the 2016 Summer Internship Program of IIS, Academia Sinica.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2017-statistical","url":"https:\/\/aclanthology.org\/I17-2032","title":"A Statistical Framework for Product Description Generation","abstract":"We present in this paper a statistical framework that generates accurate and fluent product description from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say for product description generation. To evaluate accuracy and fluency for the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and to measure how to say (by attribute-specified generation) separately. Experimental results show that our framework is effective.","label_nlp4sg":1,"task":["Product Description Generation"],"method":["Statistical Framework"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2021-soft","url":"https:\/\/aclanthology.org\/2021.metanlp-1.2","title":"Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer","abstract":"Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks. Finding the most effective strategy to fine-tune these models on high-resource languages so that it transfers well to the zero-shot languages is a nontrivial task. In this paper, we propose a novel meta-optimizer to soft-select which layers of the pre-trained model to freeze during fine-tuning. We train the meta-optimizer by simulating the zero-shot transfer scenario. Results on cross-lingual natural language inference show that our approach improves over the simple fine-tuning baseline and X-MAML (Nooralahzadeh et al., 2020).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"uramoto-1996-positioning","url":"https:\/\/aclanthology.org\/C96-2161","title":"Positioning Unknown Words in a Thesaurus by Using Information Extracted from a Corpus","abstract":"This paper describes a method for positioning unknown words in an existing thesaurus by using word-to-word relationships with relation (case) markers extracted from a large corpus. A suitable area of the thesaurus for an unknown word is estimated by integrating the human intuition buried in the thesaurus and statistical data extracted from the corpus. To overcome the problem of data sparseness, distinguishing features of each node, called \"viewpoints\", are extracted automatically and used to calculate the similarity between the unknown word and a word in the thesaurus.
The results of an experiment confirm the contribution of viewpoints to the positioning task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tsarfaty-simaan-2007-three","url":"https:\/\/aclanthology.org\/W07-2219","title":"Three-Dimensional Parametrization for Parsing Morphologically Rich Languages","abstract":"Current parameters of accurate unlexicalized parsers based on Probabilistic Context-Free Grammars (PCFGs) form a two-dimensional grid in which rewrite events are conditioned on both horizontal (head-outward) and vertical (parental) histories. In Semitic languages, where arguments may move around rather freely and phrase-structures are often shallow, there are additional morphological factors that govern the generation process. Here we propose that agreement features percolated up the parse-tree form a third dimension of parametrization that is orthogonal to the previous two. This dimension differs from mere \"state-splits\" as it applies to a whole set of categories rather than to individual ones and encodes linguistically motivated co-occurrences between them. This paper presents extensive experiments with extensions of unlexicalized PCFGs for parsing Modern Hebrew in which tuning the parameters in three dimensions gradually leads to improved performance. Our best result introduces a new, stronger, lower bound on the performance of treebank grammars for parsing Modern Hebrew, and is on a par with current results for parsing Modern Standard Arabic obtained by a fully lexicalized parser trained on a much larger treebank.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the Knowledge Center for Processing Hebrew and Dalia Bojan for providing us with the newest version of the MH treebank. We are particularly grateful to the development team of version 2.0, Adi Mile'a and Yuval Krymolowsky, supervised by Yoad Winter for continued collaboration and technical support. We further thank Felix Hageloh for allowing us to use the software resulting from his M.Sc. thesis work. We also like to thank Remko Scha, Jelle Zuidema, Yoav Seginer and three anonymous reviewers for helpful comments on the text, and Noa Tsarfaty for technical help in the graphical display. The work of the first author is funded by the Netherlands Organization for Scientific Research (NWO), grant number 017.001.271, for which we are grateful.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kejriwal-koehn-2020-exploratory","url":"https:\/\/aclanthology.org\/2020.wmt-1.108","title":"An exploratory approach to the Parallel Corpus Filtering shared task WMT20","abstract":"This document describes an exploratory look into the Parallel Corpus Filtering Shared Task in WMT20. We submitted scores for both Pashto-English and Khmer-English systems combining multiple techniques like monolingual language model scores, length based filters, language ID filters with confidence and norm of embeddings.
(Footnotes: 1. https:\/\/github.com\/facebookresearch\/LASER; 2. cos(x,y) refers to the cosine similarity between the vectors x and y.)","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"soroa-etal-2010-kyoto","url":"https:\/\/aclanthology.org\/S10-1093","title":"Kyoto: An Integrated System for Specific Domain WSD","abstract":"This document describes the preliminary release of the integrated Kyoto system for specific domain WSD. The system uses concept miners (Tybots) to extract domain-related terms and produces a domain-related thesaurus, followed by knowledge-based WSD based on wordnet graphs (UKB). The resulting system can be applied to any language with a lexical knowledge base, and is based on publicly available software and resources. Our participation in Semeval task #17 focused on producing running systems for all languages in the task, and we attained good results in all except Chinese. Due to the pressure of the time-constraints in the competition, the system is still under development, and we expect results to improve in the near future.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work task is partially funded by the European Commission (KYOTO ICT-2007-211423), the Spanish Research Department (KNOW-2 TIN2009-14715-C04-01) and the Basque Government (BERBATEK IE09-262).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marcu-2012-new","url":"https:\/\/aclanthology.org\/2012.amta-government.9","title":"A New Method for Automatic Translation Scoring-HyTER","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"almaghout-etal-2012-extending","url":"https:\/\/aclanthology.org\/2012.eamt-1.44","title":"Extending CCG-based Syntactic Constraints in Hierarchical Phrase-Based SMT","abstract":"In this paper, we describe two approaches to extending syntactic constraints in the Hierarchical Phrase-Based (HPB) Statistical Machine Translation (SMT) model using Combinatory Categorial Grammar (CCG). These extensions target the limitations of previous syntax-augmented HPB SMT systems which limit the coverage of the syntactic constraints applied. We present experiments on Arabic-English and Chinese-English translation. Our experiments show that using extended CCG labels helps to increase nonterminal label coverage and achieve significant improvements over the baseline for Arabic-English translation. In addition, combining extended CCG labels with CCG-augmented glue grammar helps to improve the performance of the Chinese-English translation over the baseline systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by Science Foundation Ireland (Grant No.
07\/CE\/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jindal-etal-2020-killed","url":"https:\/\/aclanthology.org\/2020.coling-main.10","title":"Is Killed More Significant than Fled? A Contextual Model for Salient Event Detection","abstract":"Identifying the key events in a document is critical to holistically understanding its important information. Although measuring the salience of events is highly contextual, most previous work has used a limited representation of events that omits essential information. In this work, we propose a highly contextual model of event salience that uses a rich representation of events, incorporates document-level information and allows for interactions between latent event encodings. Our experimental results on an event salience dataset (Liu et al., 2018) demonstrate that our model improves over previous work by an absolute 2-4% on standard metrics, establishing a new state-of-the-art performance for the task. We also propose a new evaluation metric which addresses flaws in previous evaluation methodologies. Finally, we discuss the importance of salient event detection for the downstream task of summarization. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful feedback and suggestions. ","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eisele-2006-parallel","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/643_pdf.pdf","title":"Parallel Corpora and Phrase-Based Statistical Machine Translation for New Language Pairs via Multiple Intermediaries","abstract":"We present a large parallel corpus of texts published by the United Nations Organization, which we exploit for the creation of phrasebased statistical machine translation (SMT) systems for new language pairs. We present a setup where phrase tables for these language pairs are used for translation between languages for which parallel corpora of sufficient size are so far not available. We give some preliminary results for this novel application of SMT and discuss further refinements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was made possible by the DFG in the framework of the Ptolemaios project at Saarland University, headed by Jonas Kuhn. I also want to thank Sascha Osherenko and Greg Gulrajani for interesting discussions and for practical help with crawling the UNO web site.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-2017-incremental","url":"https:\/\/aclanthology.org\/U17-1010","title":"Incremental Knowledge Acquisition Approach for Information Extraction on both Semi-Structured and Unstructured Text from the Open Domain Web","abstract":"Extracting information from semistructured text has been studied only for limited domain sources due to its heterogeneous formats. 
This paper proposes a Ripple-Down Rules (RDR) based approach to extract relations from both semistructured and unstructured text in open domain Web pages. We find that RDR's 'case-by-case' incremental knowledge acquisition approach provides practical flexibility for (1) handling heterogeneous formats of semi-structured text; (2) conducting knowledge engineering on any Web pages with minimum start-up cost and (3) allowing open-ended settings on relation schema. The efficacy of the approach has been demonstrated by extracting contact information from randomly collected open domain Web pages. The rGALA system achieved 0.87 F1 score on a testing dataset of 100 Web pages, after only 7 hours of knowledge engineering on a training set of 100 Web pages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nariyama-2001-multiple","url":"https:\/\/aclanthology.org\/2001.mtsummit-papers.44","title":"Multiple argument ellipses resolution in Japanese","abstract":"Some Japanese clauses contain more than one argument ellipsis, and yet this fact has not adequately been accounted for in the study of ellipsis resolution in the current literature, which predominantly focus resolving one ellipsis per sentence. This paper proposes a method using a \"salient referent list\", which identifies the referents of such multiple argument ellipses as well as offers ellipsis resolution as a whole by considering contextual information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moreira-etal-2013-reaction","url":"https:\/\/aclanthology.org\/S13-2081","title":"REACTION: A naive machine learning approach for sentiment classification","abstract":"We evaluate a naive machine learning approach to sentiment classification focused on Twitter in the context of the sentiment analysis task of SemEval-2013. We employ a classifier based on the Random Forests algorithm to determine whether a tweet expresses overall positive, negative or neutral sentiment. The classifier was trained only with the provided dataset and uses as main features word vectors and lexicon word counts. Our average F-score for all three classes on the Twitter evaluation dataset was 51.55%. The average F-score of both positive and negative classes was 45.01%. For the optional SMS evaluation dataset our overall average F-score was 58.82%. The average between positive and negative Fscores was 50.11%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by FCT (Portuguese research funding agency) under project grants UTA-Est\/MAI\/0006\/2009 (RE-ACTION) and PTDC\/CPJ-CPO\/116888\/2010 (POPSTAR). FCT also supported scholarship SFRH\/BD\/89020\/2012. 
This research was also funded by the PIDDAC Program funds (INESC-ID multi annual funding) and the LASIGE multi annual support.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"luong-manning-2016-achieving","url":"https:\/\/aclanthology.org\/P16-1100","title":"Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models","abstract":"Nearly all previous work on neural machine translation (NMT) has used quite restricted vocabularies, perhaps with a subsequent method to patch in unknown words. This paper presents a novel wordcharacter solution to achieving open vocabulary NMT. We build hybrid systems that translate mostly at the word level and consult the character components for rare words. Our character-level recurrent neural networks compute source word representations and recover unknown target words when needed. The twofold advantage of such a hybrid approach is that it is much faster and easier to train than character-based ones; at the same time, it never produces unknown words as in the case of word-based models. On the WMT'15 English to Czech translation task, this hybrid approach offers an addition boost of +2.1\u221211.4 BLEU points over models that already handle unknown words. Our best system achieves a new state-of-the-art result with 20.7 BLEU score. We demonstrate that our character models can successfully learn to not only generate well-formed words for Czech, a highly-inflected language with a very complex vocabulary, but also build correct representations for English source words.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by NSF Award IIS-1514268 and by a gift from Bloomberg L.P. We thank Dan Jurafsky, Andrew Ng, and Quoc Le for earlier feedback on the work, as well as Sam Bowman, Ziang Xie, and Jiwei Li for their valuable comments on the paper draft. Lastly, we thank NVIDIA Corporation for the donation of Tesla K40 GPUs as well as Andrew Ng and his group for letting us use their computing resources.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coughlin-2003-correlating","url":"https:\/\/aclanthology.org\/2003.mtsummit-papers.9","title":"Correlating automated and human assessments of machine translation quality","abstract":"We describe a large-scale investigation of the correlation between human judgments of machine translation quality and the automated metrics that are increasingly used to drive progress in the field. We compare the results of 124 human evaluations of machine translated sentences to the scores generated by two automatic evaluation metrics (BLEU and NIST). When datasets are held constant or file size is sufficiently large, BLEU and NIST scores closely parallel human judgments. Surprisingly, this was true even though these scores were calculated using just one human reference. We suggest that when human evaluators are forced to make decisions without sufficient context or domain expertise, they fall back on strategies that are not unlike determining n-gram precision. 
(Breakdown of the 124 evaluations by language pair: 14 English=>German (EG); 4 German=>English (GE); 36 English=>Spanish (ES); 38 Spanish=>English (SE); 20 French=>English (FE); 8 Hansards French=>English (QE); 4 French=>Spanish (FS).)","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I wish to thank my colleagues at Microsoft Research NLP: Mike Carlson for ideas, advice and inspiration, Bill Dolan for substantial edits and counsel, Chris Quirk for patient tutorials in statistics, and the entire team for contributing to the MT effort which spurred the need for evaluation. I would like to also acknowledge the fine work from the Butler Hill Group: Mo Corston-Oliver for extensive work normalizing the evaluation results and the many evaluators who provided the quality scores that are at the foundation of this paper.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"akama-etal-2017-generating","url":"https:\/\/aclanthology.org\/I17-2069","title":"Generating Stylistically Consistent Dialog Responses with Transfer Learning","abstract":"We propose a novel, data-driven, and stylistically consistent dialog response-generation system. To create a user-friendly system, it is crucial to make generated responses not only appropriate but also stylistically consistent. For learning both the properties effectively, our proposed framework has two training stages inspired by transfer learning. First, we train the model to generate appropriate responses, and then we ensure that the responses have a specific style. Experimental results demonstrate that the proposed method produces stylistically consistent responses while maintaining the appropriateness of the responses learned in a general domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JSPS KAKENHI Grant Number 15H01702.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lancioni-etal-2020-keyphrase","url":"https:\/\/aclanthology.org\/2020.sustainlp-1.12","title":"Keyphrase Generation with GANs in Low-Resources Scenarios","abstract":"Keyphrase Generation is the task of predicting Keyphrases (KPs), short phrases that summarize the semantic meaning of a given document. Several past studies provided diverse approaches to generate Keyphrases for an input document. However, all of these approaches still need to be trained on very large datasets. In this paper, we introduce BeGan-KP, a new conditional GAN model to address the problem of Keyphrase Generation in a low-resource scenario. Our main contribution relies in the Discriminator's architecture: a new BERT-based module which is able to distinguish between the generated and human-curated KPs reliably. Its characteristics allow us to use it in a low-resource scenario, where only a small amount of training data are available, obtaining an efficient Generator.
The resulting architecture achieves, on five public datasets, competitive results with respect to the state-of-the-art approaches, using less than 1% of the training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"decamp-2008-working","url":"https:\/\/aclanthology.org\/2008.amta-govandcom.6","title":"Working with the US Government: Information Resources","abstract":"This document provides information on how companies and researchers in machine translation can work with the U.S. Government. Specifically, it addresses information on (1) groups in the U.S. Government working with translation and potentially having a need for machine translation; (2) means for companies and researchers to provide information to the United States Government about their work; and (3) U.S. Government organizations providing grants of possible interest to this community.","label_nlp4sg":1,"task":["Working with the US Government"],"method":["information"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"Glenn Nordin, Senior Advisor for Language and Culture for the Office of the U.S. Secretary of Defense, and Nick Bemish, Senior Technology Language Authority for the Defense Intelligence Agency, provided input and review for this paper.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poerner-schutze-2019-multi","url":"https:\/\/aclanthology.org\/D19-1173","title":"Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection","abstract":"We address the problem of Duplicate Question Detection (DQD) in low-resource domainspecific Community Question Answering forums. Our multi-view framework MV-DASE combines an ensemble of sentence encoders via Generalized Canonical Correlation Analysis, using unlabeled data only. In our experiments, the ensemble includes generic and domain-specific averaged word embeddings, domain-finetuned BERT and the Universal Sentence Encoder. We evaluate MV-DASE on the CQADupStack corpus and on additional low-resource Stack Exchange forums. Combining the strengths of different encoders, we significantly outperform BM25, all singleview systems as well as a recent supervised domain-adversarial DQD method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Bernt Andrassy and Pankaj Gupta at Siemens MIC-DE, as well as our anonymous reviewers, for their helpful comments. This research was funded by Siemens AG.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barba-etal-2022-extend","url":"https:\/\/aclanthology.org\/2022.acl-long.177","title":"ExtEnD: Extractive Entity Disambiguation","abstract":"Local models for Entity Disambiguation (ED) have today become extremely powerful, in most part thanks to the advent of large pretrained language models. However, despite their significant performance achievements, most of these approaches frame ED through classification formulations that have intrinsic limitations, both computationally and from a modeling perspective. 
In contrast with this trend, here we propose EXTEND, a novel local formulation for ED where we frame this task as a text extraction problem, and present two Transformer-based architectures that implement it. Based on experiments in and out of domain, and training over two different data regimes, we find our approach surpasses all its competitors in terms of both data efficiency and raw performance. EXTEND outperforms its alternatives by as few as 6 F1 points on the more constrained of the two data regimes and, when moving to the other higher-resourced regime, sets a new state of the art on 4 out of 6 benchmarks under consideration, with average improvements of 0.7 F1 points overall and 1.1 F1 points out of domain. In addition, to gain better insights from our results, we also perform a fine-grained evaluation of our performances on different classes of label frequency, along with an ablation study of our architectural choices and an error analysis. We release our code and models for research purposes at https:\/\/github.com\/SapienzaNLP\/extend.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487. This work was partially supported by the MIUR under the grant \"Dipartimenti di eccellenza 2018-2022\" of the Department of Computer Science of Sapienza University.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"doran-etal-1994-xtag","url":"https:\/\/aclanthology.org\/C94-2149","title":"XTAG System - A Wide Coverage Grammar for English","abstract":"This paper presents the XTAG system, a grammar development tool based on the Tree Adjoining Grammar (TAG) formalism that includes a wide-coverage syntactic grammar for English. The various components of the system are discussed and preliminary evaluation results from the parsing of various corpora are given. Results from the comparison of XTAG against the IBM statistical parser and the Alvey Natural Language Tool parser are also given.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"novak-siklosi-2015-automatic","url":"https:\/\/aclanthology.org\/D15-1275","title":"Automatic Diacritics Restoration for Hungarian","abstract":"In this paper, we describe a method based on statistical machine translation (SMT) that is able to restore accents in Hungarian texts with high accuracy. Due to the agglutination in Hungarian, there are always plenty of word forms unknown to a system trained on a fixed vocabulary. In order to be able to handle such words, we integrated a morphological analyzer into the system that can suggest accented word candidates for unknown words.
We evaluated the system in different setups, achieving an accuracy above 99% at the highest.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"atanasova-etal-2020-diagnostic","url":"https:\/\/aclanthology.org\/2020.emnlp-main.263","title":"A Diagnostic Study of Explainability Techniques for Text Classification","abstract":"Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity. Efforts to make the rationales behind the models' predictions transparent have inspired an abundance of new explainability techniques. Provided with an already trained model, they compute saliency scores for the words of an input instance. However, there exists no definitive guide on (i) how to choose such a technique given a particular application task and model architecture, and (ii) the benefits and drawbacks of using each such technique. In this paper, we develop a comprehensive list of diagnostic properties for evaluating existing explainability techniques. We then employ the proposed list to compare a set of diverse explainability techniques on downstream text classification tasks and neural network architectures. We also compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones. Overall, we find that the gradient-based explanations perform best across tasks and model architectures, and we present further insights into the properties of the reviewed explainability techniques.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 801199.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meisheri-dey-2018-tcs","url":"https:\/\/aclanthology.org\/S18-1043","title":"TCS Research at SemEval-2018 Task 1: Learning Robust Representations using Multi-Attention Architecture","abstract":"This paper presents system description of our submission to the SemEval-2018 task-1: Affect in tweets for the English language. We combine three different features generated using deep learning models and traditional methods in support vector machines to create a unified ensemble system. A robust representation of a tweet is learned using a multi-attention based architecture which uses a mixture of different pre-trained embeddings. In addition, analysis of different features is also presented. 
Our system ranked 2nd, 5th, and 7th in different subtasks among 75 teams.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"swen-2001-buffered","url":"https:\/\/aclanthology.org\/W01-1831","title":"Buffered Shift-Reduce Parsing","abstract":"A parsing method called buffered shift-reduce parsing is presented, which adds an intermediate buffer (queue) to the usual LR parser. The buffer's usage is analogous to that of the wait-and-see parsing, but it has unlimited buffer length, and may serve as a separate reduction (pruning) stack. The general structure of its parser and some features of its grammars and parsing tables are discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rohrer-forst-2006-improving","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/99_pdf.pdf","title":"Improving coverage and parsing quality of a large-scale LFG for German","abstract":"We describe experiments in parsing the German TIGER Treebank. In parsing the complete treebank, 86.44% of the sentences receive full parses; 13.56% receive fragment parses. We discuss the methods used to enhance coverage and parsing quality and we present an evaluation on a gold standard, to our knowledge the first one for a deep grammar of German. Considering the selection performed by our current version of a stochastic disambiguation component, we achieve an f-score of 84.2%, the upper and lower bounds being 87.4% and 82.3% respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vulic-etal-2020-good","url":"https:\/\/aclanthology.org\/2020.emnlp-main.257","title":"Are All Good Word Vector Spaces Isomorphic?","abstract":"Existing algorithms for aligning cross-lingual word vector spaces assume that vector spaces are approximately isomorphic. As a result, they perform poorly or fail completely on non-isomorphic spaces. Such non-isomorphism has been hypothesised to result from typological differences between languages. In this work, we ask whether non-isomorphism is also crucially a sign of degenerate word vector spaces. We present a series of experiments across diverse languages which show that variance in performance across language pairs is not only due to typological differences, but can mostly be attributed to the size of the monolingual resources available, and to the properties and duration of monolingual training (e.g. \"under-training\").","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work of IV is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). AS is supported by a Google Focused Research Award.
We thank Chris Dyer and Phil Blunsom for feedback on a draft.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mihalcea-moldovan-1998-word","url":"https:\/\/aclanthology.org\/W98-0703","title":"Word Sense Disambiguation based on Semantic Density","abstract":"This paper presents a Word Sense Disambiguation method based on the idea of semantic density between words. The disambiguation is done in the context of WordNet. The Internet is used as a raw corpus to provide statistical information for word associations. A metric is introduced and used to measure the semantic density and to rank all possible combinations of the senses of two words. This method provides a precision of 58% in indicating the correct sense for both words at the same time. The precision increases as we consider more choices: 70% for top two ranked and 73% for top three ranked.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kardas-etal-2020-axcell","url":"https:\/\/aclanthology.org\/2020.emnlp-main.692","title":"AxCell: Automatic Extraction of Results from Machine Learning Papers","abstract":"Tracking progress in machine learning has become increasingly difficult with the recent explosion in the number of papers. In this paper, we present AXCELL, an automatic machine learning pipeline for extracting results from papers. AXCELL uses several novel components, including a table segmentation subtask, to learn relevant structural knowledge that aids extraction. When compared with existing methods, our approach significantly improves the state of the art for results extraction. We also release a structured, annotated dataset for training models for results extraction, and a dataset for evaluating the performance of models on this task. Lastly, we show the viability of our approach enables it to be used for semi-automated results extraction in production, suggesting our improvements make this task practically viable for the first time. Code is available on GitHub.","label_nlp4sg":1,"task":["Automatic Extraction of Results"],"method":["machine learning pipeline"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Waleed Ammar, Sebastian Kohlmeier, Iz Beltagy, and Adam Liska for useful discussion and feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lemon-gruenstein-2002-language","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/102.pdf","title":"Language Resources for Multi-Modal Dialogue Systems.","abstract":"This paper reviews a resource base of software agents for hub-based architectures, which can be used generally for advanced dialogue systems research and deployment. The problem of domain-specificity of dialogue managers is discussed, and we describe an approach to it developed at CSLI, involving a domain-general dialogue manager with application specific \"Activity Models\".
We also describe relevant grammar development tools.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"With thanks to John Dowding (NASA Rialist), Beth-Ann Hockey (NASA Rialist), Stina Ericsson (Gothenburg), Johan Bos (HCRC), Staffan Larsson (Gothenburg), Stanley Peters (CSLI).","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dennis-henderson-etal-2020-life","url":"https:\/\/aclanthology.org\/2020.latechclfl-1.11","title":"Life still goes on: Analysing Australian WW1 Diaries through Distant Reading","abstract":"An increasing amount of historic data is now available in digital (text) formats. This gives quantitative researchers an opportunity to use distant reading techniques, as opposed to traditional close reading, in order to analyse larger quantities of historic data. Distant reading allows researchers to view overall patterns within the data and reduce researcher bias. One such data set that has recently been transcribed is a collection of over 500 Australian World War I (WW1) diaries held by the State Library of New South Wales. Here we apply distant reading techniques to this corpus to understand what soldiers wrote about and how they felt over the course of the war. Extracting dates accurately is important as it allows us to perform our analysis over time; however, it is very challenging due to the variety of date formats and abbreviations diarists use. But with that data, topic modelling and sentiment analysis can then be applied to show trends, for instance, that despite the horrors of war, Australians in WW1 primarily wrote about their everyday routines and experiences. Our results detail some of the challenges likely to be encountered by quantitative researchers intending to analyse historical texts, and provide some approaches to these issues.","label_nlp4sg":1,"task":["Analysing Australian WW1 Diaries"],"method":["distant reading"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We acknowledge the State Library of New South Wales for providing the data which made this research possible.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"lautenbacher-etal-2015-towards","url":"https:\/\/aclanthology.org\/W15-2803","title":"Towards Reliable Automatic Multimodal Content Analysis","abstract":"This poster presents a pilot where audio description is used to enhance automatic content analysis, for a project aiming at creating a tool for easy access to large AV archives.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pezik-2018-increasing","url":"https:\/\/aclanthology.org\/L18-1678","title":"Increasing the Accessibility of Time-Aligned Speech Corpora with Spokes Mix","abstract":"Spokes Mix is an online service providing access to a number of spoken corpora of Polish, including three newly released time-aligned collections of manually transcribed spoken-conversational data. The purpose of this service is twofold.
Firstly, it functions as a programmatic interface to a number of unique collections of conversational Polish and potentially also spoken corpora of other languages, exposing their full content with complete metadata and annotations. Equally important, however, is its second function of increasing the general accessibility of these resources for research on spoken and conversational language by providing a centralized, easy-to-use corpus query engine with a responsive web-based user interface.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Research and development described in this paper was financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"collins-traum-2016-towards","url":"https:\/\/aclanthology.org\/L16-1018","title":"Towards a Multi-dimensional Taxonomy of Stories in Dialogue","abstract":"In this paper, we present a taxonomy of stories told in dialogue. We based our scheme on prior work analyzing narrative structure and method of telling, relation to storyteller identity, as well as some categories particular to dialogue, such as how the story gets introduced. Our taxonomy currently has 5 major dimensions, with most having sub-dimensions; each dimension has an associated set of dimension-specific labels. We adapted an annotation tool for this taxonomy and have annotated portions of two different dialogue corpora, Switchboard and the Distress Analysis Interview Corpus. We present examples of some of the tags and concepts with stories from Switchboard, and some initial statistics of frequencies of the tags.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"raghu-etal-2015-statistical","url":"https:\/\/aclanthology.org\/W15-4645","title":"A statistical approach for Non-Sentential Utterance Resolution for Interactive QA System","abstract":"Non-Sentential Utterances (NSUs) are short utterances that do not have the form of a full sentence but nevertheless convey a complete sentential meaning in the context of a conversation. NSUs are frequently used to ask follow-up questions during interactions with question answering (QA) systems, resulting in incorrect answers being presented to their users.
Most of the current methods for resolving such NSUs have adopted a rule- or grammar-based approach and have limited applicability.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Martin Schmid, IBM Watson Prague and Adam J Sporka, Pavel Slavik, Czech Technical University Prague for providing us with the corpus of dialog ellipsis (dataset for NSU resolution) without which training and evaluation of our system would not have been possible.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2021-data","url":"https:\/\/aclanthology.org\/2021.emnlp-main.434","title":"Data Augmentation for Cross-Domain Named Entity Recognition","abstract":"Current work in named entity recognition (NER) shows that data augmentation techniques can produce more robust models. However, most existing techniques focus on augmenting in-domain data in low-resource scenarios where annotated data is quite limited. In contrast, we study cross-domain data augmentation for the NER task. We investigate the possibility of leveraging data from high-resource domains by projecting it into the low-resource domains. Specifically, we propose a novel neural architecture to transform the data representation from a high-resource to a low-resource domain by learning the patterns (e.g. style, noise, abbreviations, etc.) in the text that differentiate them and a shared feature space where both domains are aligned. We experiment with diverse datasets and show that transforming the data to the low-resource domain representation achieves significant improvements over only using data from high-resource domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the National Science Foundation (NSF) under grant #1910192. We would like to thank the members from the RiTUAL lab at the University of Houston for their invaluable feedback. We also thank the anonymous EMNLP reviewers for their valuable suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2021-refer","url":"https:\/\/aclanthology.org\/2021.findings-acl.450","title":"What Did You Refer to? Evaluating Co-References in Dialogue","abstract":"Existing neural end-to-end dialogue models have limitations on exactly interpreting the linguistic structures, such as ellipsis, anaphora and co-reference, in the dialogue history context. Therefore, it is hard to determine whether the dialogue models truly understand a dialogue or not, only depending on the coherence evaluation of their generated responses. To address these issues, in this paper, we proposed to directly measure the capability of dialogue models on understanding the entity-oriented structures via question answering and construct a new benchmark dataset, DEQA, including large-scale English and Chinese human-human dialogues. Experiments carried out on representative dialogue models show that these models all face challenges on the proposed dialogue understanding task. The DEQA dataset will be released for research use.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is supported by the National Natural Science Foundation of China (No. 62076081, No.
61772153 and No. 61936010) and Science and Technology Innovation 2030 Major Project of China (No. 2020AAA0108605).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bily-1981-experience","url":"https:\/\/aclanthology.org\/W81-0107","title":"Experience with COMMENTATOR, a computer system simulating verbal behaviour","abstract":"0. The project \"COMMENTATOR\" at the department of general linguistics at the university of Lund is intended to test ideas about language production. The system implemented in BASIC on the ABC 80 micro-computer generates a scene on the monitor where two persons, Adam and Eve, move randomly around a gate. Not only the present positions of Adam and Eve are shown on the screen but even the positions before the last \"jump\". This setting is also used for presenting human subjects the same sort of stimuli as the computer. The moves are generated randomly but the operator can choose the length of jumps. The initial placement of Adam and Eve can be determined by the operator, too, as well as the instruction for the machine concerning the \"focus of attention\" (Adam or Eve) and the primary goal of the focused actor (the gate or the other actor). On the operator's command the computer makes written comments on the development happening on the monitor screen. (The present version of COMMENTATOR comments in Swedish but it is intended to use the same set of abstract semantic predications \"perceived\" by COMMENTATOR for production in several languages, all according to the operator's choice. As COMMENTATOR is a research tool, it does not use any ready-made sentences describing foreseeable situations.) 1. The system works roughly as follows: From the primary information (the coordinates of the gate and the two actors) some more complex values are derived (distances, relations \"to left\", \"to right\", etc.). Then the topics and their \"goals\" are determined.\nAfter that the conditions are tested for the use of the abstract predicates in the given situation - the so-called question menu. This results in positive or negative abstract propositions. The abstract sentence constituents are ordered as subjects, predicates, and objects. Connective elements are added if possible. These connect the last propositions to the previous ones, i.e. conjunctions or connective adverbs are inserted in the proposition. The use of proper names, pronouns, or other NPs is chosen on the basis of reference relations to the preceding proposition. The abstract propositions are substituted by surface phrases and words. The assembled structure is printed. When the whole repertoire of comments is exhausted, a new situation is generated on the screen and the process is repeated. (For a more extensive description of the program and one version of the program itself see Sigurd 1980.) 2. To my knowledge, COMMENTATOR is the only system of its sort in Sweden, if not in the whole of Scandinavia, but there exist some related projects in other countries implemented on larger computers, such as SUPP described in Okada (1980). (SUPP is primarily aimed at recognition of picture patterns.) However, a lot of linguistic research has been done in recent years that will appear useful for the further development of automatic systems of this sort.
Badler (1975) is one example of descriptions relevant for COMMENTATOR;","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"al-omari-etal-2019-emodet","url":"https:\/\/aclanthology.org\/S19-2032","title":"EmoDet at SemEval-2019 Task 3: Emotion Detection in Text using Deep Learning","abstract":"Task 3, EmoContext, in the International Workshop SemEval 2019 provides training and testing datasets for the participant teams to detect emotion classes (Happy, Sad, Angry, or Others). This paper proposes a participating system (EmoDet) to detect emotions using a deep learning architecture. The main input to the system is a combination of Word2Vec word embeddings and a set of semantic features (e.g. from the AffectiveTweets Weka package). The proposed system (EmoDet) ensembles a fully connected neural network architecture and an LSTM neural network to obtain performance results that show substantial improvements (F1-score 0.67) over the baseline model provided by Task 3 organizers (F1-score 0.58).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"seroussi-etal-2014-authorship","url":"https:\/\/aclanthology.org\/J14-2003","title":"Authorship Attribution with Topic Models","abstract":"Authorship attribution deals with identifying the authors of anonymous texts. Traditionally, research in this field has focused on formal texts, such as essays and novels, but recently more attention has been given to texts generated by on-line users, such as e-mails and blogs. Authorship attribution of such on-line texts is a more challenging task than traditional authorship attribution, because such texts tend to be short, and the number of candidate authors is often larger than in traditional settings. We address this challenge by using topic models to obtain author representations. In addition to exploring novel ways of applying two popular topic models to this task, we test our new model that projects authors and documents to two disjoint topic spaces. Utilizing our model in authorship attribution yields state-of-the-art performance on several data sets, containing either formal texts written by a few authors or informal texts generated by tens to thousands of on-line users. We also present experimental results that demonstrate the applicability of topical author representations to two other problems: inferring the sentiment polarity of texts, and predicting the ratings that users would give to items such as movies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by grant LP0883416 from the Australian Research Council.
The authors thank Russell Smyth for the collaboration on initial results on the Judgment data set, Mark Carman for fruitful discussions on topic modeling, and the anonymous reviewers for their insightful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"camacho-collados-navigli-2016-find","url":"https:\/\/aclanthology.org\/W16-2508","title":"Find the word that does not belong: A Framework for an Intrinsic Evaluation of Word Vector Representations","abstract":"We present a new framework for an intrinsic evaluation of word vector representations based on the outlier detection task. This task is intended to test the capability of vector space models to create semantic clusters in the space. We carried out a pilot study building a gold standard dataset and the results revealed two important features: human performance on the task is extremely high compared to the standard word similarity task, and state-of-the-art word embedding models, whose current shortcomings were highlighted as part of the evaluation, still have considerable room for improvement.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We would like to thank Claudio Delli Bovi, Ilenia Giambruno, Ignacio Iacobacci, Massimiliano Mancini, Tommaso Pasini, Taher Pilehvar, and Alessandro Raganato for their help in the construction and evaluation of the outlier detection dataset. We would also like to thank Jim McManus for his comments on the manuscript.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kirov-etal-2018-unimorph","url":"https:\/\/aclanthology.org\/L18-1293","title":"UniMorph 2.0: Universal Morphology","abstract":"The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology across the world's languages. The project releases annotated morphological data using a universal tagset, the UniMorph schema. Each inflected form is associated with a lemma, which typically carries its underlying lexical meaning, and a bundle of morphological features from our schema. Additional supporting data and tools are also released on a per-language basis when available. UniMorph is based at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University in Baltimore, Maryland. This paper details advances made to the collection, annotation, and dissemination of project resources since the initial UniMorph release described at LREC 2016.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"he-etal-2008-improving","url":"https:\/\/aclanthology.org\/C08-1041","title":"Improving Statistical Machine Translation using Lexicalized Rule Selection","abstract":"This paper proposes a novel lexicalized approach for rule selection for syntax-based statistical machine translation (SMT). We build maximum entropy (MaxEnt) models which combine rich context information for selecting translation rules during decoding.
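Illustrative sketch for the camacho-collados-navigli-2016-find record above: a minimal example of the outlier detection task itself, flagging the word whose mean cosine similarity to the rest of the group is lowest. The vectors mapping is a hypothetical input; this is a sketch of the task, not the paper's evaluation code.

# Minimal sketch of the outlier detection task: given word vectors, return
# the word with the lowest average cosine similarity to the other words.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def find_outlier(words, vectors):
    """Return the word least similar, on average, to the rest of the group."""
    def compactness(w):
        sims = [cosine(vectors[w], vectors[o]) for o in words if o != w]
        return sum(sims) / len(sims)
    return min(words, key=compactness)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = {w: rng.normal(size=50) for w in ["cat", "dog", "horse", "laptop"]}
    print(find_outlier(list(toy), toy))  # with real embeddings: "laptop"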
We successfully integrate the MaxEnt-based rule selection models into the state-of-the-art syntax-based SMT model. Experiments show that our lexicalized approach for rule selection achieves statistically significant improvements over the state-of-the-art SMT system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to show our special thanks to Hwee Tou Ng, Liang Huang, Yajuan Lv and Yang Liu for their valuable suggestions. We also appreciate the anonymous reviewers for their detailed comments and recommendations. This work was supported by the National Natural Science Foundation of China (NO. 60573188 and 60736014), and the High Technology Research and Development Program of China (NO. 2006AA010108).","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lu-chu-2013-evaluation","url":"https:\/\/aclanthology.org\/Y13-1049","title":"Evaluation of Corpus Assisted Spanish Learning","abstract":"In the development of corpus linguistics, the creation of corpora has had a critical role in corpus-based studies. The majority of created corpora have been associated with English and native languages, while other languages and types of corpora have received relatively less attention. Because an increasing number of corpora have been constructed, and each corpus is constructed for a definite purpose, this study identifies the functions of corpora and combines the values of various types of corpora for auto-learning based on the existing corpora. Specifically, the following three corpora are adopted: (a) the Corpus of Spanish; (b) the Corpus of Taiwanese Learners of Spanish; and (c) the Parallel Corpus of Spanish, English, and Chinese. These corpora represent a type of native, learner, and parallel language, respectively. We apply these corpora as auxiliary resources to identify the advantages of applying various types of corpora in language learning from a learner's perspective. In the environment of auto-learning, 28 participants completed frequency questions related to semantic and lexical aspects. After analyzing the questionnaire data, we obtained the following findings: (a) the native corpus requires a more advanced level of Spanish proficiency to manage ampler and deeper context; (b) the learners' corpus facilitates the distinction between error and correction during the learning process; (c) the parallel corpus assists learners in connecting form and meaning; (d) learning is more efficient if the learner can capitalize on specific functions provided by various corpora in the application order of parallel, learner and native corpora.","label_nlp4sg":1,"task":["Evaluation of Corpus"],"method":["questionnaire"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"melby-etal-1980-interactive","url":"https:\/\/aclanthology.org\/C80-1064","title":"ITS: Interactive Translation System","abstract":"At COLING78 we reported on an interactive translation system now called ITS, which uses on-line man-machine interaction. This paper is an update on ITS with suggestions for future work. Summary of ITS: ITS is a second-generation machine translation system.
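Illustrative sketch for the he-etal-2008-improving record above: a multinomial logistic regression (equivalent in form to a maximum-entropy model) choosing among candidate translation rules from context features. The feature names and rule labels are hypothetical toys, not the paper's feature set.

# Minimal sketch of MaxEnt-style rule selection: a logistic regression over
# context features that picks a translation rule. Features are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_contexts = [
    {"left_word": "the", "right_word": "bank", "src_pos": "NN"},
    {"left_word": "river", "right_word": "flows", "src_pos": "NN"},
]
train_rules = ["RULE_financial", "RULE_river"]  # gold rule choices (toy)

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_contexts, train_rules)
print(model.predict([{"left_word": "the", "right_word": "bank", "src_pos": "NN"}]))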
Processing is divided into three major steps: analysis, transfer, and synthesis. Analysis is generally independent of the target language, and synthesis is nearly independent of the source language. The transfer step is dependent on both source and target languages. The intermediate representation produced by analysis, adjusted by transfer, and processed by synthesis is defined by Junction Grammar and consists of objects called J-trees.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1980,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hobbs-1986-overview","url":"https:\/\/aclanthology.org\/H86-1003","title":"Overview of the TACITUS Project","abstract":"The specific aim of the TACITUS project is to develop interpretation processes for handling casualty reports (casreps), which are messages in free-flowing text about breakdowns of machinery. These interpretation processes will be an essential component, and indeed the principal component, of systems for automatic message routing and systems for the automatic extraction of information from messages for entry into a data base or an expert system. In the latter application, for example, it is desirable to be able to recognize conditions in the message that instantiate conditions in the antecedents of the expert system's rules, so that the expert system can reason on the basis of more up-to-date and more specific information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1986,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"noh-etal-2011-pomy","url":"https:\/\/aclanthology.org\/W11-2043","title":"POMY: A Conversational Virtual Environment for Language Learning in POSTECH","abstract":"This demonstration will illustrate an interactive immersive computer game, POMY, designed to help Korean speakers learn English. This system allows learners to exercise their visual and aural senses, receiving a full immersion experience to increase their memory and concentration abilities to a greatest extent. In POMY, learners can have free conversations with game characters and receive corrective feedback to their errors. Game characters show various emotional expressions based on learners' input to keep learners motivated.
Through this system, learners can repeatedly practice conversations in an everyday-life setting in a foreign language with no embarrassment.","label_nlp4sg":1,"task":["Language Learning"],"method":["Conversational Virtual Environment"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Industrial Strategic technology development program, 10035252, development of dialog-based spontaneous speech interface technology on mobile platform, funded by the Ministry of Knowledge Economy (MKE, Korea), and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0019523).","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"haldar-etal-2021-dsc","url":"https:\/\/aclanthology.org\/2021.fnp-1.8","title":"DSC-IITISM at FinCausal 2021: Combining POS tagging with Attention-based Contextual Representations for Identifying Causal Relationships in Financial Documents","abstract":"Causality detection draws plenty of attention in the field of Natural Language Processing and linguistics research. It has essential applications in information retrieval, event prediction, question answering, financial analysis, and market research. In this study, we explore several methods to identify and extract cause-effect pairs in financial documents using transformers. For this purpose, we propose an approach that combines POS tagging with the BIO scheme, which can be integrated with modern transformer models to address this challenge of identifying causality in a given text. Our best methodology achieves an F1-Score of 0.9551, and an Exact Match Score of 0.8777 on the blind test in the FinCausal-2021 Shared Task at the FinCausal 2021 Workshop.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yeh-2003-bilingual","url":"https:\/\/aclanthology.org\/O03-2004","title":"Bilingual Sentence Alignment Based on Punctuation Marks","abstract":"We present a new approach to aligning English and Chinese sentences in parallel corpora based solely on punctuation marks. Although the length-based approach produces high accuracy rates of sentence alignment for clean parallel corpora written in two Western languages such as French-English and German-English, it does not fare as well for parallel corpora that are noisy or written in two distant languages such as Chinese-English. It is possible to use cognates on top of the length-based approach to increase alignment accuracy. However, cognates do not exist between two distant languages, therefore limiting the applicability of the cognate-based approach. In this paper, we examine the feasibility of using punctuation marks for high-accuracy sentence alignment. We have experimented with an implementation of the proposed method on the parallel corpus of the Chinese-English Sinorama Magazine Corpus with satisfactory results.
We also demonstrated that the method was applicable to other language pairs such as English-Japanese with minimal additional effort.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hershcovich-etal-2020-comparison","url":"https:\/\/aclanthology.org\/2020.coling-main.264","title":"Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics","abstract":"Building robust natural language understanding systems will require a clear characterization of whether and how various linguistic meaning representations complement each other. To perform a systematic comparative analysis, we evaluate the mapping between meaning representations from different frameworks using two complementary methods: (i) a rule-based converter, and (ii) a supervised delexicalized parser that parses to one framework using only information from the other as features. We apply these methods to convert the STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a graph-structured full-sentence meaning representation). Both methods yield surprisingly accurate target representations, close to fully supervised UCCA parser quality, indicating that UCCA annotations are partially redundant with STREUSLE annotations. Despite this substantial convergence between frameworks, we find several important areas of divergence.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by grant 2016375 from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel. ML is funded by a Google Focused Research Award. We acknowledge the computational resources provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL (www.nlpl.eu).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kluth-schultheis-2018-rating","url":"https:\/\/aclanthology.org\/W18-2807","title":"Rating Distributions and Bayesian Inference: Enhancing Cognitive Models of Spatial Language Use","abstract":"We present two methods that improve the assessment of cognitive models. The first method is applicable to models computing average acceptability ratings. For these models, we propose an extension that simulates a full rating distribution (instead of average ratings) and allows generating individual ratings. Our second method enables Bayesian inference for models generating individual data. To this end, we propose to use the cross-match test (Rosenbaum, 2005) as a likelihood function. We exemplarily present both methods using cognitive models from the domain of spatial language use. For spatial language use, determining linguistic acceptability judgments of a spatial preposition for a depicted spatial relation is assumed to be a crucial process (Logan and Sadler, 1996). Existing models of this process compute an average acceptability rating. We extend the models and, based on existing data, show that the extended models allow extracting more information from the empirical data and yield more readily interpretable information about model successes and failures.
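Illustrative sketch for the yeh-2003-bilingual record above: one way to score a candidate English-Chinese sentence pair purely by its punctuation, comparing the English punctuation sequence with the mapped Chinese one. The Chinese-to-English mark mapping here is a small hypothetical subset, and the similarity measure is an assumption, not the paper's alignment model.

# Minimal sketch of punctuation-based pair scoring for sentence alignment.
from difflib import SequenceMatcher

ZH2EN = {"，": ",", "。": ".", "！": "!", "？": "?", "；": ";", "：": ":"}
EN_MARKS = set(ZH2EN.values())

def punct_signature(text, mapping=None):
    """Extract the sequence of (mapped) punctuation marks from a sentence."""
    marks = []
    for ch in text:
        ch = (mapping or {}).get(ch, ch)
        if ch in EN_MARKS:
            marks.append(ch)
    return "".join(marks)

def punct_score(en_sent, zh_sent):
    """Similarity in [0, 1] between the two punctuation sequences."""
    a = punct_signature(en_sent)
    b = punct_signature(zh_sent, ZH2EN)
    return SequenceMatcher(None, a, b).ratio()

print(punct_score("Hello, world. How are you?", "你好，世界。你好吗？"))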
Applying Bayesian inference, we find that model performance relies less on mechanisms of capturing geometrical aspects than on mapping the captured geometry to a rating interval.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"collier-etal-2014-impact","url":"https:\/\/aclanthology.org\/W14-1103","title":"The impact of near domain transfer on biomedical named entity recognition","abstract":"Current research in fully supervised biomedical named entity recognition (bioNER) is often conducted in a setting of low sample sizes. Whilst experimental results show strong performance in-domain, it has been recognised that quality suffers when models are applied to heterogeneous text collections. However the causal factors have until now been uncertain. In this paper we describe a controlled experiment into near domain bias for two Medline corpora on hereditary diseases. Five strategies are employed for mitigating the impact of near domain transference including simple transference, pooling, stacking, class re-labeling and feature augmentation. We measure their effect on f-score performance against an in domain baseline. Stacking and feature augmentation mitigate f-score loss but do not necessarily result in superior performance except for selected classes. Simple pooling of data across domains failed to exploit size effects for most classes. We conclude that we can expect lower performance and higher annotation costs if we do not adequately compensate for the distributional dissimilarities of domains during learning.","label_nlp4sg":1,"task":["biomedical named entity recognition"],"method":["domain transfer","feature augmentation","pooling","stacking"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the many helpful comments from the anonymous reviewers of this paper. Nigel Collier's research is supported by the European Commission through the Marie Curie International Incoming Fellowship (IIF) programme (Project: Phenominer, Ref: 301806).","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hirabayashi-etal-2020-composing","url":"https:\/\/aclanthology.org\/2020.paclic-1.46","title":"Composing Word Vectors for Japanese Compound Words Using Bilingual Word Embeddings","abstract":"This study conducted an experiment to compare the word embeddings of a compound word and a word in Japanese on the same vector space using bilingual word embeddings. Japanese does not have word delimiters between words; thus, various word definitions exist according to dictionaries and corpora. We divided one corpus into words on the basis of two definitions, namely, shorter and ordinary words and longer compound words, and regarded two word-sequences as a parallel corpus of different languages. We then generated word embeddings from the corpora of these languages and mapped the vectors into the common space using monolingual mapping methods, a linear transformation matrix, and VecMap.
We evaluated our methods by synonym ranking using a thesaurus. Furthermore, we conducted experiments with two comparative methods: (1) a method where the compound words were divided into words and the word embeddings were averaged and (2) a method where the word embeddings of the latter words were regarded as those of the compound words. The VecMap results with the supervised option outperformed those with the identical option, the linear transformation matrix, and the latter-word method, but could not beat the average method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JSPS KAKENHI Grant Numbers 18K11421 and 17KK0002, and a project of the Younger Researchers Grants from Ibaraki University.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yung-etal-2015-sequential","url":"https:\/\/aclanthology.org\/W15-3101","title":"Sequential Annotation and Chunking of Chinese Discourse Structure","abstract":"We propose a linguistically driven approach to represent discourse relations in Chinese text as sequences. We observe that certain surface characteristics of Chinese texts, such as the order of clauses, are overt markers of discourse structures, yet existing annotation proposals adapted from formalism constructed for English do not fully incorporate these characteristics. We present an annotated resource consisting of 325 articles in the Chinese Treebank. In addition, using this annotation, we introduce a discourse chunker based on a cascade of classifiers and report 70% top-level discourse sense accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kinnunen-etal-2012-swan","url":"https:\/\/aclanthology.org\/E12-2005","title":"SWAN - Scientific Writing AssistaNt. A Tool for Helping Scholars to Write Reader-Friendly Manuscripts","abstract":"Difficulty of reading scholarly papers is significantly reduced by reader-friendly writing principles. Writing reader-friendly text, however, is challenging due to difficulty in recognizing problems in one's own writing. To help scholars identify and correct potential writing problems, we introduce the SWAN (Scientific Writing AssistaNt) tool. SWAN is a rule-based system that gives feedback based on various quality metrics based on years of experience from scientific writing classes including 960 scientists of various backgrounds: life sciences, engineering sciences and economics. According to our first experiences, users have perceived SWAN as helpful in identifying problematic sections in text and increasing overall clarity of manuscripts.","label_nlp4sg":1,"task":["identify and correct potential writing problems"],"method":["Scientific Writing AssistaNt","rule - based system"],"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":"The work of T. Kinnunen and T. Kakkonen was supported by the Academy of Finland.
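Illustrative sketch for the hirabayashi-etal-2020-composing record above: the averaging baseline the abstract mentions, composing a compound-word vector from its constituents. The vectors lookup is a hypothetical input; this is a sketch of the baseline, not the study's pipeline.

# Minimal sketch of the averaging baseline: compose a compound-word vector
# by averaging the vectors of its constituent words.
import numpy as np

def compose_by_average(constituents, vectors):
    """Average constituent vectors; skip words missing from the lookup."""
    known = [vectors[w] for w in constituents if w in vectors]
    if not known:
        raise KeyError(f"no vectors for any of {constituents}")
    return np.mean(known, axis=0)

vectors = {"電子": np.array([1.0, 0.0]), "辞書": np.array([0.0, 1.0])}
print(compose_by_average(["電子", "辞書"], vectors))  # -> [0.5 0.5]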
The authors would like to thank Arttu Viljakainen, Teemu Turunen and Zhengzhe Wu for implementing various parts of SWAN.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ha-yaneva-2018-automatic","url":"https:\/\/aclanthology.org\/W18-0548","title":"Automatic Distractor Suggestion for Multiple-Choice Tests Using Concept Embeddings and Information Retrieval","abstract":"Developing plausible distractors (wrong answer options) when writing multiple-choice questions has been described as one of the most challenging and time-consuming parts of the item-writing process. In this paper we propose a fully automatic method for generating distractor suggestions for multiple-choice questions used in high-stakes medical exams. The system uses a question stem and the correct answer as an input and produces a list of suggested distractors ranked based on their similarity to the stem and the correct answer. To do this we use a novel approach of combining concept embeddings with information retrieval methods. We frame the evaluation as a prediction task where we aim to \"predict\" the human-produced distractors used in large sets of medical questions, i.e. if a distractor generated by our system is good enough it is likely to feature among the list of distractors produced by the human item-writers. The results reveal that combining concept embeddings with information retrieval approaches significantly improves the generation of plausible distractors and enables us to match around 1 in 5 of the human-produced distractors. The approach proposed in this paper is generalisable to all scenarios where the distractors refer to concepts.","label_nlp4sg":1,"task":["Distractor Suggestion"],"method":["Concept Embeddings","Information Retrieval"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"konstas-lapata-2012-unsupervised","url":"https:\/\/aclanthology.org\/N12-1093","title":"Unsupervised Concept-to-text Generation with Hypergraphs","abstract":"Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (\"what to say\") and surface realization (\"how to say\") in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We represent our grammar compactly as a weighted hypergraph and recast generation as the task of finding the best derivation tree for a given input. Experimental evaluation on several domains achieves competitive results with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Percy Liang and Gabor Angeli for providing us with their code and data.
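Illustrative sketch for the ha-yaneva-2018-automatic record above: ranking candidate distractors by cosine similarity to the combined representation of question stem and correct answer. The embed callable and candidate list are hypothetical placeholders; the paper combines concept embeddings with information retrieval, which this sketch does not reproduce.

# Minimal sketch of similarity-based distractor ranking.
import numpy as np

def rank_distractors(stem_vec, answer_vec, candidates, embed):
    """Return candidates sorted by similarity to the stem+answer centroid."""
    query = (stem_vec + answer_vec) / 2.0
    def sim(c):
        v = embed(c)
        return float(np.dot(query, v) /
                     (np.linalg.norm(query) * np.linalg.norm(v)))
    return sorted(candidates, key=sim, reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    toy = {c: rng.normal(size=16) for c in ["aspirin", "ibuprofen", "granite"]}
    stem, ans = rng.normal(size=16), toy["aspirin"]
    print(rank_distractors(stem, ans, list(toy), toy.__getitem__))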
We would also like to thank Luke Zettlemoyer and Tom Kwiatkowski for sharing their ATIS dataset with us and Frank Keller for his feedback on an earlier version of this paper.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ren-etal-2021-novel","url":"https:\/\/aclanthology.org\/2021.emnlp-main.208","title":"A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling","abstract":"Table filling based relational triple extraction methods are attracting growing research interest due to their promising performance and their ability to extract triples from complex sentences. However, this kind of method is far from its full potential because most such methods only focus on using local features but ignore the global associations of relations and of token pairs, which increases the possibility of overlooking some important information during triple extraction. To overcome this deficiency, we propose a global feature-oriented triple extraction model that makes full use of the mentioned two kinds of global associations. Specifically, we first generate a table feature for each relation. Then two kinds of global associations are mined from the generated table features. Next, the mined global associations are integrated into the table feature of each relation. This \"generate-mine-integrate\" process is performed multiple times so that the table feature of each relation is refined step by step. Finally, each relation's table is filled based on its refined table feature, and all triples linked to this relation are extracted based on its filled table. We evaluate the proposed model on three benchmark datasets. Experimental results show our model is effective and it achieves state-of-the-art results on all of these datasets. The source code of our work is","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pustejovsky-etal-2005-merging","url":"https:\/\/aclanthology.org\/W05-0302","title":"Merging PropBank, NomBank, TimeBank, Penn Discourse Treebank and Coreference","abstract":"Many recent annotation efforts for English have focused on pieces of the larger problem of semantic annotation, rather than initially producing a single unified representation. This paper discusses the issues involved in merging four of these efforts into a unified linguistic structure: PropBank, NomBank, the Discourse Treebank and Coreference Annotation undertaken at the University of Essex. We discuss resolving overlapping and conflicting annotation as well as how the various annotation schemes can reinforce each other to produce a representation that is greater than the sum of its parts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"montgomery-etal-1992-language-systems","url":"https:\/\/aclanthology.org\/M92-1028","title":"Language Systems, Inc.
Description of the DBG System as Used for MUC-4","abstract":"LSI's Data Base Generation (DBG) system is a syntax-driven natural language processing system that integrates syntax and semantics to analyze message text. The goal of the DBG system is to perform full-scale lexical, syntactic, semantic, and discourse analyses of message text and produce a system-internal knowledge representation of the text that can serve as input to a downstream system or external data structure, such as the MUC-4 templates.\nDBG's development has been based on analysis of large volumes of message traffic (thousands of Air Force and Army messages) in five domains. The DBG internal knowledge representation has been mapped to external data structures for purposes of data base update, expert system update, and fusion of message content with the content of other messages and other information sources. Although our research on natural language understanding systems goes back almost 20 years, the actual implementations for the individual components of the system are all quite recent, generally occurring within the last two to five years. The texts in the various domains range from formal written messages to transcribed radiotelephone conversations. The DBG system has been formally tested on previously unseen messages in three of the domains, with competitive tests against humans performing the same task in two domains. Recently, the system has been adapted to the Machine Aided Voice Translation (MAVT) project. In this application, the system takes a \"live\" voice input sentence, uses a speech recognizer to convert it to written text, processes the written text, and generates a written translation of the sentence in the target language. This written output is then input to a speech generator to produce a voice translation of the original utterance. The languages processed thus far by this version of the system are English and Spanish, with translation in both directions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"novotny-etal-2021-one","url":"https:\/\/aclanthology.org\/2021.ranlp-1.120","title":"One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages","abstract":"Unsupervised representation learning of words from large multilingual corpora is useful for downstream tasks such as word sense disambiguation, semantic text similarity, and information retrieval. The representation precision of log-bilinear fastText models is mostly due to their use of subword information. In previous work, the optimization of fastText's subword sizes has not been fully explored, and non-English fastText models were trained using subword sizes optimized for English and German word analogy tasks. In our work, we find the optimal subword sizes on the English, German, Czech, Italian, Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We then propose a simple n-gram coverage model and we show that it predicts better-than-default subword sizes on the Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We show that the optimization of fastText's subword sizes matters and results in a 14% improvement on the Czech word analogy task.
We also show that expensive parameter optimization can be replaced by a simple n-gram coverage model that consistently improves the accuracy of fastText models on the word analogy tasks by up to 3% compared to the default subword sizes, and that it is within 1% accuracy of the optimal subword sizes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chu-etal-2021-unsupervised","url":"https:\/\/aclanthology.org\/2021.findings-acl.365","title":"Unsupervised Label Refinement Improves Dataless Text Classification","abstract":"Dataless text classification is capable of classifying documents into previously unseen labels by assigning a score to any document paired with a label description. While promising, it crucially relies on accurate descriptions of the label set for each downstream task. This reliance causes dataless classifiers to be highly sensitive to the choice of label descriptions and hinders the broader application of dataless classification in practice. In this paper, we ask the following question: how can we improve dataless text classification using the inputs of the downstream task dataset? Our primary solution is a clustering based approach. Given a dataless classifier, our approach refines its set of predictions using k-means clustering. We demonstrate the broad applicability of our approach by improving the performance of two widely used classifier architectures, one that encodes text-category pairs with two independent encoders and one with a single joint encoder. Experiments show that our approach consistently improves dataless classification across different datasets and makes the classifier more robust to the choice of label descriptions. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for helpful comments. This research was supported in part by a Bloomberg data science research grant to KS and KG.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-don-2017-splitting","url":"https:\/\/aclanthology.org\/W17-6307","title":"Splitting Complex English Sentences","abstract":"This paper applies parsing technology to the task of syntactic simplification of English sentences, focusing on the identification of text spans that can be removed from a complex sentence. We report the most comprehensive evaluation to date on this task, using a dataset of sentences that exhibit simplification based on coordination, subordination, punctuation\/parataxis, adjectival clauses, participial phrases, and appositive phrases.
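Illustrative sketch for the chu-etal-2021-unsupervised record above: clustering-based label refinement, where documents are clustered with k-means and each cluster is relabeled with the majority prediction of the initial dataless classifier. Inputs are hypothetical; this is a sketch of the idea as summarized in the abstract, not the paper's released code.

# Minimal sketch of k-means label refinement for a dataless classifier.
from collections import Counter

from sklearn.cluster import KMeans

def refine_labels(doc_embeddings, initial_labels, n_clusters):
    """Reassign each document the majority initial label of its cluster."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(doc_embeddings)
    refined = list(initial_labels)
    for c in range(n_clusters):
        members = [i for i, k in enumerate(clusters) if k == c]
        if not members:  # guard against an empty cluster
            continue
        majority = Counter(initial_labels[i] for i in members).most_common(1)[0][0]
        for i in members:
            refined[i] = majority
    return refined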
We train a decision tree with features derived from text span length, POS tags and dependency relations, and show that it significantly outperforms a parser-only baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iida-tokunaga-2014-building","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/155_Paper.pdf","title":"Building a Corpus of Manually Revised Texts from Discourse Perspective","abstract":"This paper presents the construction of a corpus of manually revised texts which includes both before- and after-revision information. In order to create such a corpus, we propose a procedure for revising a text from a discourse perspective, consisting of dividing a text into discourse units, organising and reordering groups of discourse units and finally modifying referring and connective expressions, each of which imposes limits on freedom of revision. Following the procedure, six revisers who have enough experience in either teaching Japanese or scoring Japanese essays revised 120 Japanese essays written by Japanese native speakers. Comparing the original and revised texts, we found some specific manual revisions frequently occurred between the original and revised texts, e.g. 'thesis' statements were frequently placed at the beginning of a text. We also evaluate text coherence using the original and revised texts on the task of pairwise information ordering, identifying a more coherent text. The experimental results using two text coherence models demonstrated that the two models did not outperform the random baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"diaz-torres-etal-2020-automatic","url":"https:\/\/aclanthology.org\/2020.trac-1.21","title":"Automatic Detection of Offensive Language in Social Media: Defining Linguistic Criteria to build a Mexican Spanish Dataset","abstract":"Phenomena such as bullying, homophobia, sexism and racism have transcended to social networks, motivating the development of tools for their automatic detection. The challenge becomes greater when speakers make use of popular sayings, colloquial expressions and idioms which may contain vulgar, profane or rude words, but not always have the intention to offend; a situation often found in the Mexican Spanish variant. Under these circumstances, the identification of the offense goes beyond the lexical and syntactic elements of the message. This first work aims to define the main linguistic features of aggressive, offensive and vulgar language in social networks in order to establish linguistic-based criteria to facilitate the identification of abusive language. For this purpose, a Mexican Spanish Twitter corpus was compiled and analyzed. The dataset included words that, despite being rude, need to be considered in context to determine whether they are part of an offense. Based on the analysis of this corpus, linguistic criteria were defined to determine whether a message is offensive. To simplify the application of these criteria, an easy-to-follow diagram was designed.
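Illustrative sketch for the lee-don-2017-splitting record above: a decision tree over span-length, POS and dependency features deciding whether a text span can be removed. The feature values and labels are hypothetical toys, not the paper's dataset.

# Minimal sketch of the span-removability classifier: a decision tree over
# hand-built features. Feature names and examples are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X = [
    {"span_len": 7, "first_pos": "VBG", "dep_rel": "partmod"},
    {"span_len": 3, "first_pos": "NN", "dep_rel": "nsubj"},
]
y = [1, 0]  # 1 = span is removable, 0 = keep

clf = make_pipeline(DictVectorizer(), DecisionTreeClassifier(random_state=0))
clf.fit(X, y)
print(clf.predict([{"span_len": 6, "first_pos": "VBG", "dep_rel": "partmod"}]))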
The paper presents an example of the use of the diagram, as well as the basic statistics of the corpus.","label_nlp4sg":1,"task":["Automatic Detection of Offensive Language"],"method":["Mexican Spanish Dataset","linguistic-based criteria"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"We would like to thank CONACyT for partially supporting this work under grants CB-2015-01-257383 and the Thematic Networks program (Language Technologies Thematic Network).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"lehtola-1986-dpl","url":"https:\/\/aclanthology.org\/W85-0114","title":"DPL -- a computational method for describing grammars and modelling parsers","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1986,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"regalado-etal-2015-salinlahi","url":"https:\/\/aclanthology.org\/W15-4413","title":"Salinlahi III: An Intelligent Tutoring System for Filipino Heritage Language Learners","abstract":"Heritage language learners are learners of the primary language of their parents, which they might have been exposed to but have not learned as a language they can fluently use to communicate with other people. Salinlahi, an Interactive Learning Environment, was developed to teach these young Filipino heritage learners about basic Filipino vocabulary while Salinlahi II included support for collaborative learning. With the aim of teaching learners with basic knowledge in Filipino, we developed Salinlahi III to teach higher level lessons focusing on Filipino grammar and sentence construction. An internal evaluation of the system has shown that the user interface and feedback of the tutor were appropriate. Moreover, in an external evaluation of the system, experimental and controlled field tests were done and results showed that there is a positive learning gain after using the system.","label_nlp4sg":1,"task":["teaching","grammar and sentence construction"],"method":["Tutoring System","Interactive Learning Environment"],"goal1":"Quality Education","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schwartz-etal-2017-story","url":"https:\/\/aclanthology.org\/W17-0907","title":"Story Cloze Task: UW NLP System","abstract":"This paper describes University of Washington NLP's submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task-the Story Cloze Task. Our system is a linear classifier with a variety of features, including both the scores of a neural language model and style features. We report 75.2% accuracy on the task. A further discussion of our results can be found in Schwartz et al. (2017).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank the shared task organizers and anonymous reviewers for feedback.
This research was supported in part by DARPA under the Communicating with Computers program.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"frermann-szarvas-2017-inducing","url":"https:\/\/aclanthology.org\/D17-1200","title":"Inducing Semantic Micro-Clusters from Deep Multi-View Representations of Novels","abstract":"Automatically understanding the plot of novels is important both for informing literary scholarship and applications such as summarization or recommendation. Various models have addressed this task, but their evaluation has remained largely intrinsic and qualitative. Here, we propose a principled and scalable framework leveraging expert-provided semantic tags (e.g., mystery, pirates) to evaluate plot representations in an extrinsic fashion, assessing their ability to produce locally coherent groupings of novels (micro-clusters) in model space. We present a deep recurrent autoencoder model that learns richly structured multi-view plot representations, and show that they i) yield better microclusters than less structured representations; and ii) are interpretable, and thus useful for further literary analysis or labelling of the emerging micro-clusters.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Alex Klementiev, Kevin Small, Joon Hao Chuah and Mohammad Kanso for their valuable insights, feedback and technical help on the work presented in this paper. We also thank the anonymous reviewers for their valuable feedback and suggestions.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rizov-2008-hydra","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/389_paper.pdf","title":"Hydra: a Modal Logic Tool for Wordnet Development, Validation and Exploration","abstract":"This paper presents a multipurpose system for wordnet (WN) development, named Hydra. Hydra is an application for data editing and validation, as well as for data retrieval and synchronization between wordnets for different languages. The use of modal language for wordnet, the representation of wordnet as a relational database and the concurrent access are among its main advantages (Rizov, 2006).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I am indebted to my scientific advisor Assoc. Prof. PhD. Tinko Tinchev for the valuable help and comments in my work on Hydra's design and development, as well as in discussing versions of this paper.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"soper-etal-2021-bart","url":"https:\/\/aclanthology.org\/2021.wnut-1.31","title":"BART for Post-Correction of OCR Newspaper Text","abstract":"Optical character recognition (OCR) from newspaper page images is susceptible to noise due to degradation of old documents and variation in typesetting. In this report, we present a novel approach to OCR postcorrection. We cast error correction as a translation task, and fine-tune BART, a transformerbased sequence-to-sequence language model pretrained to denoise corrupted text. 
We are the first to use sentence-level transformer models for OCR post-correction, and our best model achieves a 29.4% improvement in character accuracy over the original noisy OCR text. Our results demonstrate the utility of pretrained language models for dealing with noisy text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"In addition to the listed authors, Suraj Subraveti, Michael Brodie, Brent Carter, and Craig Whatcott also made important contributions to the work described in this report. We also thank the anonymous reviewers for their helpful feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hegde-talukdar-2015-entity","url":"https:\/\/aclanthology.org\/D15-1061","title":"An Entity-centric Approach for Overcoming Knowledge Graph Sparsity","abstract":"Automatic construction of knowledge graphs (KGs) from unstructured text has received considerable attention in recent research, resulting in the construction of several KGs with millions of entities (nodes) and facts (edges) among them. Unfortunately, such KGs tend to be severely sparse in terms of number of facts known for a given entity, i.e., have low knowledge density. For example, the NELL KG consists of only 1.34 facts per entity. Unfortunately, such low knowledge density makes it challenging to use such KGs in real-world applications. In contrast to best-effort extraction paradigms followed in the construction of such KGs, in this paper we argue in favor of ENTIty Centric Expansion (ENTICE), an entity-centric KG population framework, to alleviate the low knowledge density problem in existing KGs. By using ENTICE, we are able to increase NELL's knowledge density by a factor of 7.7 at 75.5% accuracy. Additionally, we are also able to extend the ontology discovering new relations and entities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by a gift from Google. Thanks to Uday Saini for carefully reading a draft of the paper.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gupta-carvalho-2019-committee","url":"https:\/\/aclanthology.org\/W19-4325","title":"On Committee Representations of Adversarial Learning Models for Question-Answer Ranking","abstract":"Adversarial training is a process in Machine Learning that explicitly trains models on adversarial inputs (inputs designed to deceive or trick the learning process) in order to make it more robust or accurate. In this paper we investigate how representing adversarial training models as committees can be used to effectively improve the performance of Question-Answer (QA) Ranking. We start by empirically probing the effects of adversarial training over multiple QA ranking algorithms, including the state-of-the-art Multihop Attention Network model. We evaluate these algorithms on several benchmark datasets and observe that, while adversarial training is beneficial to most baseline algorithms, there are cases where it may lead to overfitting and performance degradation. 
We investigate the causes of such degradation, and then propose a new representation procedure for this adversarial learning problem, based on committee learning, that not only is capable of consistently improving all baseline algorithms, but also outperforms the previous state-of-the-art algorithm by as much as 6% in NDCG (Normalized Discounted Cumulative Gain).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nn-1986-finite-string-newsletter-site","url":"https:\/\/aclanthology.org\/J86-2008","title":"The Finite String Newsletter: Site Report the ESPRIT Project LOKI","abstract":"Facilities: L'INC (Langauge, Information, and Computation) laboratory, which consists of a dedicated VAX 11\/785, 10 Symbolics Lisp machines, 7 HP 68020-based AI workstations, a SUN workstation, several Macintoshes, and a laser printer. These machines are networked together and to other research facilities in the department.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1986,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hamalainen-etal-2021-detecting","url":"https:\/\/aclanthology.org\/2021.wnut-1.3","title":"Detecting Depression in Thai Blog Posts: a Dataset and a Baseline","abstract":"We present the first openly available corpus for detecting depression in Thai. Our corpus is compiled by expert verified cases of depression in several online blogs. We experiment with two different LSTM based models and two different BERT based models. We achieve a 77.53% accuracy with a Thai BERT model in detecting depression. This establishes a good baseline for future researcher on the same corpus. Furthermore, we identify a need for Thai embeddings that have been trained on a more varied corpus than Wikipedia. Our corpus, code and trained models have been released openly on Zenodo.","label_nlp4sg":1,"task":["Detecting Depression"],"method":["BERT","LSTM"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"turchi-etal-2009-learning","url":"https:\/\/aclanthology.org\/2009.eamt-smart.6","title":"Learning to translate: a statistical and computational analysis","abstract":"Extensive study of a Phrase based SMT system using Moses, Europarl and a HPC cluster.\nTry to answer the previous questions by extrapolating the performance of the system under different conditions:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gi-etal-2021-verdict","url":"https:\/\/aclanthology.org\/2021.fever-1.7","title":"Verdict Inference with Claim and Retrieved Elements Using RoBERTa","abstract":"Automatic fact verification has attracted recent research attention as the increasing dissemination of disinformation on social media platforms. 
The FEVEROUS shared task introduces a benchmark for fact verification, in which a system is challenged to verify the given claim using the extracted evidential elements from Wikipedia documents. In this paper, we propose our 3rd-place three-stage system consisting of document retrieval, element retrieval, and verdict inference for the FEVEROUS shared task. By considering the context relevance in the fact extraction and verification task, our system achieves 0.29 FEVEROUS score on the development set and 0.25 FEVEROUS score on the blind test set, both outperforming the FEVEROUS baseline.","label_nlp4sg":1,"task":["Verdict Inference","fact extraction"],"method":["RoBERTa"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"kay-roscheisen-1993-text","url":"https:\/\/aclanthology.org\/J93-1006","title":"Text-Translation Alignment","abstract":"We present an algorithm for aligning texts with their translations that is based only on internal evidence. The relaxation process rests on a notion of which word in one text corresponds to which word in the other text that is essentially based on the similarity of their distributions. It exploits a partial alignment of the word level to induce a maximum likelihood alignment of the sentence level, which is in turn used, in the next iteration, to refine the word level estimate. The algorithm appears to converge to the correct sentence alignment in only a few iterations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fragkou-etal-2008-boemie","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/324_paper.pdf","title":"BOEMIE Ontology-Based Text Annotation Tool","abstract":"The huge amount of the available information in the Web creates the need of effective information extraction systems that are able to produce metadata that satisfy user's information needs. The development of such systems, in the majority of cases, depends on the availability of an appropriately annotated corpus in order to learn extraction models. The production of such corpora can be significantly facilitated by annotation tools that are able to annotate, according to a defined ontology, not only named entities but most importantly relations between them. This paper describes the BOEMIE ontology-based annotation tool which is able to locate blocks of text that correspond to specific types of named entities, fill tables corresponding to ontology concepts with those named entities and link the filled tables based on relations defined in the domain ontology. Additionally, it can perform annotation of blocks of text that refer to the same topic. The tool has a user-friendly interface, supports automatic pre-annotation, annotation comparison as well as customization to other annotation schemata. The annotation tool has been used in a large scale annotation task involving 3000 web pages regarding athletics. 
It has also been used in another annotation task involving 503 web pages with medical information, in different languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lane-etal-2021-computational","url":"https:\/\/aclanthology.org\/2021.dash-1.16","title":"A Computational Model for Interactive Transcription","abstract":"Transcribing low resource languages can be challenging in the absence of a comprehensive lexicon and proficient transcribers. Accordingly, we seek a way to enable interactive transcription, whereby the machine amplifies human efforts. This paper presents a computational model for interactive transcription, supporting multiple modes of interactivity and increasing the likelihood of finding tasks that stimulate local participation. The approach also supports other applications which are useful in low resource contexts, including spoken document retrieval and language learning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful for the support of the Warddeken Rangers of West Arnhem. This work was covered by a research permit from the Northern Land Council, and was sponsored by the Australian government through a PhD scholarship, and grants from the Australian Research Council and the Indigenous Language and Arts Program.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zalila-haddar-2011-construction","url":"https:\/\/aclanthology.org\/R11-1081","title":"Construction of an HPSG Grammar for the Arabic Relative Sentences","abstract":"The paper proposes a treatment of relative sentences within the framework of Head-driven Phrase Structure Grammar (HPSG). Relative sentences are considered as a rather delicate linguistic phenomenon and not explored enough by Arabic researchers. In an attempt to deal with this phenomenon, we propose in this paper a study about different forms of relative clauses and the interaction of relatives with other linguistic phenomena such as ellipsis and coordination. In addition, in this paper we shed light on the recursion in Arabic relative sentences which makes this phenomenon more delicate in its treatment. This study will be used for the construction of an HPSG grammar that can process relative sentences. The HPSG formalism is based on two fundamental components: features and AVM (Attribute-Value-Matrix). In fact, an adaptation of HPSG for the Arabic language is made here in order to integrate features and rules of the Arabic language. The established HPSG grammar is specified in TDL (Type Description Language). 
This specification is used by the LKB platform (Linguistic Knowledge Building) in order to generate the parser.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brierley-atwell-2008-proposel-prosody","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/724_paper.pdf","title":"ProPOSEL: A Prosody and POS English Lexicon for Language Engineering","abstract":"ProPOSEL is a prototype prosody and PoS (part-of-speech) English lexicon for Language Engineering, derived from the following language resources: the computer-usable dictionary CUVPlus, the CELEX-2 database, the Carnegie-Mellon Pronouncing Dictionary, and the BNC, LOB and Penn Treebank PoS-tagged corpora. The lexicon is designed for the target application of prosodic phrase break prediction but is also relevant to other machine learning and language engineering tasks. It supplements the existing record structure for wordform entries in CUVPlus with syntactic annotations from rival PoS-tagging schemes, mapped to fields for default closed and open-class word categories and for lexical stress patterns representing the rhythmic structure of wordforms and interpreted as potential new text-based features for automatic phrase break classifiers. The current version of the lexicon comes as a textfile of 104052 separate entries and is intended for distribution with the Natural Language ToolKit; it is therefore accompanied by supporting Python software for manipulating the data so that it can be used for Natural Language Processing (NLP) and corpus-based research in speech synthesis and speech recognition.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hengle-etal-2021-combining","url":"https:\/\/aclanthology.org\/2021.wanlp-1.46","title":"Combining Context-Free and Contextualized Representations for Arabic Sarcasm Detection and Sentiment Identification","abstract":"Since their inception, transformer-based language models have led to impressive performance gains across multiple natural language processing tasks. For Arabic, the current stateof-the-art results on most datasets are achieved by the AraBERT language model. Notwithstanding these recent advancements, sarcasm and sentiment detection persist to be challenging tasks in Arabic, given the language's rich morphology, linguistic disparity and dialectal variations. This paper proffers team SPPU-AASM's submission for the WANLP ArSarcasm shared-task 2021, which centers around the sarcasm and sentiment polarity detection of Arabic tweets. The study proposes a hybrid model, combining sentence representations from AraBERT with static word vectors trained on Arabic social media corpora. The proposed system achieves a F1-sarcastic score of 0.62 and a F-PN score of 0.715 for the sarcasm and sentiment detection tasks, respectively. Simulation results show that the proposed system outperforms multiple existing approaches for both the tasks, suggesting that the amalgamation of context-free and contextdependent text representations can help capture complementary facets of word meaning in Arabic. 
The system ranked second and tenth in the respective sub-tasks of sarcasm detection and sentiment identification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"langer-schulder-2020-collocations","url":"https:\/\/aclanthology.org\/2020.signlang-1.21","title":"Collocations in Sign Language Lexicography: Towards Semantic Abstractions for Word Sense Discrimination","abstract":"In general monolingual lexicography a corpus-based approach to word sense discrimination (WSD) is the current standard. Automatically generated lexical profiles such as Word Sketches provide an overview on typical uses in the form of collocate lists grouped by their part of speech categories and their syntactic dependency relations to the base item. Collocates are sorted by their typicality according to frequency-based rankings. With the advancement of sign language (SL) corpora, SL lexicography can finally be based on actual language use as reflected in corpus data. In order to use such data effectively and gain new insights on sign usage, automatically generated collocation profiles need to be developed under the special conditions and circumstances of the SL data available. One of these conditions is that many of the prerequesites for the automatic syntactic parsing of corpora are not yet available for SL. In this article we describe a collocation summary generated from DGS Corpus data which is used for WSD as well as in entry-writing. The summary works based on the glosses used for lemmatisation. In addition, we explore how other resources can be utilised to add an additional layer of semantic grouping to the collocation analysis. For this experimental approach we use glosses, concepts, and wordnet supersenses.","label_nlp4sg":1,"task":["Word Sense Discrimination"],"method":["collocation analysis"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":"This publication has been produced in the context of the joint research funding of the German Federal Government and Federal States in the Academies' Programme, with funding from the Federal Ministry of Education and Research and the Free and Hanseatic City of Hamburg. The Academies' Programme is coordinated by the Union of the Academies of Sciences and Humanities.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sokokov-etal-2013-boosting","url":"https:\/\/aclanthology.org\/D13-1175","title":"Boosting Cross-Language Retrieval by Learning Bilingual Phrase Associations from Relevance Rankings","abstract":"We present an approach to learning bilingual n-gram correspondences from relevance rankings of English documents for Japanese queries. We show that directly optimizing cross-lingual rankings rivals and complements machine translation-based cross-language information retrieval (CLIR). We propose an efficient boosting algorithm that deals with very large cross-product spaces of word correspondences. We show in an experimental evaluation on patent prior art search that our approach, and in particular a consensus-based combination of boosting and translation-based approaches, yields substantial improvements in CLIR performance. 
Our training and test data are made publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research presented in this paper was supported in part by DFG grant \"Cross-language Learning-to-Rank for Patent Retrieval\". We would like to thank Eugen Ruppert for his contribution to the ranking data construction.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-marneffe-etal-2021-universal","url":"https:\/\/aclanthology.org\/2021.cl-2.11","title":"Universal Dependencies","abstract":"Universal dependencies (UD) is a framework for morphosyntactic annotation of human language, which to date has been used to create treebanks for more than 100 languages. In this article, we outline the linguistic theory of the UD framework, which draws on a long tradition of typologically oriented grammatical theories. Grammatical relations between words are centrally used to explain how predicate-argument structures are encoded morphosyntactically in different languages while morphological features and part-of-speech classes give the properties of words. We argue that this theory is a good basis for crosslinguistically consistent annotation of typologically diverse languages in a way that supports computational natural language understanding as well as broader linguistic studies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many people have contributed to the development of UD, and we especially want to mention our colleagues in the UD core guidelines group, Filip Ginter, Yoav Goldberg, Jan Haji\u010d, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Sebastian Schuster, Natalia Silveira, Reut Tsarfaty, and Francis Tyers, as well as William Croft, Kim Gerdes, Sylvain Kahane, Nathan Schneider, and Amir Zeldes. We are grateful to Google for sponsoring the UD project in a number of ways, and to the Computational Linguistics reviewers for helpful suggestions. Daniel Zeman's and Joakim Nivre's contributions to this work were supported by grant GX20-16819X of the Czech Science Foundation and grant 2016-01817 of the Swedish Research Council, respectively.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-yang-2020-privnet","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.404","title":"PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation","abstract":"Transfer learning is an effective technique to improve a target recommender system with the knowledge from a source domain. Existing research focuses on the recommendation performance of the target domain while ignores the privacy leakage of the source domain. The transferred knowledge, however, may unintendedly leak private information of the source domain. For example, an attacker can accurately infer user demographics from their historical purchase provided by a source domain data owner. This paper addresses the above privacy-preserving issue by learning a privacyaware neural representation by improving target performance while protecting source privacy. The key idea is to simulate the attacks during the training for protecting unseen users' privacy in the future, modeled by an adversarial game, so that the transfer learning model becomes robust to attacks. 
Experiments show that the proposed PrivNet model can successfully disentangle the knowledge benefitting the transfer from leaking the privacy.","label_nlp4sg":1,"task":["Safeguarding Private Attributes"],"method":["adversarial game"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank Dr. Yu Zhang for insightful discussion. We thank the new publication paradigm, i.e., \"Findings of ACL: EMNLP 2020\", which makes","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"kessler-kuhn-2013-detection","url":"https:\/\/aclanthology.org\/D13-1194","title":"Detection of Product Comparisons - How Far Does an Out-of-the-Box Semantic Role Labeling System Take You?","abstract":"This short paper presents a pilot study investigating the training of a standard Semantic Role Labeling (SRL) system on product reviews for the new task of detecting comparisons. An (opinionated) comparison consists of a comparative \"predicate\" and up to three \"arguments\": the entity evaluated positively, the entity evaluated negatively, and the aspect under which the comparison is made. In user-generated product reviews, the \"predicate\" and \"arguments\" are expressed in highly heterogeneous ways; but since the elements are textually annotated in existing datasets, SRL is technically applicable. We address the interesting question how well training an out-of-the-box SRL model works for English data. We observe that even without any feature engineering or other major adaptions to our task, the system outperforms a reasonable heuristic baseline in all steps (predicate identification, argument identification and argument classification) and in three different datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper was supported by a Nuance Foundation Grant.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martin-2010-creating","url":"https:\/\/aclanthology.org\/2010.tc-1.2","title":"Creating your own translation memory repository","abstract":"SAS is a major international software company specializing in data processing software. Products encompass a wide range of statistical, business intelligence and analytical products for large enterprises in all sectors. These products are localized to 21 languages. Project management, engineering and terminology-related tasks supporting the localization process are carried out in the Copenhagen and Tokyo based localization offices. The following paper describes work carried out over several years by myself, Ronan Martin, and my colleague, Krzysztof Jozefowicz, both based in the Copenhagen office. In this introduction I would like to take the opportunity to explain my reasons for submitting a request to present this paper. Krzysztof Jozefowicz is the Localization Manager at ELC, SAS's European Localization Center. He has worked in the localization industry for 15 years and has an M.Sc.
in Computer Science from the Jagiellonian University in Cracow.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"partanen-etal-2019-dialect","url":"https:\/\/aclanthology.org\/D19-5519","title":"Dialect Text Normalization to Normative Standard Finnish","abstract":"We compare different LSTMs and transformer models in terms of their effectiveness in normalizing dialectal Finnish into the normative standard Finnish. As dialect is the common way of communication for people online in Finnish, such a normalization is a necessary step to improve the accuracy of the existing Finnish NLP tools that are tailored for normative Finnish text. We work on a corpus consisting of dialectal data from 23 distinct Finnish dialect varieties. The best functioning BRNN approach lowers the initial word error rate of the corpus from 52.89 to 5.73.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"crouch-king-2008-type","url":"https:\/\/aclanthology.org\/W08-0502","title":"Type-checking in Formally Non-typed Systems","abstract":"Type checking defines and constrains system output and intermediate representations. We report on the advantages of introducing multiple levels of type checking in deep parsing systems, even with untyped formalisms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-lee-2017-carrier","url":"https:\/\/aclanthology.org\/W17-5903","title":"Carrier Sentence Selection for Fill-in-the-blank Items","abstract":"Fill-in-the-blank items are a common form of exercise in computer-assisted language learning systems. To automatically generate an effective item, the system must be able to select a high-quality carrier sentence that illustrates the usage of the target word. Previous approaches for carrier sentence selection have considered sentence length, vocabulary difficulty, the position of the target word and the presence of finite verbs. This paper investigates the utility of word co-occurrence statistics and lexical similarity as selection criteria. In an evaluation on generating fill-in-the-blank items for learning Chinese as a foreign language, we show that these two criteria can improve carrier sentence quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moeller-hulden-2018-automatic","url":"https:\/\/aclanthology.org\/W18-4809","title":"Automatic Glossing in a Low-Resource Setting for Language Documentation","abstract":"Morphological analysis of morphologically rich and low-resource languages is important to both descriptive linguistics and natural language processing. 
Field efforts usually procure analyzed data in cooperation with native speakers who are capable of providing some level of linguistic information. Manually annotating such data is very expensive and the traditional process is arguably too slow in the face of language endangerment and loss. We report on a case study of learning to automatically gloss a Nakh-Daghestanian language, Lezgi, from a very small amount of seed data. We compare a conditional random field based sequence labeler and a neural encoder-decoder model and show that a nearly 0.9 F 1-score on labeled accuracy of morphemes can be achieved with 3,000 words of transcribed oral text. Errors are mostly limited to morphemes with high allomorphy. These results are potentially useful for developing rapid annotation and fieldwork tools to support documentation of other morphologically rich, endangered languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-du-2019-self","url":"https:\/\/aclanthology.org\/D19-1037","title":"Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction","abstract":"Distance supervision is widely used in relation extraction tasks, particularly when large-scale manual annotations are virtually impossible to conduct. Although Distantly Supervised Relation Extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e. some or all of the instances for an entity pair (head and tail entities) do not express the labelled relation. In this paper, we propose a novel model that employs a collaborative curriculum learning framework to reduce the effects of mislabelled data. Specifically, we firstly propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn a better sentence representation from the noisy inputs. Then we define two sentence selection models as two relation extractors in order to collaboratively learn and regularise each other under a curriculum scheme to alleviate noisy effects, where the curriculum could be constructed by conflicts or small loss. Finally, experiments are conducted on a widely-used public dataset and the results indicate that the proposed model significantly outperforms baselines including the state-of-the-art in terms of P@N and PR curve metrics, thus evidencing its capability of reducing noisy effects for DSRE.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank co-authors Jingguang Han and Sha Liu (jingguang.han, sha.liu@ucd.ie) and anonymous reviewers for these insightful comments and suggestions. We would like to thank Emer Gilmartin (gilmare@tcd.ie) for helpful comments and presentation improvements. 
This research is funded by the Enterprise-Ireland Innovation Partnership Programme (Grant IP2017626).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aharoni-goldberg-2017-towards","url":"https:\/\/aclanthology.org\/P17-2021","title":"Towards String-To-Tree Neural Machine Translation","abstract":"We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task shown improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A smallscale human evaluation also showed an advantage to the syntax-aware system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI), and The Israeli Science Foundation (grant number 1555\/15).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ma-etal-2021-issues","url":"https:\/\/aclanthology.org\/2021.acl-short.99","title":"Issues with Entailment-based Zero-shot Text Classification","abstract":"The general format of natural language inference (NLI) makes it tempting to be used for zero-shot text classification by casting any target label into a sentence of hypothesis and verifying whether or not it could be entailed by the input, aiming at generic classification applicable on any specified label space. In this opinion piece, we point out a few overlooked issues that are yet to be discussed in this line of work. We observe huge variance across different classification datasets amongst standard BERT-based NLI models and surprisingly find that pre-trained BERT without any fine-tuning can yield competitive performance against BERT fine-tuned for NLI. With the concern that these models heavily rely on spurious lexical patterns for prediction, we also experiment with preliminary approaches for more robust NLI, but the results are in general negative. Our observations reveal implicit but challenging difficulties in entailmentbased zero-shot text classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all the anonymous reviewers for their helpful comments on our submitted draft. The empirical studies conducted in this work were mostly based on the open-source repositories on GitHub from other papers as described earlier. 
We thank their original authors for sharing their implementation and we also publicly release our experimental scripts on GitHub.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"contractor-etal-2010-handling","url":"https:\/\/aclanthology.org\/D10-1009","title":"Handling Noisy Queries in Cross Language FAQ Retrieval","abstract":"Recent times have seen a tremendous growth in mobile based data services that allow people to use Short Message Service (SMS) to access these data services. In a multilingual society it is essential that data services that were developed for a specific language be made accessible through other local languages also. In this paper, we present a service that allows a user to query a Frequently-Asked-Questions (FAQ) database built in a local language (Hindi) using Noisy SMS English queries. The inherent noise in the SMS queries, along with the language mismatch makes this a challenging problem. We handle these two problems by formulating the query similarity over FAQ questions as a combinatorial search problem where the search space consists of combinations of dictionary variations of the noisy query and its top-N translations. We demonstrate the effectiveness of our approach on a real-life dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schwartz-2021-ensemble","url":"https:\/\/aclanthology.org\/2021.naacl-main.262","title":"Ensemble of MRR and NDCG models for Visual Dialog","abstract":"Assessing an AI agent that can converse in human language and understand visual content is challenging. Generation metrics, such as BLEU scores favor correct syntax over semantics. Hence a discriminative approach is often used, where an agent ranks a set of candidate options. The mean reciprocal rank (MRR) metric evaluates the model performance by taking into account the rank of a single human-derived answer. This approach, however, raises a new challenge: the ambiguity and synonymy of answers, for instance, semantic equivalence (e.g., 'yeah' and 'yes'). To address this, the normalized discounted cumulative gain (NDCG) metric has been used to capture the relevance of all the correct answers via dense annotations. However, the NDCG metric favors the usually applicable uncertain answers such as 'I don't know.' Crafting a model that excels on both MRR and NDCG metrics is challenging (Murahari et al., 2020). Ideally, an AI agent should answer a human-like reply and validate the correctness of any answer. To address this issue, we describe a two-step non-parametric ranking approach that can merge strong MRR and NDCG models. Using our approach, we manage to keep most MRR state-of-the-art performance (70.41% vs. 71.24%) and the NDCG state-of-the-art performance (72.16% vs. 75.35%). 
Moreover, our approach won the recent Visual Dialog 2020 challenge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Yftah Ziser, Itai Gat, Alexander Schwing and Tamir Hazan for useful discussions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2002-extracting","url":"https:\/\/aclanthology.org\/W02-1905","title":"Extracting Exact Answers to Questions Based on Structural Links","abstract":"This paper presents a novel approach to extracting phrase-level answers in a question answering system. This approach uses structural support provided by an integrated Natural Language Processing (NLP) and Information Extraction (IE) system. Both questions and the sentence-level candidate answer strings are parsed by this NLP\/IE system into binary dependency structures. Phrase-level answer extraction is modelled by comparing the structural similarity involving the question-phrase and the candidate answerphrase. There are two types of structural support. The first type involves predefined, specific entity associa tions such as Affiliation, Position, Age for a person entity. If a question asks about one of these associations, the answer-phrase can be determined as long as the system decodes such pre-defined dependency links correctly, despite the syntactic difference used in expressions between the question and the candidate answer string. The second type involves generic grammatical relationships such as V-S (verb-subject), V-O (verbobject). Preliminary experimental results show an improvement in both precision and recall in extracting phrase-level answers, compared with a baseline system which only uses Named Entity constraints. The proposed methods are particularly effective in cases where the question-phrase does not correspond to a known named entity type and in cases where there are multiple candidate answer-phrases satisfying the named entity constraints.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Walter Gadz and Carrie Pine of AFRL for supporting this work. Thanks also go to anonymous reviewers for their valuable comments.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mast-etal-2010-impact","url":"https:\/\/aclanthology.org\/W10-4320","title":"The Impact of Dimensionality on Natural Language Route Directions in Unconstrained Dialogue","abstract":"In this paper we examine the influence of dimensionality on natural language route directions in dialogue. Specifically, we show that giving route instructions in a quasi-3d environment leads to experiential descriptive accounts, as manifested by a higher proportion of location descriptions, lack of chunking, use of 1st person singular personal pronouns, and more frequent use of temporal and spatial deictic terms. 2d scenarios lead to informative instructions, as manifested by a frequent use of motion expressions, chunking of route elements, and use of mainly 2nd person singular personal pronouns.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Funding by the DFG for the SFB\/TR 8, project I5-[DiaSpace], is gratefully acknowledged. 
We thank the students who participated in our study, as well as Robert Porzel and Elena Andonova for their helpful advice.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barnes-etal-2021-nordial","url":"https:\/\/aclanthology.org\/2021.nodalida-main.51","title":"NorDial: A Preliminary Corpus of Written Norwegian Dialect Use","abstract":"Norway has a large amount of dialectal variation, as well as a general tolerance to its use in the public sphere. There are, however, few available resources to study this variation and its change over time and in more informal areas, e.g. on social media. In this paper, we propose a first step to creating a corpus of dialectal variation of written Norwegian. We collect a small corpus of tweets and manually annotate them as Bokm\u00e5l, Nynorsk, any dialect, or a mix. We further perform preliminary experiments with state-of-the-art models, as well as an analysis of the data to expand this corpus in the future. Finally, we make the annotations and models available for future work. 'jaei g\u00e5r', 'e g\u00e5'.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liew-etal-2016-emotweet","url":"https:\/\/aclanthology.org\/L16-1183","title":"EmoTweet-28: A Fine-Grained Emotion Corpus for Sentiment Analysis","abstract":"This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the annotators who volunteered in performing the annotation task. We are immensely grateful to Christine Larsen who partially funded the data collection under the Liddy Fellowship.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lai-etal-2016-better","url":"https:\/\/aclanthology.org\/W16-1405","title":"Better Together: Combining Language and Social Interactions into a Shared Representation","abstract":"Despite the clear inter-dependency between analyzing the interactions in social networks, and analyzing the natural language content of these interactions, these aspects are typically studied independently. In this paper we present a first step towards finding a joint representation, by embedding the two aspects into a single vector space. 
We show that the new representation can help improve performance in two social relations prediction tasks.","label_nlp4sg":1,"task":["Combining Language and Social Interactions"],"method":["embedding"],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers for their insightful comments. This research is supported by NSF under contract number IIS-1149789.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"evert-etal-2004-identifying","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/519.pdf","title":"Identifying Morphosyntactic Preferences in Collocations","abstract":"In this paper, we describe research that aims to make evidence on the morphosyntactic preferences of collocations available to lexicographers. Our methods for the extraction of appropriate frequency data and its statistical analysis are applied to the number and case preferences of German adjective+noun combinations in a small case study.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2003-phrasenet","url":"https:\/\/aclanthology.org\/W03-0412","title":"Phrasenet: towards context sensitive lexical semantics","abstract":"This paper introduces PhraseNet, a context-sensitive lexical semantic knowledge base system. Based on the supposition that semantic proximity is not simply a relation between two words in isolation, but rather a relation between them in their context, English nouns and verbs, along with contexts they appear in, are organized in PhraseNet into Consets; Consets capture the underlying lexical concept, and are connected with several semantic relations that respect contextually sensitive lexical information. PhraseNet makes use of WordNet as an important knowledge source. It enhances a WordNet synset with its contextual information and refines its relational structure by maintaining only those relations that respect contextual constraints. The contextual information allows for supporting more functionalities compared with those of WordNet. Natural language researchers as well as linguists and language learners can gain from accessing PhraseNet with a word token and its context, to retrieve relevant semantic information. We describe the design and construction of PhraseNet and give preliminary experimental evidence to its usefulness for NLP research. In applications ranging from
prepositional phrase attachment (Pantel and Lin, 2000; Stetina and Nagao, 1997), co-reference resolution (Ng and Cardie, 2002) to text summarization (Saggion and Lapalme, 2002), semantic information is a necessary component in the inference, by providing a level of abstraction that is necessary for robust decisions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sheshadri-etal-2021-wer","url":"https:\/\/aclanthology.org\/2021.eacl-main.320","title":"WER-BERT: Automatic WER Estimation with BERT in a Balanced Ordinal Classification Paradigm","abstract":"Automatic Speech Recognition (ASR) systems are evaluated using Word Error Rate (WER), which is calculated by comparing the number of errors between the ground truth and the transcription of the ASR system. This calculation, however, requires manual transcription of the speech signal to obtain the ground truth. Since transcribing audio signals is a costly process, Automatic WER Evaluation (e-WER) methods have been developed to automatically predict the WER of a speech system by only relying on the transcription and the speech signal features. While WER is a continuous variable, previous works have shown that positing eWER as a classification problem is more effective than regression. However, while converting to a classification setting, these approaches suffer from heavy class imbalance. In this paper, we propose a new balanced paradigm for eWER in a classification setting. Within this paradigm, we also propose WER-BERT, a BERT based architecture with speech features for eWER. Furthermore, we introduce a distance loss function to tackle the ordinal nature of eWER classification. The proposed approach and paradigm are evaluated on the Librispeech dataset and a commercial (black box) ASR system, Google Cloud's Speech-to-Text API. The results and experiments demonstrate that WER-BERT establishes a new state-of-the-art in automatic WER estimation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"henderson-2020-unstoppable","url":"https:\/\/aclanthology.org\/2020.acl-main.561","title":"The Unstoppable Rise of Computational Linguistics in Deep Learning","abstract":"In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. 
This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Paola Merlo, Suzanne Stevenson, Ivan Titov, members of the Idiap NLU group, and the anonymous reviewers for their comments and suggestions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bokaie-hosseini-etal-2020-identifying","url":"https:\/\/aclanthology.org\/2020.privatenlp-1.3","title":"Identifying and Classifying Third-party Entities in Natural Language Privacy Policies","abstract":"App developers often raise revenue by contracting with third party ad networks, which serve targeted ads to end-users. To this end, a free app may collect data about its users and share it with advertising companies for targeting purposes. Regulations such as General Data Protection Regulation (GDPR) require transparency with respect to the recipients (or categories of recipients) of user data. These regulations call for app developers to have privacy policies that disclose those third party recipients of user data. Privacy policies provide users transparency into what data an app will access, collect, shared, and retain. Given the size of app marketplaces, verifying compliance with such regulations is a tedious task. This paper aims to develop an automated approach to extract and categorize third party data recipients (i.e., entities) declared in privacy policies. We analyze 100 privacy policies associated with most downloaded apps in the Google Play Store. We crowdsource the collection and annotation of app privacy policies to establish the ground truth with respect to third party entities. From this, we train various models to extract third party entities automatically. Our best model achieves average F1 score of 66% when compared to crowdsourced annotations.","label_nlp4sg":1,"task":["Identifying and Classifying Third - party Entities"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank\u00c1lvaro Feal Fajardo for his participation as an expert in the pilot study. We also thank the Usable Security and Privacy Group at ICSI for their constructive feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"mahabal-etal-2019-text","url":"https:\/\/aclanthology.org\/N19-1319","title":"Text Classification with Few Examples using Controlled Generalization","abstract":"Training data for text classification is often limited in practice, especially for applications with many output classes or involving many related classification problems. This means classifiers must generalize from limited evidence, but the manner and extent of generalization is task dependent. Current practice primarily relies on pre-trained word embeddings to map words unseen in training to similar seen ones. Unfortunately, this squishes many components of meaning into highly restricted capacity. Our alternative begins with sparse pre-trained representations derived from unlabeled parsed corpora; based on the available training data, we select features that offers the relevant generalizations. 
This produces task-specific semantic vectors; here, we show that a feed-forward network over these vectors is especially effective in low-data scenarios, compared to existing state-of-the-art methods. By further pairing this network with a convolutional neural network, we keep this edge in low-data scenarios and remain competitive when using full training sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our anonymous reviewers and the Google AI Language team, especially Rahul Gupta, Tania Bedrax-Weiss and Emily Pitler, for the insightful comments that contributed to this paper.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dalessio-etal-1998-category","url":"https:\/\/aclanthology.org\/W98-1508","title":"Category Levels in Hierarchical Text Categorization","abstract":"We consider the problem of assigning level numbers (weights) to hierarchically organized categories during text categorization. These levels control the ability of the categories to attract documents during the categorization process. The levels are adjusted to obtain a balance between recall and precision for each category. If a category's recall exceeds its precision, the category is too strong and its level is reduced. Conversely, a category's level is increased to strengthen it if its precision exceeds its recall. The categorization algorithm used is a supervised learning procedure that uses a linear classifier based on the category levels. We are given a set of categories, organized hierarchically. We are also given a training corpus of documents already placed in one or more categories. From these, we extract vocabulary, words that appear with high frequency within a given category, characterizing each subject area. Each node's vocabulary is filtered and its words assigned weights with respect to the specific category. Then, test documents are scanned and categories ranked based on the presence of vocabulary terms. Documents are assigned to categories based on these rankings. We demonstrate that precision and recall can be significantly improved by solving the categorization problem taking hierarchy into account. Specifically, we show that by adjusting the category levels in a principled way, precision can be significantly improved, from 84% to 91%, on the much-studied Reuters-21578 corpus organized in a three-level hierarchy of categories.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fourtassi-etal-2019-development","url":"https:\/\/aclanthology.org\/W19-2914","title":"The Development of Abstract Concepts in Children's Early Lexical Networks","abstract":"How do children learn abstract concepts such as animal vs. artifact? Previous research has suggested that such concepts can partly be derived using cues from the language children hear around them. Following this suggestion, we propose a model where we represent the children's developing lexicon as an evolving network.
The nodes of this network are based on vocabulary knowledge as reported by parents, and the edges between pairs of nodes are based on the probability of their co-occurrence in a corpus of child-directed speech. We found that several abstract categories can be identified as the dense regions in such networks. In addition, our simulations suggest that these categories develop simultaneously, rather than sequentially, thanks to the children's word learning trajectory which favors the exploration of the global conceptual space.","label_nlp4sg":1,"task":["Children learning"],"method":["evolving network"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vulic-mrksic-2018-specialising","url":"https:\/\/aclanthology.org\/N18-1103","title":"Specialising Word Vectors for Lexical Entailment","abstract":"We present LEAR (Lexical Entailment Attract-Repel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymy-hypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNet-style hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their insightful comments and suggestions. We are also grateful to the TakeLab research group at the University of Zagreb for offering support to computationally intensive experiments in our hour of need. This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lavie-1994-integrated","url":"https:\/\/aclanthology.org\/P94-1045","title":"An Integrated Heuristic Scheme for Partial Parse Evaluation","abstract":"GLR* is a recently developed robust version of the Generalized LR Parser [Tomita, 1986] that can parse almost any input sentence by ignoring unrecognizable parts of the sentence. On a given input sentence, the parser returns a collection of parses that correspond to maximal, or close to maximal, parsable subsets of the original input. This paper describes recent work on developing an integrated heuristic scheme for selecting the parse that is deemed \"best\" from such a collection. We describe the heuristic measures used and their combination scheme.
Preliminary results from experiments conducted on parsing speech-recognized spontaneous speech are also reported.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ahmed-etal-2011-challenges","url":"https:\/\/aclanthology.org\/W11-3501","title":"Challenges in Designing Input Method Editors for Indian Languages: The Role of Word-Origin and Context","abstract":"Back-transliteration based Input Method Editors are very popular for Indian Languages. In this paper we evaluate two such Indic language systems to help understand the challenge of designing a back-transliteration based IME. Through a detailed error-analysis of Hindi, Bangla and Telugu data, we study the role of phonological features of Indian scripts that are reflected as variations and ambiguity in the transliteration. The impact of word-origin on back-transliteration is discussed in the context of code-switching. We also explore the role of word-level context to help overcome some of these challenges.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"harris-1995-translators","url":"https:\/\/aclanthology.org\/1995.tc-1.5","title":"Translators, the Internet and standing still","abstract":"It is cheap and easy to do work directly, using the Telnet service, on your customer's computer from wherever you may be in the world to wherever your customer may be in the world.\nd. Allows easy, direct access to your customer's on-line style books, reference material and personnel. You can also \"talk\", using interactive typing, with a colleague. Voice systems for real conversations over the Internet do exist but only one person may speak at a time and the audio quality is not as good as that of conventional telephones. Both speakers must have the same software and equipment installed and running at the time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2014-joint-inference","url":"https:\/\/aclanthology.org\/D14-1205","title":"Joint Inference for Knowledge Base Population","abstract":"Populating a Knowledge Base (KB) with new knowledge facts from reliable text resources usually consists of linking name mentions to KB entities and identifying relationships between entity pairs. However, the task often suffers from errors propagating from upstream entity linkers to downstream relation extractors. In this paper, we propose a novel joint inference framework to allow interactions between the two subtasks and find an optimal assignment by addressing the coherence among preliminary local predictions: whether the types of entities meet the expectations of relations explicitly or implicitly, and whether the local predictions are globally compatible. We further measure the confidence of the extracted triples by looking at the details of the complete extraction process.
Experiments show that the proposed framework can significantly reduce error propagation and thus obtain more reliable facts, and outperforms competitive baselines with state-of-the-art relation extraction models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Heng Ji, Kun Xu, Dong Wang and Junyang Rao for their helpful discussions and the anonymous reviewers for their insightful comments that improved the work considerably. This work was supported by the National High Technology R&D Program of China (Grant No. 2012AA011101, 2014AA015102), National Natural Science Foundation of China (Grant No. 61272344, 61202233, 61370055) and the joint project with IBM Research. Any correspondence please refer to Yansong Feng.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"murakami-etal-2009-annotating","url":"https:\/\/aclanthology.org\/W09-3027","title":"Annotating Semantic Relations Combining Facts and Opinions","abstract":"As part of the STATEMENT MAP project, we are constructing a Japanese corpus annotated with the semantic relations bridging facts and opinions that are necessary for online information credibility evaluation. In this paper, we identify the semantic relations essential to this task and discuss how to efficiently collect valid examples from Web documents by splitting complex sentences into fundamental units of meaning called \"statements\" and annotating relations at the statement level. We present a statement annotation scheme and examine its reliability by annotating around 1,500 pairs of statements. We are preparing the corpus for release this winter.","label_nlp4sg":1,"task":["Annotating Semantic Relations"],"method":["Japanese corpus","annotation scheme"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Institute of Information and Communications Technology Japan.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"howald-abramson-2012-use","url":"https:\/\/aclanthology.org\/S12-1006","title":"The Use of Granularity in Rhetorical Relation Prediction","abstract":"We present the results of several machine learning tasks designed to predict rhetorical relations that hold between clauses in discourse. We demonstrate that organizing rhetorical relations into different granularity categories (based on relative degree of detail) increases average prediction accuracy from 58% to 70%. Accuracy further increases to 80% with the inclusion of clause types.
These results, which are competitive with existing systems, hold across several modes of written discourse and suggest that features of information structure are an important consideration in the machine learnability of discourse.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thank you to Jeff Ondich and Ultralingua for facilitating this research and to four anonymous *SEM reviewers for insightful and constructive comments.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chernyak-etal-2019-char","url":"https:\/\/aclanthology.org\/W19-1404","title":"Char-RNN for Word Stress Detection in East Slavic Languages","abstract":"We explore how well a sequence labeling approach, namely, a recurrent neural network, is suited for the task of resource-poor and POS-tagging-free word stress detection in the Russian, Ukrainian, and Belarusian languages. We present new datasets, annotated with word stress, for the three languages and compare several RNN models trained on three languages and explore possible applications of transfer learning for the task. We show that it is possible to train a model in a cross-lingual setting and that using additional languages improves the quality of the results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2015-domain","url":"https:\/\/aclanthology.org\/Q15-1020","title":"Domain Adaptation for Syntactic and Semantic Dependency Parsing Using Deep Belief Networks","abstract":"In current systems for syntactic and semantic dependency parsing, people usually define a very high-dimensional feature space to achieve good performance. But these systems often suffer severe performance drops on out-of-domain test data due to the diversity of features of different domains. This paper focuses on how to relieve this domain adaptation problem with the help of unlabeled target domain data. We propose a deep learning method to adapt both syntactic and semantic parsers. With additional unlabeled target domain data, our method can learn a latent feature representation (LFR) that is beneficial to both domains. Experiments on English data in the CoNLL 2009 shared task show that our method largely reduced the performance drop on out-of-domain test data. Moreover, we get a Macro F1 score that is 2.32 points higher than the best system in the CoNLL 2009 shared task in out-of-domain tests.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research work has been partially funded by the Natural Science Foundation of China under Grant No.61333018 and supported by the West Light Foundation of Chinese Academy of Sciences under Grant No.LHXZ201301.
We thank the three anonymous reviewers and the Action Editor for their helpful comments and suggestions.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nasr-etal-2011-macaon","url":"https:\/\/aclanthology.org\/P11-4015","title":"MACAON An NLP Tool Suite for Processing Word Lattices","abstract":"MACAON is a tool suite for standard NLP tasks developed for French. MACAON has been designed to process both human-produced text and highly ambiguous word-lattices produced by NLP tools. MACAON is made of several native modules for common tasks such as tokenization, part-of-speech tagging, or syntactic parsing, all communicating with each other through XML files. In addition, exchange protocols with external tools are easily definable. MACAON is a fast, modular and open tool, distributed under GNU Public License.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mihaylov-frank-2016-discourse","url":"https:\/\/aclanthology.org\/K16-2014","title":"Discourse Relation Sense Classification Using Cross-argument Semantic Similarity Based on Word Embeddings","abstract":"This paper describes our system for the CoNLL 2016 Shared Task's supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the German Research Foundation as part of the Research Training Group \"Adaptive Preparation of Information from Heterogeneous Sources\" (AIPHES) under grant No. GRK 1994\/1. We thank Ana Marasovi\u0107 for her advice in the implementation of CNN models.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcdonald-etal-2018-deep","url":"https:\/\/aclanthology.org\/D18-1211","title":"Deep Relevance Ranking Using Enhanced Document-Query Interactions","abstract":"We explore several new models for document relevance ranking, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016). Unlike DRMM, which uses context-insensitive encodings of terms and query-document term interactions, we inject rich context-sensitive encodings throughout our models, inspired by PACRR's (Hui et al., 2017) convolutional n-gram matching features, but extended in several ways including multiple views of query and document inputs.
We test our models on datasets from the BIOASQ question answering challenge (Tsatsaronis et al., 2015) and TREC ROBUST 2004 (Voorhees, 2005), showing they outperform BM25-based baselines, DRMM, and PACRR.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers for their constructive feedback that greatly improved this work. Oscar T\u00e4ckstr\u00f6m gave thorough input on an early draft of this work. Finally, AUEB's NLP group provided many suggestions over the course of the work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yao-etal-2010-practical","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/675_Paper.pdf","title":"Practical Evaluation of Speech Recognizers for Virtual Human Dialogue Systems","abstract":"We perform a large-scale evaluation of multiple off-the-shelf speech recognizers across diverse domains for virtual human dialogue systems. Our evaluation is aimed at speech recognition consumers and potential consumers with limited experience with readily available recognizers. We focus on practical factors to determine what levels of performance can be expected from different available recognizers in various projects featuring different types of conversational utterances. Our results show that there is no single recognizer that outperforms all other recognizers in all domains. The performance of each recognizer may vary significantly depending on the domain, the size and perplexity of the corpus, the out-of-vocabulary rate, and whether acoustic and language model adaptation has been used or not. We expect that our evaluation will prove useful to other speech recognition consumers, especially in the dialogue community, and will shed some light on the key problem in spoken dialogue systems of selecting the most suitable available speech recognition system for a particular application, and what impact training will have.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nn-1992-mt","url":"https:\/\/aclanthology.org\/1992.tmi-1.24","title":"MT News International no. 3","abstract":"The Japan Association for Machine Translation (JAMT) has agreed to extend its activities to Asian and Pacific areas after its one year's activity which included seven issues of JAMT Journal, a symposium (MT World 92) and an MT Workshop. The new association, established 17 June 1992, is to be called the Asian Pacific Association for Machine Translation (to be abbreviated as AAMT). The association has been formed with the full support of MT and NLP researchers of several countries in Asian Pacific areas including China, Korea, Thailand, Malaysia, Indonesia, Singapore, India, and the Taiwan area. AAMT will be starting soon to distribute membership registration forms and the first issue of AAMT Journal throughout the Asian Pacific areas with the aim of attracting as many members as possible. The journal will retain the essential features of the past issues of JAMT Journal. With the foundation of AAMT, the Japan association has brought its activities to an end under the chairmanship of the President.
The languages of AAMT will be English and Japanese, and its secretariat will be based, initially at least, in Japan with a Secretary-General appointed by the President. Further details may be obtained from the present JAMT Secretariat (Mrs Megumi Okita),","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"recasens-etal-2010-typology","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/160_Paper.pdf","title":"A Typology of Near-Identity Relations for Coreference (NIDENT)","abstract":"The task of coreference resolution requires people or systems to decide when two referring expressions refer to the 'same' entity or event. In real text, this is often a difficult decision because identity is never adequately defined, leading to contradictory treatment of cases in previous work. This paper introduces the concept of 'near-identity', a middle ground category between identity and non-identity, to handle such cases systematically. We present a typology of Near-Identity Relations (NIDENT) that includes fifteen types-grouped under four main families-that capture a wide range of ways in which (near-)coreference relations hold between discourse entities. We validate the theoretical model by annotating a small sample of real data and showing that inter-annotator agreement is high enough for stability (K= 0.58, and up to K= 0.65 and K= 0.84 when leaving out one and two outliers, respectively). This work enables subsequent creation of the first internally consistent language resource of this type through larger annotation efforts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Jerry Hobbs for his insight and to the annotators: David Halpern, Peggy Ho, Justin James, and Rita Zaragoza. This work was supported in part by FPU Grant AP2006-00994 from the Spanish Ministry of Education, and TEXT-MESS 2.0 (TIN2009-13391-C04-04) Project.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"murty-etal-2021-dreca","url":"https:\/\/aclanthology.org\/2021.naacl-main.88","title":"DReCa: A General Task Augmentation Strategy for Few-Shot Natural Language Inference","abstract":"Meta-learning promises few-shot learners that quickly adapt to new distributions by repurposing knowledge acquired from previous training. However, we believe meta-learning has not yet succeeded in NLP due to the lack of a well-defined task distribution, leading to attempts that treat datasets as tasks. Such an ad hoc task distribution causes problems of quantity and quality. Since there's only a handful of datasets for any NLP problem, meta-learners tend to overfit their adaptation mechanism and, since NLP datasets are highly heterogeneous, many learning episodes have poor transfer between their support and query sets, which discourages the meta-learner from adapting. To alleviate these issues, we propose DRECA (Decomposing datasets into Reasoning Categories), a simple method for discovering and using latent reasoning categories in a dataset, to form additional high quality tasks. 
DRECA works by splitting examples into label groups, embedding them with a finetuned BERT model and then clustering each group into reasoning categories. Across four few-shot NLI problems, we demonstrate that using DRECA improves the accuracy of meta-learners by 1.5-4%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Eric Mitchell, Robin Jia, Alex Tamkin, John Hewitt, Pratyusha Sharma and the anonymous reviewers for helpful comments. The authors would also like to thank other members of the Stanford NLP group for feedback on an early draft of the paper. This work has been partially supported by JD.com American Technologies Corporation (\"JD\") under the SAIL-JD AI Research Initiative and partially by Toyota Research Institute (\"TRI\"). This article solely reflects the opinions and conclusions of its authors and not JD, any entity associated with JD.com, TRI, or any other Toyota entity. Christopher Manning is a CIFAR Fellow.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"voorhees-2008-contradictions","url":"https:\/\/aclanthology.org\/P08-1008","title":"Contradictions and Justifications: Extensions to the Textual Entailment Task","abstract":"The third PASCAL Recognizing Textual Entailment Challenge (RTE-3) contained an optional task that extended the main entailment task by requiring a system to make three-way entailment decisions (entails, contradicts, neither) and to justify its response. Contradiction was rare in the RTE-3 test set, occurring in only about 10% of the cases, and systems found accurately detecting it difficult. Subsequent analysis of the results shows a test set must contain many more entailment pairs for the three-way decision task than the traditional two-way task to have equal confidence in system comparisons. Each of six human judges representing eventual end users rated the quality of a justification by assigning \"understandability\" and \"correctness\" scores. Ratings of the same justification across judges differed significantly, signaling the need for a better characterization of the justification task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The extended task of RTE-3 was supported by the Disruptive Technology Office (DTO) AQUAINT program. Thanks to fellow coordinators of the task, Chris Manning and Dan Moldovan, and to the participants for making the task possible.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"prabhumoye-etal-2019-towards","url":"https:\/\/aclanthology.org\/N19-1269","title":"Towards Content Transfer through Grounded Text Generation","abstract":"Recent work in neural generation has attracted significant interest in controlling the form of text, such as style, persona, and politeness. However, there has been less work on controlling neural text generation for content. This paper introduces the notion of Content Transfer for long-form text generation, where the task is to generate a next sentence in a document that both fits its context and is grounded in a content-rich external textual source such as a news story. Our experiments on Wikipedia data show significant improvements against competitive baselines. 
As another contribution of this paper, we release a benchmark dataset of 640k Wikipedia referenced sentences paired with the source articles to encourage exploration of this new task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous reviewers, as well as Alan W. Black, Chris Brockett, Bill Dolan, Sujay Jauhar, Michael Gamon, Jianfeng Gao, Dheeraj Rajagopal, and Xuchao Zhang for their helpful comments and suggestions on this work. We also thank Emily Ahn, Khyati Chandu, Ankush Das, Priyank Lathwal, and Dheeraj Rajagopal for their help with the human evaluation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vossen-etal-2014-newsreader","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/436_Paper.pdf","title":"NewsReader: recording history from daily news streams","abstract":"The European project NewsReader develops technology to process daily news streams in 4 languages, extracting what happened, when, where and who was involved. NewsReader does not just read a single newspaper but massive amounts of news coming from thousands of sources. It compares the results across sources to complement information and determine where they disagree. Furthermore, it merges news of today with previous news, creating a long-term history rather than separate events. The result is stored in a KnowledgeStore, which cumulates information over time, producing an extremely large knowledge graph that is visualized using new techniques to provide more comprehensive access. We present the first version of the system and the results of processing first batches of data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the EC within the 7th framework programme under grant agreement nr. FP7-IST-316040.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"park-etal-2015-question","url":"https:\/\/aclanthology.org\/N15-3023","title":"Question Answering System using Multiple Information Source and Open Type Answer Merge","abstract":"This paper presents a multi-strategy and multi-source question answering (QA) system that can use multiple strategies to both answer natural language (NL) questions and respond to keywords. We use multiple information sources including curated knowledge base, raw text, auto-generated triples, and NL processing results.
We develop an open semantic answer type detector for answer merging and improve previously developed single QA modules such as knowledge-base-based QA and information-retrieval-based QA.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the ICT R&D program of MSIP\/IITP [R0101-15-0176, Development of Core Technology for Human-like Self-taught Learning based on a Symbolic Approach].","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"finkel-etal-2005-incorporating","url":"https:\/\/aclanthology.org\/P05-1045","title":"Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling","abstract":"Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program. Additionally, we would like to thank our reviewers for their helpful comments.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yin-etal-2014-parse","url":"https:\/\/aclanthology.org\/W14-0125","title":"Parse Ranking with Semantic Dependencies and WordNet","abstract":"In this paper, we investigate which features are useful for ranking semantic representations of text. We show that two methods of generalization improved results: extended grand-parenting and supertypes. The models are tested on a subset of SemCor that has been annotated with both Dependency Minimal Recursion Semantic representations and WordNet senses.
Using both types of features gives a significant improvement in whole sentence parse selection accuracy over the baseline model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to Mathieu Morey and other members of the Deep Linguistic Processing with HPSG Initiative along with other members of their research groups for many extremely helpful discussions.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rognvaldsson-etal-2012-icelandic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/440_Paper.pdf","title":"The Icelandic Parsed Historical Corpus (IcePaHC)","abstract":"We describe the background for and building of IcePaHC, a one million word parsed historical corpus of Icelandic which has just been finished. This corpus, which is completely free and open, contains fragments of 60 texts ranging from the late 12th century to the present. We describe the text selection and text collecting process and discuss the quality of the texts and their conversion to modern Icelandic spelling. We explain why we chose to use a Penn-style phrase structure annotation scheme and briefly describe the syntactic annotation process. We also describe a spin-off project which is only in its beginning stages: a parsed historical corpus of Faroese. Finally, we advocate the importance of an open source policy as regards language resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks are due to several colleagues who generously gave us access to unpublished texts that they are editing. Thanks are also due to authors of copyrighted material who allowed us to use and distribute their texts. Thanks to Hrafn Loftsson who wrote most of the IceNLP software, to Brynhildur Stef\u00e1nsd\u00f3ttir and Hulda \u00d3lad\u00f3ttir who assisted in parsing the texts, and to several students who keyed in a number of texts. Thanks to Victoria Ros\u00e9n, Koenraad de Smedt and Paul Meurer at the University of Bergen for making IcePaHC a part of the INESS repository. Thanks to anonymous reviewers for useful comments. Much of this material has previously been published in , and IcePaHC has been presented at various occasions, such as the RILiVS workshop in Oslo in September 2009 , talks at the University of Pennsylvania in Philadelphia, the University of Massachusetts at Amherst and New York University in May 2010, the annual conferences of the Institute of Humanities at the University of Iceland in Reykjav\u00edk in March 2011 and 2012, the MENOTA general assembly in Reykjav\u00edk in August 2011, the ACRH workshop in Heidelberg in January 2012, etc. We thank the audiences at these occasions for valuable discussion and comments. Last but not least, we would like to thank our collaborators at the University of Pennsylvania, especially Tony Kroch and Beatrice Santorini, for their invaluable contributions to this work.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"guerini-etal-2013-sentiment","url":"https:\/\/aclanthology.org\/D13-1125","title":"Sentiment Analysis: How to Derive Prior Polarities from SentiWordNet","abstract":"Assigning a positive or negative score to a word out of context (i.e.
a word's prior polarity) is a challenging task for sentiment analysis. In the literature, various approaches based on SentiWordNet have been proposed. In this paper, we compare the most often used techniques together with newly proposed ones and incorporate all of them in a learning framework to see whether blending them can further improve the estimation of prior polarity scores. Using two different versions of SentiWordNet and testing regression and classification models across tasks and datasets, our learning approach consistently outperforms the single metrics, providing a new state-of-the-art approach in computing words' prior polarity for sentiment analysis. We conclude our investigation showing interesting biases in calculated prior polarity scores when word Part of Speech and annotator gender are considered.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Jos\u00e9 Camargo De Souza for his help with feature selection. This work has been partially supported by the Trento RISE PerTe project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"honnibal-johnson-2014-joint","url":"https:\/\/aclanthology.org\/Q14-1011","title":"Joint Incremental Disfluency Detection and Dependency Parsing","abstract":"We present an incremental dependency parsing model that jointly performs disfluency detection. The model handles speech repairs using a novel non-monotonic transition system, and includes several novel classes of features. For comparison, we evaluated two pipeline systems, using state-of-the-art disfluency detectors. The joint model performed better on both tasks, with a parse accuracy of 90.5% and 84.0% accuracy at disfluency detection. The model runs in expected linear time, and processes over 550 tokens a second.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their valuable comments. This research was supported under the Australian Research Council's Discovery Projects funding scheme (project numbers DP110102506 and DP110102593).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2018-tmu","url":"https:\/\/aclanthology.org\/Y18-3008","title":"TMU Japanese-Chinese Unsupervised NMT System for WAT 2018 Translation Task","abstract":"This paper describes the unsupervised neural machine translation system of Tokyo Metropolitan University for the WAT 2018 translation task, focusing on Chinese-Japanese translation. Neural machine translation (NMT) has recently achieved impressive performance on some language pairs, although the lack of large parallel corpora poses a major practical problem for its training. In this work, only monolingual data are used to train the NMT system through an unsupervised approach. This system creates synthetic parallel data through back-translation and leverages language models trained on both source and target domains. To enhance the shared information in the bilingual word embeddings further, a decomposed ideograph and stroke dataset for ASPEC Chinese-Japanese Language pairs was also created. BLEU scores of 32.99 for ZH-JA and 26.39 for JA-ZH translation were recorded, respectively (both using stroke data).
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by JSPS Grant-in-Aid for Young Scientists ","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loatman-1991-prc","url":"https:\/\/aclanthology.org\/M91-1029","title":"PRC Inc: Description of the PAKTUS System Used for MUC-3","abstract":"The PRC Adaptive Knowledge-based Text Understanding System (PAKTUS) has been under development as a n Independent Research and Development project at PRC since 1984. The objective is a generic system of tools , including a core English lexicon, grammar, and concept representations, for building natural language processin g (NLP) systems for text understanding. Systems built with PAKTUS are intended to generate input to knowledg e based systems or data base systems. Input to the NLP system is typically derived from an existing electroni c message stream, such as a news wire. PAKTUS supports the adaptation of the generic core to a variety of domains : JINTACCS messages, RAINFORM messages, news reports about a specific type of event, such as financia l transfers or terrorist acts, etc., by acquiring sublanguage and domain-specific grammar, words, conceptual mappings , and discourse patterns. The long-term goal is a system that can support the processing of relatively long discourse s in domains that are fairly broad with a high rate of success. APPROACH PAKTUS may be viewed from two perspectives. In one view it is seen as a generic environment for buildin g NLP systems, incorporating modules for lexical acquisition, grammar building, and conceptual templat e specification. The other perspective focuses on the grammar, lexicon, concept templates, and parser alread y embedded within it, and views it as an NLP system itself. The early emphasis in developing PAKTUS was on thos e components supporting the former view. The grammar and lexicon that form the common core of English, as wel l as the stock of generic conceptual templates, entered PAKTUS primarily as a side effect of the testing of extension s to the NLP system development environment. More recent work has focused on extending the linguistic knowledg e within the overall architecture, such as prepositional phrase attachment, compound nominals, temporal analysis, an d metaphorical usage, and on adapting the core to particular domains, such as RAINFORM messages or news reports. The first step in this project was an evaluation of existing techniques for NLP, as of 1984. This evaluatio n included implementing rapid prototypes using techniques as in [1], [2], [3], and [4]. Judging that no one technique was adequate for a full treatment of the NLP problem, we adopted a hybrid approach, breaking the text understandin g process into specialized modules for text stream preprocessing, lexical analysis, including morphology, syntacti c analysis of clauses, conceptual analysis, domain-specific pattern matching based on an entire discourse (e .g., a new s report), and final output-record generation. Knowledge about word morphology was drawn from [5] and is represented as a semantic network, as is lexica l and semantic knowledge in general. The grammar specification has been based on our analysis of message text, an d draws from [5], [6], and [7]. 
It was first implemented as an augmented transition network (ATN), using a linguisti c notation similar to that in [4]. This implementation relies on an interactive graphic interface to specify and debug grammar rules. More recent investigations focus on alternative formalisms .","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zeman-2021-date","url":"https:\/\/aclanthology.org\/2021.udw-1.15","title":"Date and Time in Universal Dependencies","abstract":"We attempt to shed some light on the various ways how languages specify date and time, and on the options we have when trying to annotate them uniformly across Universal Dependencies. Examples from several language families are discussed, and their annotation is proposed. Our hope is to eventually make this (or similar) proposal an integral part of the UD annotation guidelines, which would help improve consistency of the UD treebanks. The current annotations are far from consistent, as can be seen from the survey we provide in appendices to this paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the grants 20-16819X (LUSyD) of the Czech Science Foundation\u037e and LM2018101 (LINDAT\/CLARIAH-CZ) of the Ministry of Education, Youth, and Sports of the Czech Republic.We thank the anonymous reviewers for very useful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kawano-etal-2019-neural","url":"https:\/\/aclanthology.org\/W19-8627","title":"Neural Conversation Model Controllable by Given Dialogue Act Based on Adversarial Learning and Label-aware Objective","abstract":"Building a controllable neural conversation model (NCM) is an important task. In this paper, we focus on controlling the responses of NCMs by using dialogue act labels of responses as conditions. We introduce an adversarial learning framework for the task of generating conditional responses with a new objective to a discriminator, which explicitly distinguishes sentences by using labels. This change strongly encourages the generation of label-conditioned sentences. We compared the proposed method with some existing methods for generating conditional responses. The experimental results show that our proposed method has higher controllability for dialogue acts even though it has higher or comparable naturalness to existing methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research and development work was supported by the JST PRESTO (JPMJPR165B) and JST CREST (JPMJCR1513).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sumita-etal-1993-example","url":"https:\/\/aclanthology.org\/1993.tmi-1.7","title":"An Example-Based Disambiguation of Prepositional Phrase Attachment","abstract":"Spoken language translation is a challenging new application that differs from written language translation in several ways, for instance, 1) human intervention (pre-edit or post-edit) should be avoided; 2) a real-time response is desirable for success. 
Example-based approaches meet these requirements, that is, they realize accurate structural disambiguation and target word selection, and respond quickly. This paper concentrates on structural disambiguation, particularly English prepositional phrase attachment (pp-attachment). Usually, a pp-attachment is hard to determine by syntactic analysis alone and many candidates remain. In machine translation, if a pp-attachment is not likely, the translation of the preposition, indeed, the whole translation, is not likely. In order to select the most likely attachment from many candidates, various methods have been proposed. This paper proposes a new method, Example-Based Disambiguation (EBD) of pp-attachment, which 1) collects examples (prepositional phrase-attachment pairs) from a corpus; 2) computes the semantic distance between an input expression and examples; 3) selects the most likely attachment based on the minimum-distance examples. Through experiments contrasting EBD and conventional methods, the authors show the EBD's superiority from the standpoint of success rates.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-etal-2020-coda","url":"https:\/\/aclanthology.org\/2020.nlpcovid19-acl.6","title":"CODA-19: Using a Non-Expert Crowd to Annotate Research Aspects on 10,000+ Abstracts in the COVID-19 Open Research Dataset","abstract":"This paper introduces CODA-19, a human-annotated dataset that codes the Background, Purpose, Method, Finding\/Contribution, and Other sections of 10,966 English abstracts in the COVID-19 Open Research Dataset. CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk within 10 days, and achieved labeling quality comparable to that of experts. Each abstract was annotated by nine different workers, and the final labels were acquired by majority vote. The inter-annotator agreement (Cohen's kappa) between the crowd and the biomedical expert (0.741) is comparable to inter-expert agreement (0.788). CODA-19's labels have an accuracy of 82.2% when compared to the biomedical expert's labels, while the accuracy between experts was 85.0%. Reliable human annotations help scientists access and integrate the rapidly accelerating coronavirus literature, and also serve as the battery of AI\/NLP research, but obtaining expert annotations can be slow. We demonstrated that a non-expert crowd can be rapidly employed at scale to join the fight against COVID-19.","label_nlp4sg":1,"task":["Data collection"],"method":["Annotation"],"goal1":"Good Health and Well-Being","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":"This project was supported by the Huck Institutes of the Life Sciences' Coronavirus Research Seed Fund (CRSF) and the College of IST COVID-19 Seed Fund at Penn State University. We thank the crowd workers for participating in this project and providing useful feedback. We thank VoiceBunny Inc. for granting a 60% discount for the voiceover for the worker tutorial video in support of projects relevant to COVID-19.
We also thank Tiffany Knearem, Shih-Hong (Alan) Huang, Joseph Chee Chang, and Frank Ritter for great discussion and useful feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huber-etal-2020-supervised","url":"https:\/\/aclanthology.org\/2020.lifelongnlp-1.2","title":"Supervised Adaptation of Sequence-to-Sequence Speech Recognition Systems using Batch-Weighting","abstract":"When training speech recognition systems, one often faces the situation that sufficient amounts of training data for the language in question are available but only small amounts of data for the domain in question. This problem is even bigger for end-to-end speech recognition systems that only accept transcribed speech as training data, which is harder and more expensive to obtain than text data. In this paper we present experiments in adapting end-to-end speech recognition systems by a method which is called batch-weighting and which we contrast against regular fine-tuning, i.e., continuing to train existing neural speech recognition models on adaptation data. We perform experiments using these techniques in adapting to topic, accent and vocabulary, showing that batch-weighting consistently outperforms fine-tuning. In order to show the generalization capabilities of batch-weighting we perform experiments in several languages, i.e., Arabic, English and German. Due to its relatively small computational requirements batch-weighting is a suitable technique for supervised lifelong learning during the lifetime of a speech recognition system, e.g., from user corrections.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"modi-titov-2014-inducing","url":"https:\/\/aclanthology.org\/W14-1606","title":"Inducing Neural Models of Script Knowledge","abstract":"Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., Chambers and Jurafsky (2008); Regneri et al. (2010)). Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated. We show that this approach results in a substantial boost in performance on the event ordering task with respect to the previous approaches, both on natural and crowdsourced texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Lea Frermann, Michaela Regneri and Manfred Pinkal for suggestions and help with the data.
This work is partially supported by the MMCI Cluster of Excellence at the Saarland University.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"di-etal-2019-modelling","url":"https:\/\/aclanthology.org\/U19-1005","title":"Modelling Tibetan Verbal Morphology","abstract":"The Tibetan language, despite being spoken by 8 million people, is a low-resource language in NLP terms, and research to develop NLP tools and resources for the language has only just begun. In this paper, we focus on Tibetan verbal morphology-which is known to be quite irregular-and introduce a novel dataset for Tibetan verbal paradigms, comprising 1,433 lemmas with corresponding inflected forms. This enables the largest-scale NLP investigation to date on Tibetan morphological reinflection, wherein we compare the performance of several state-of-the-art models for morphological reinflection, and conduct an extensive error analysis. We show that 84% of errors are due to the irregularity of the Tibetan language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2007-automatic","url":"https:\/\/aclanthology.org\/O07-6005","title":"Automatic Pronunciation Assessment for Mandarin Chinese: Approaches and System Overview","abstract":"This paper presents the algorithms used in a prototypical software system for automatic pronunciation assessment of Mandarin Chinese. The system uses forced alignment of HMM (Hidden Markov Models) to identify each syllable and the corresponding log probability for phoneme assessment, through a ranking-based confidence measure. The pitch vector of each syllable is then sent to a GMM (Gaussian Mixture Model) for tone recognition and assessment. We also compute the similarity of scores for intensity and rhythm between the target and test utterances. All four scores for phoneme, tone, intensity, and rhythm are parametric functions with certain free parameters. The overall scoring function was then formulated as a linear combination of these four scoring functions of phoneme, tone, intensity, and rhythm. Since there are both linear and nonlinear parameters involved in the overall scoring function, we employ the downhill Simplex search to fine-tune these parameters in order to approximate the scoring results obtained from a human expert. The experimental results demonstrate that the system can give consistent scores that are close to those of a human's subjective evaluation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wities-etal-2017-consolidated","url":"https:\/\/aclanthology.org\/W17-0902","title":"A Consolidated Open Knowledge Representation for Multiple Texts","abstract":"We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.
We do so by consolidating OIE extractions using entity and predicate coreference, while modeling information containment between coreferring elements via lexical entailment. We suggest that generating OKR structures can be a useful step in the NLP pipeline, to give semantic applications an easy handle on consolidated information across multiple texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS) and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1), and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"geng-etal-2020-selective","url":"https:\/\/aclanthology.org\/2020.acl-main.269","title":"How Does Selective Mechanism Improve Self-Attention Networks?","abstract":"Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words. However, the underlying reasons for their strong performance have not been well explained. In this paper, we bridge the gap by assessing the strengths of selective SANs (SSANs), which are implemented with a flexible and universal Gumbel-Softmax. Experimental results on several representative NLP tasks, including natural language inference, semantic role labelling, and machine translation, show that SSANs consistently outperform the standard SANs. Through well-designed probing experiments, we empirically validate that the improvement of SSANs can be attributed in part to mitigating two commonly-cited weaknesses of SANs: word order encoding and structure modeling. Specifically, the selective mechanism improves SANs by paying more attention to content words that contribute to the meaning of the sentence. The code and data are released at https:\/\/github.com\/xwgeng\/SSAN.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments. We also thank Xiaocheng Feng, Heng Gong, Zhangyin Feng, and Xiachong Feng for helpful discussion. This work was supported by the National Key R&D Program of China via grant 2018YFB1005103 and National Natural Science Foundation of China (NSFC) via grant 61632011 and 61772156.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cohen-etal-2021-repgraph","url":"https:\/\/aclanthology.org\/2021.emnlp-demo.10","title":"RepGraph: Visualising and Analysing Meaning Representation Graphs","abstract":"We present RepGraph, an open source visualisation and analysis tool for meaning representation graphs. Graph-based meaning representations provide rich semantic annotations, but visualising them clearly is more challenging than for fully lexicalized representations. Our application provides a seamless, unifying interface with which to visualise, manipulate and analyse semantically parsed graph data represented in a JSON-based serialisation format.
RepGraph visualises graphs in multiple formats, with an emphasis on showing the relation between nodes and their corresponding token spans, whilst keeping the representation compact. Additionally, the web-based tool provides NLP researchers with a clear, visually intuitive way of interacting with these graphs, and includes a number of graph analysis features. The tool currently supports the DMRS, EDS, PTG, UCCA, and AMR semantic frameworks. A live demo is available at https:\/\/repgraph.vercel.app\/.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gillick-etal-1992-rapid","url":"https:\/\/aclanthology.org\/H92-1066","title":"Rapid Match Training for Large Vocabularies","abstract":"This paper describes a new algorithm for building rapid match models for use in Dragon's continuous speech recognizer. Rather than working from a single representative token for each word, the new procedure works directly from a set of trained hidden Markov models. By simulated traversals of the HMMs, we generate a collection of sample tokens for each word which are then averaged together to build new rapid match models. This method enables us to construct models which better reflect the true variation in word occurrences and which no longer require the extensive adaptation needed in our original method. In this preliminary report, we outline this new procedure for building rapid match models and report results from initial testing on the Wall Street Journal recognition task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"denecke-2002-signatures","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/343.pdf","title":"Signatures, Typed Feature Structures and RDFS","abstract":"In this paper, we examine how attribute logic signatures and typed feature structures can be serialized using emerging semantic web standards RDF and RDFS. Inversely, we also consider to which degree the logic of typed feature structure is capable of representing and drawing inferences over RDF and RDFS documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-cranenburgh-ketzan-2021-stylometric","url":"https:\/\/aclanthology.org\/2021.latechclfl-1.21","title":"Stylometric Literariness Classification: the Case of Stephen King","abstract":"This paper applies stylometry to quantify the literariness of 73 novels and novellas by American author Stephen King, chosen as an extraordinary case of a writer who has been dubbed both \"high\" and \"low\" in literariness in critical reception. We operationalize literariness using a measure of stylistic distance (Cosine Delta) based on the 1000 most frequent words in two bespoke comparison corpora used as proxies for literariness: one of popular genre fiction, another of National Book Award-winning authors. 
We report that a supervised model is highly effective in distinguishing the two categories, with 94.6% macro average in a binary classification. We define two subsets of texts by King-\"high\" and \"low\" literariness works as suggested by critics and ourselves-and find that a predictive model does identify King's Dark Tower series and novels such as Dolores Claiborne as among his most \"literary\" texts, consistent with critical reception, which has also ascribed postmodern qualities to the Dark Tower novels. Our results demonstrate the efficacy of Cosine Delta-based stylometry in quantifying the literariness of texts, while also highlighting the methodological challenges of literariness, especially in the case of Stephen King.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Corina Koolen, Karina van Dalen-Oskam, and three anonymous reviewers for their feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mausam-etal-2012-open","url":"https:\/\/aclanthology.org\/D12-1048","title":"Open Language Learning for Information Extraction","abstract":"Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, state-of-the-art Open IE systems such as REVERB and WOE share two important weaknesses-(1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE parse. 1. \"After winning the Superbowl, the Saints are now the top dogs of the NFL.\" O: (the Saints; win; the Superbowl) 2. \"There are plenty of taxis available at Bali airport.\" O: (taxis; be available at; Bali airport) 3. \"Microsoft co-founder Bill Gates spoke at ...\" O: (Bill Gates; be co-founder of; Microsoft) 4. \"Early astronomers believed that the earth is the center of the universe.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by NSF grant IIS-0803481, ONR grant N00014-08-1-0431, DARPA contract FA8750-09-C-0179 and the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory (AFRL) contract number FA8650-10-C-7058. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government. This research is carried out at the University of Washington's Turing Center. We thank Fei Wu and Dan Weld for providing WOE's code and Anthony Fader for releasing REVERB's code.
Peter Clark, Alan Ritter, and Luke Zettlemoyer provided valuable feedback on the research and Dipanjan Das helped us with state-of-the-art SRL systems. We also thank the anonymous reviewers for their comments on an earlier draft.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shinzato-etal-2008-tsubaki","url":"https:\/\/aclanthology.org\/I08-1025","title":"TSUBAKI: An Open Search Engine Infrastructure for Developing New Information Access Methodology","abstract":"As the amount of information created by human beings has grown explosively in the last decade, it is getting extremely hard to obtain necessary information by conventional information access methods. Hence, creation of drastically new technology is needed. For developing such new technology, search engine infrastructures are required. Although the existing search engine APIs can be regarded as such infrastructures, these APIs have several restrictions such as a limit on the number of API calls. To help the development of new technology, we are running an open search engine infrastructure, TSUBAKI, on a high-performance computing environment. In this paper, we describe the TSUBAKI infrastructure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chiril-etal-2020-annotated","url":"https:\/\/aclanthology.org\/2020.lrec-1.175","title":"An Annotated Corpus for Sexism Detection in French Tweets","abstract":"Social media networks have become a space where users are free to relate their opinions and sentiments which may lead to a large spreading of hatred or abusive messages which have to be moderated. This paper presents the first French corpus annotated for sexism detection composed of about 12,000 tweets. In a context of offensive content mediation on social media now regulated by European laws, we think that it is important to be able to detect automatically not only sexist content but also to identify if a message with sexist content is really sexist (i.e. addressed to a woman or describing a woman or women in general) or is a story of sexism experienced by a woman. This point is the novelty of our annotation scheme. We also propose some preliminary results for sexism detection obtained with a deep learning approach. Our experiments show encouraging results.","label_nlp4sg":1,"task":["Sexism Detection"],"method":["Annotated Corpus","deep learning"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"This work is funded by the Institut Carnot Cognition under the project SESAME.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dandapat-groves-2014-mtwatch","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/272_Paper.pdf","title":"MTWatch: A Tool for the Analysis of Noisy Parallel Data","abstract":"State-of-the-art statistical machine translation (SMT) technique requires good quality parallel data to build a translation model. The availability of large parallel corpora has rapidly increased over the past decade. However, often these newly developed parallel data contain significant noise.
In this paper, we describe our approach for classifying good quality parallel sentence pairs from noisy parallel data. We use 10 different features within a Support Vector Machine (SVM)-based model for our classification task. We report a reasonably good classification accuracy and its positive effect on overall MT accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-ong-2021-pragmatically","url":"https:\/\/aclanthology.org\/2021.scil-1.53","title":"Pragmatically Informative Color Generation by Grounding Contextual Modifiers","abstract":"Grounding language in contextual information is crucial for fine-grained natural language understanding. One important task that involves grounding contextual modifiers is color generation. Given a reference color \"green\", and a modifier \"bluey\", how does one generate a color that could represent \"bluey green\"? We propose a computational pragmatics model that formulates this color generation task as a recursive game between speakers and listeners. In our model, a pragmatic speaker reasons about the inferences that a listener would make, and thus generates a modified color that is maximally informative to help the listener recover the original referents. In this paper, we show that incorporating pragmatic information provides significant improvements in performance compared with other state-of-the-art deep learning models where pragmatic inference and flexibility in representing colors from a large continuous space are lacking.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jun-etal-2022-anna","url":"https:\/\/aclanthology.org\/2022.repl4nlp-1.13","title":"ANNA: Enhanced Language Representation for Question Answering","abstract":"Pre-trained language models have brought significant improvements in performance in a variety of natural language processing tasks. Most existing models achieving state-of-the-art results have shown their approaches in the separate perspectives of data processing, pretraining tasks, neural network modeling, or fine-tuning. In this paper, we demonstrate how the approaches affect performance individually, and that the language model achieves the best results on a specific question answering task when those approaches are jointly considered in pre-training models. In particular, we propose an extended pre-training task, and a new neighbor-aware mechanism that attends neighboring tokens more to capture the richness of context for pre-training language modeling.
Our best model achieves new state-of-the-art results of 95.7% F1 and 90.6% EM on SQuAD 1.1 and also outperforms existing pre-trained language models such as RoBERTa, ALBERT, ELECTRA, and XLNet on the SQuAD 2.0 benchmark.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bies-2015-balancing","url":"https:\/\/aclanthology.org\/W15-1615","title":"Balancing the Existing and the New in the Context of Annotating Non-Canonical Language","abstract":"The importance of balancing linguistic considerations, annotation practicalities, and end user needs in developing language annotation guidelines is discussed. Maintaining a clear view of the various goals and fostering collaboration and feedback across levels of annotation and between corpus creators and corpus users is helpful in determining this balance. Annotating non-canonical language brings additional challenges that serve to highlight the necessity of keeping these goals in mind when creating corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon research supported by the Defense Advanced Research Projects Agency (DARPA) Contract No. HR0011-11-C-0145 and Air Force Research Laboratory agreement number FA8750-13-2-0045. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory and Defense Advanced Research Projects Agency or the U.S. Government. Portions of this work were supported by a gift from Google, Inc.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jin-etal-2018-unsupervised","url":"https:\/\/aclanthology.org\/Q18-1016","title":"Unsupervised Grammar Induction with Depth-bounded PCFG","abstract":"There has been recent interest in applying cognitively- or empirically-motivated bounds on recursion depth to limit the search space of grammar induction models (Ponvert et al., 2011; Noji and Johnson, 2016; Shain et al., 2016). This work extends this depth-bounding approach to probabilistic context-free grammar induction (DB-PCFG), which has a smaller parameter space than hierarchical sequence models, and therefore more fully exploits the space reductions of depth-bounding. Results for this model on grammar acquisition from transcribed child-directed speech and newswire text exceed or are competitive with those of other models when evaluated on parse accuracy. Moreover, grammars acquired from this model demonstrate a consistent use of category labels, something which has not been demonstrated by other acquisition models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Cory Shain and William Bryce for their valuable input. We would also like to thank the Action Editor Xavier Carreras and the anonymous reviewers for insightful comments. Computations for this project were partly run on the Ohio Supercomputer Center (1987).
This research was funded by the Defense Advanced Research Projects Agency award HR0011-15-2-0022. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bafna-sharma-2019-towards","url":"https:\/\/aclanthology.org\/2019.icon-1.18","title":"Towards Handling Verb Phrase Ellipsis in English-Hindi Machine Translation","abstract":"English-Hindi machine translation systems have difficulty interpreting verb phrase ellipsis (VPE) in English, and commit errors in translating sentences with VPE. We present a solution and theoretical backing for the treatment of English VPE, with the specific scope of enabling English-Hindi MT, based on an understanding of the syntactical phenomenon of verb-stranding verb phrase ellipsis in Hindi (VVPE). We implement a rule-based system to perform the following sub-tasks: 1) Verb ellipsis identification in the English source sentence, 2) Elided verb phrase head identification 3) Identification of verb segment which needs to be induced at the site of ellipsis 4) Modify input sentence; i.e. resolving VPE and inducing the required verb segment. This system is tested in two parts. It obtains 94.83 percent precision and 83.04 percent recall on subtask (1), tested on 3900 sentences from the BNC corpus (Leech, 1992). This is competitive with state-of-the-art results. We measure accuracy of subtasks (2) and (3) together, and obtain a 91 percent accuracy on 200 sentences taken from the WSJ corpus (Paul and Baker, 1992). Finally, in order to indicate the relevance of ellipsis handling to MT, we carried out a manual analysis of the MT outputs of 100 sentences after passing them through our system. We set up a basic metric (1-5) for this evaluation, where 5 indicates drastic improvement, and obtained an average of 3.55.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alexandersson-etal-1995-robust","url":"https:\/\/aclanthology.org\/E95-1026","title":"A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System","abstract":"We present the dialogue component of the speech-to-speech translation system VERBMOBIL. In contrast to conventional dialogue systems it mediates the dialogue while processing maximally 50% of the dialogue in depth. Special requirements (robustness and efficiency) lead to a 3-layered hybrid architecture for the dialogue module, using statistics, an automaton and a planner. A dialogue memory is constructed incrementally.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"navigli-2015-multilinguality","url":"https:\/\/aclanthology.org\/2015.jeptalnrecital-invite.1","title":"Multilinguality at Your Fingertips : BabelNet, Babelfy and Beyond !","abstract":"Multilinguality at Your Fingertips : BabelNet, Babelfy and Beyond !
Multilinguality is a key feature of today's Web, and it is this feature that we leverage and exploit in our research work at the Sapienza University of Rome's Linguistic Computing Laboratory, which I am going to overview and showcase in this talk. I will start by presenting BabelNet 3.0, available at http:\/\/babelnet.org, a very large multilingual encyclopedic dictionary and semantic network, which covers 271 languages and provides both lexicographic and encyclopedic knowledge for all the open-class parts of speech, thanks to the seamless integration of WordNet, Wikipedia, Wiktionary, OmegaWiki, Wikidata and the Open Multilingual WordNet. Next, I will present Babelfy, available at http:\/\/babelfy.org, a unified approach that leverages BabelNet to jointly perform word sense disambiguation and entity linking in arbitrary languages, with performance on both tasks on a par with, or surpassing, those of task-specific state-of-the-art supervised systems. Finally I will describe the Wikipedia Bitaxonomy, available at http:\/\/wibitaxonomy.org, a new approach to the construction of a Wikipedia bitaxonomy, that is, the largest and most accurate currently available taxonomy of Wikipedia pages and taxonomy of categories, aligned to each other. I will also give an outline of future work on multilingual resources and processing, including state-of-the-art semantic similarity with sense embeddings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bai-zhou-2020-deepyang","url":"https:\/\/aclanthology.org\/2020.semeval-1.63","title":"DEEPYANG at SemEval-2020 Task 4: Using the Hidden Layer State of BERT Model for Differentiating Common Sense","abstract":"Introducing common sense to natural language understanding systems has received increasing research attention. To facilitate research on common sense reasoning, the SemEval-2020 Task 4 Commonsense Validation and Explanation (ComVE) is proposed. We participate in sub-task A and try various methods including traditional machine learning methods, deep learning methods, and also recent pre-trained language models. Finally, we concatenate the original output of BERT and the output vector of BERT hidden layer state to obtain more abundant semantic information features, and obtain competitive results. Our model achieves an accuracy of 0.8510 in the final test data and ranks 25th among all the teams.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Natural Science Foundations of China under Grants 61463050, the NSF of Yunnan Province under Grant 2015FB113.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"radford-curran-2013-joint","url":"https:\/\/aclanthology.org\/P13-2118","title":"Joint Apposition Extraction with Syntactic and Semantic Constraints","abstract":"Appositions are adjacent NPs used to add information to a discourse. We propose systems exploiting syntactic and semantic constraints to extract appositions from OntoNotes.
Our joint log-linear model outperforms the state-of-the-art Favre and Hakkani-T\u00fcr (2009) model by \u223c10% on Broadcast News, and achieves 54.3% F-score on multiple genres.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their suggestions. Thanks must also go to Benoit Favre for his clear writing and help answering our questions as we replicated his dataset and system. This work has been supported by ARC Discovery grant DP1097291 and the Capital Markets CRC Computable News project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stallard-2001-evaluation","url":"https:\/\/aclanthology.org\/H01-1023","title":"Evaluation Results for the Talk'n'Travel System","abstract":"We describe and present evaluation results for Talk'n'Travel, a spoken dialogue language system for making air travel plans over the telephone. Talk'n'Travel is a fully conversational, mixed-initiative system that allows the user to specify the constraints on his travel plan in arbitrary order, ask questions, etc., in general spoken English. The system was independently evaluated as part of the DARPA Communicator program and achieved a high success rate.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was sponsored by DARPA and monitored by SPAWAR Systems Center under Contract No. N66001-99-D-8615.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tsai-etal-2019-multimodal","url":"https:\/\/aclanthology.org\/P19-1656","title":"Multimodal Transformer for Unaligned Multimodal Language Sequences","abstract":"Human language is often multimodal, which comprehends a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges in modeling such multimodal human language time-series data exist: 1) inherent data non-alignment due to variable sampling rates for the sequences from each modality; and 2) long-range dependencies between elements across modalities. In this paper, we introduce the Multimodal Transformer (MulT) to generically address the above issues in an end-to-end manner without explicitly aligning the data. At the heart of our model is the directional pairwise crossmodal attention, which attends to interactions between multimodal sequences across distinct time steps and latently adapts streams from one modality to another. Comprehensive experiments on both aligned and non-aligned multimodal time-series show that our model outperforms state-of-the-art methods by a large margin. In addition, empirical analysis suggests that correlated crossmodal signals are able to be captured by the proposed crossmodal attention mechanism in MulT.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by DARPA HR00111990016, AFRL FA8750-18-C-0014, NSF IIS1763562 #1750439 #1722822, Apple, Google focused award, and Samsung.
We would also like to acknowledge NVIDIA's GPU support.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-1998-test-environment","url":"https:\/\/aclanthology.org\/P98-2126","title":"A Test Environment for Natural Language Understanding Systems","abstract":"The Natural Language Understanding Engine Test Environment (ETE) is a GUI software tool that aids in the development and maintenance of large, modular, natural language understanding (NLU) systems. Natural language understanding systems are composed of modules (such as part-of-speech taggers, parsers and semantic analyzers) which are difficult to test individually because of the complexity of their output data structures. Not only are the output data structures of the internal modules complex, but also many thousands of test items (messages or sentences) are required to provide a reasonable sample of the linguistic structures of a single human language, even if the language is restricted to a particular domain. The ETE assists in the management and analysis of the thousands of complex data structures created during natural language processing of a large corpus using relational database technology in a network environment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hatori-etal-2011-incremental","url":"https:\/\/aclanthology.org\/I11-1136","title":"Incremental Joint POS Tagging and Dependency Parsing in Chinese","abstract":"We address the problem of joint part-of-speech (POS) tagging and dependency parsing in Chinese. In Chinese, some POS tags are often hard to disambiguate without considering long-range syntactic information. Also, the traditional pipeline approach to POS tagging and dependency parsing may suffer from the problem of error propagation. In this paper, we propose the first incremental approach to the task of joint POS tagging and dependency parsing, which is built upon a shift-reduce parsing framework with dynamic programming. Although the incremental approach encounters difficulties with underspecified POS tags of look-ahead words, we overcome this issue by introducing so-called delayed features. Our joint approach achieved substantial improvements over the pipeline and baseline systems in both POS tagging and dependency parsing tasks, achieving the new state-of-the-art performance on this joint task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trost-1990-application","url":"https:\/\/aclanthology.org\/C90-2064","title":"The application of two-level morphology to non-concatenative German morphology","abstract":"In this paper we describe a hybrid system for morphological analysis and synthesis. We call it hybrid because it consists of two separate parts interacting with each other in a well-defined way. The treatment of morphonology and non-concatenative morphology is based on the two-level approach originally proposed by Koskenniemi (1983). For the concatenative part of morphosyntax (i.e.
affixation) we make use of a grammar based on feature-unification. Both parts rely on the same morph lexicon.\nCombinations of two-level morphology with feature-based morphosyntactic grammars have already been proposed by several authors (cf. Bear 1988a, Carson 1988, G\u00f6rz & Paulus 1988, Schiller & Steffens 1990) to overcome the shortcomings of the continuation-classes originally proposed by Koskenniemi (1983) and Karttunen (1983) for the description of morphosyntax. But up to now no linguistically satisfying solution has been proposed for the treatment of non-concatenative morphology in such a framework. In this paper we describe an extension to the model which will allow for the description of such phenomena. Namely we propose to restrict the applicability of two-level rules by providing them with filters in the form of feature structures. We demonstrate how a well-known problem of German morphology, so-called \"Umlautung\", can be described in our approach in a linguistically motivated and efficient way.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"walker-etal-2010-evaluating","url":"https:\/\/aclanthology.org\/D10-1024","title":"Evaluating Models of Latent Document Semantics in the Presence of OCR Errors","abstract":"Models of latent document semantics such as the mixture of multinomials model and Latent Dirichlet Allocation have received substantial attention for their ability to discover topical semantics in large collections of text. In an effort to apply such models to noisy optical character recognition (OCR) text output, we endeavor to understand the effect that character-level noise can have on unsupervised topic modeling. We show the effects both with document-level topic analysis (document clustering) and with word-level topic analysis (LDA) on both synthetic and real-world OCR data. As expected, experimental results show that performance declines as word error rates increase. Common techniques for alleviating these problems, such as filtering low-frequency words, are successful in enhancing model quality, but exhibit failure trends similar to models trained on unprocessed OCR output in the case of LDA. To our knowledge, this study is the first of its kind.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the Fulton Supercomputing Center at BYU for providing the computing resources required for experiments reported here.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"somasundaran-etal-2014-lexical","url":"https:\/\/aclanthology.org\/C14-1090","title":"Lexical Chaining for Measuring Discourse Coherence Quality in Test-taker Essays","abstract":"This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence.
Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics. This work is licensed under a Creative Commons Attribution 4.0 International Licence.","label_nlp4sg":1,"task":["Measuring Discourse Coherence Quality"],"method":["investigation","lexical chain features"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kuhlen-1984-intonational","url":"https:\/\/aclanthology.org\/P84-1115","title":"An Intonational Delphi Poll on Future Trends in ``Information Linguistics''","abstract":"The results of an international Delphi poll on information linguistics which was carried out between 1982 and 1983 are presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1984,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sato-etal-2018-addressee","url":"https:\/\/aclanthology.org\/C18-1308","title":"Addressee and Response Selection for Multilingual Conversation","abstract":"Developing conversational systems that can converse in many languages is an interesting challenge for natural language processing. In this paper, we introduce multilingual addressee and response selection. In this task, a conversational system predicts an appropriate addressee and response for an input message in multiple languages. A key to developing such multilingual responding systems is how to utilize high-resource language data to compensate for low-resource language data. We present several knowledge transfer methods for conversational systems. To evaluate our methods, we create a new multilingual conversation dataset. Experiments on the dataset demonstrate the effectiveness of our methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"itamar-itai-2008-using","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/76_paper.pdf","title":"Using Movie Subtitles for Creating a Large-Scale Bilingual Corpora","abstract":"This paper presents a method for compiling a large-scale bilingual corpus from a database of movie subtitles. To create the corpus, we propose an algorithm based on Gale and Church's sentence alignment algorithm (1993). However, our algorithm not only relies on character length information, but also uses subtitle-timing information, which is encoded in the subtitle files. Timing is highly correlated between subtitles in different versions (for the same movie), since subtitles that match should be displayed at the same time.
However, the absolute time values can't be used for alignment, since the timing is usually specified by frame numbers and not by real time, and converting it to real time values is not always possible; hence, we use normalized subtitle duration instead. This results in a significant reduction in the alignment error rate.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"boytcheva-etal-2012-automatic","url":"https:\/\/aclanthology.org\/E12-2016","title":"Automatic Analysis of Patient History Episodes in Bulgarian Hospital Discharge Letters","abstract":"This demo presents Information Extraction from discharge letters in the Bulgarian language. The Patient history section is automatically split into episodes (clauses between two temporal markers); then drugs, diagnoses and conditions are recognised within the episodes with accuracy higher than 90%. The temporal markers, which refer to absolute or relative moments of time, are identified with precision 87% and recall 68%. The direction of time for the episode starting point: backwards or forward (with respect to certain moment orienting the episode) is recognised with precision 74.4%.","label_nlp4sg":1,"task":["Information Extraction"],"method":["Automatic Analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work is supported by grant DO\/02-292 EV-TIMA funded by the Bulgarian National Science Fund in 2009-2012. The anonymised EHRs are delivered by the University Specialised Hospital of Endocrinology, Medical University -Sofia.","year":2012,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-etal-2016-well","url":"https:\/\/aclanthology.org\/P16-1084","title":"How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation","abstract":"Recently a few systems for automatically solving math word problems have reported promising results. However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the annotators for their efforts in annotating the math problems in our dataset.
Thanks to the anonymous reviewers for their helpful comments and suggestions.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ramesh-etal-2020-investigating","url":"https:\/\/aclanthology.org\/2020.loresmt-1.15","title":"Investigating Low-resource Machine Translation for English-to-Tamil","abstract":"Statistical machine translation (SMT), which was the dominant paradigm in machine translation (MT) research for nearly three decades, has recently been superseded by the end-to-end deep learning approaches to MT. Although deep neural models produce state-of-the-art results in many translation tasks, they are found to underperform on resource-poor scenarios. Despite some success, none of the present-day benchmarks that have tried to overcome this problem can be regarded as a universal solution to the problem of translation of many low-resource languages. In this work, we investigate the performance of phrase-based SMT (PBSMT) and neural MT (NMT) on a rarely-tested low-resource language pair, English-to-Tamil, taking a specialised data domain (software localisation) into consideration. In particular, we produce rankings of our MT systems via a social media platform-based human evaluation scheme, and demonstrate our findings in the low-resource domain-specific text translation task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The ADAPT Centre for Digital Content Technology is funded under the Science Foundation Ireland (SFI) Research Centres Programme (Grant No. 13\/RC\/2106) and is co-funded under the European Regional Development Fund. This project has partially received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No. 713567, and the publication has emanated from research supported in part by a research grant from SFI under Grant Number 13\/RC\/2077.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ro-lien-1981-wycliffes","url":"https:\/\/aclanthology.org\/W81-0116","title":"Wycliffes Bibeltekster p\\aa RA2 (Wycliffe's Bible texts in RA2) [In Norwegian]","abstract":"(2) In-depth studies of approximately 230 Bible manuscripts to provide, if possible, more precise information about the translation work attributed to the English reformer John Wycliffe and his followers.\nA complete edition of The Wycliffite Bible, based on approximately 170 manuscripts, was published in 1850 by J. Forshall and F. Madden, who had spent 20 years examining manuscripts located in Great Britain and Ireland.
Forshall and Madden found that the manuscripts could be divided into two groups; one group exhibited an almost word-for-word translation from the Latin original version, while the other group presented a translation in which idiom and syntax corresponded to English usage around 1400.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hurriyetoglu-etal-2021-challenges","url":"https:\/\/aclanthology.org\/2021.case-1.1","title":"Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021): Workshop and Shared Task Report","abstract":"This workshop is the fourth issue of a series of workshops on automatic extraction of socio-political events from news, organized by the Emerging Market Welfare Project, with the support of the Joint Research Centre of the European Commission and with contributions from many other prominent scholars in this field. The purpose of this series of workshops is to foster research and development of reliable, valid, robust, and practical solutions for automatically detecting descriptions of socio-political events, such as protests, riots, wars and armed conflicts, in text streams. This year's workshop contributors make use of the state-of-the-art NLP technologies, such as Deep Learning, Word Embeddings and Transformers, and cover a wide range of topics from text classification to news bias detection. Around 40 teams have registered and 15 teams contributed to three tasks that are i) multilingual protest news detection, ii) fine-grained classification of socio-political events, and iii) discovering Black Lives Matter protest events. The workshop also highlights two keynote and four invited talks about various aspects of creating event data sets and multi- and cross-lingual machine learning in few- and zero-shot settings.","label_nlp4sg":1,"task":["Automated Extraction of Socio-political Events"],"method":["Transformers","Word Embeddings"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors from Koc University were funded by the European Research Council (ERC) Starting Grant 714868 awarded to Dr. Erdem Y\u00f6r\u00fck for his project Emerging Welfare.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"srivastava-singh-2021-hinge","url":"https:\/\/aclanthology.org\/2021.eval4nlp-1.20","title":"HinGE: A Dataset for Generation and Evaluation of Code-Mixed Hinglish Text","abstract":"Text generation is a highly active area of research in the computational linguistic community. The evaluation of the generated text is a challenging task and multiple theories and metrics have been proposed over the years. Unfortunately, text generation and evaluation are relatively understudied due to the scarcity of high-quality resources in code-mixed languages where the words and phrases from multiple languages are mixed in a single utterance of text and speech. To address this challenge, we present a corpus (HinGE) for a widely popular code-mixed language Hinglish (code-mixing of Hindi and English languages). HinGE has Hinglish sentences generated by humans as well as two rule-based algorithms corresponding to the parallel Hindi-English sentences.
In addition, we demonstrate the inefficacy of widely-used evaluation metrics on the code-mixed data. The HinGE dataset will facilitate the progress of natural language generation research in code-mixed languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hong-etal-2009-bridging","url":"https:\/\/aclanthology.org\/P09-2059","title":"Bridging Morpho-Syntactic Gap between Source and Target Sentences for English-Korean Statistical Machine Translation","abstract":"Often, Statistical Machine Translation (SMT) between English and Korean suffers from null alignment. Previous studies have attempted to resolve this problem by removing unnecessary function words, or by reordering source sentences. However, the removal of function words can cause a serious loss in information. In this paper, we present a possible method of bridging the morpho-syntactic gap for English-Korean SMT. In particular, the proposed method tries to transform a source sentence by inserting pseudo words, and by reordering the sentence in such a way that both sentences have a similar length and word order. The proposed method achieves a 2.4 increase in BLEU score over the baseline phrase-based system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Microsoft Research Asia. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the sponsor.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kucera-stluka-2014-corpus","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/300_Paper.pdf","title":"Corpus of 19th-century Czech Texts: Problems and Solutions","abstract":"Although the Czech language of the 19th century represents the roots of modern Czech and many features of the 20th- and 21st-century language cannot be properly understood without this historical background, the 19th-century Czech has not been thoroughly and consistently researched so far. The long-term project of a corpus of 19th-century Czech printed texts, currently in its third year, is intended to stimulate the research as well as to provide a firm material basis for it. The reason why, in our opinion, the project is worth mentioning is that it is faced with an unusual concentration of problems following mostly from the fact that the 19th century was arguably the most tumultuous period in the history of Czech, as well as from the fact that Czech is a highly inflectional language with a long history of sound changes, orthography reforms and rather discontinuous development of its vocabulary.
The paper will briefly characterize the general background of the problems and present the reasoning behind the solutions that have been implemented in the ongoing project.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rimkute-etal-2007-morphological","url":"https:\/\/aclanthology.org\/W07-1713","title":"Morphological Annotation of the Lithuanian Corpus","abstract":"As the development of information technologies makes progress, large morphologically annotated corpora become a necessity, as they are needed for moving on to higher levels of language computerisation (e. g. automatic syntactic and semantic analysis, information extraction, machine translation). Research of morphological disambiguation and morphological annotation of the 100 million word Lithuanian corpus are presented in the article. Statistical methods have enabled the development of an automatic morphological annotation tool for Lithuanian, with a disambiguation precision of 94%. Statistical data about the distribution of parts of speech, most frequent wordforms, and lemmas in the annotated Corpus of The Contemporary Lithuanian Language is also presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is a part of the project \"Preservation of the Lithuanian Language under Conditions of Globalization: annotated corpus of the Lithuanian language (ALKA)\", which was financed by the Lithuanian State Science and Study Foundation.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bansal-etal-2022-r3","url":"https:\/\/aclanthology.org\/2022.dialdoc-1.17","title":"R3: Refined Retriever-Reader pipeline for Multidoc2dial","abstract":"In this paper, we present our submission to the DialDoc shared task based on the MultiDoc2Dial dataset. MultiDoc2Dial is a conversational question answering dataset that grounds dialogues in multiple documents. The task involves grounding a user's query in a document followed by generating an appropriate response. We propose several improvements over the baseline's retriever-reader architecture to aid in modeling goal-oriented dialogues grounded in multiple documents. Our proposed approach employs sparse representations for passage retrieval, a passage re-ranker, the fusion-in-decoder architecture for generation, and a curriculum learning training paradigm. Our approach shows a 12 point improvement in BLEU score compared to the baseline RAG model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nogaito-iida-1988-noun","url":"https:\/\/aclanthology.org\/1988.tmi-1.18","title":"Noun phrase identification in dialogue and its application","abstract":"Noun identifications are studied as 'anaphora'. Noun-noun relationships are ambiguous, as are noun-pronoun relations. Generally, nouns must match more 'antecedent' information than pronouns. But a noun's 'antecedent' can be more remote.
Therefore, the analysis scope of a noun-noun relationship will be larger than that of a noun-pronoun relationship. This expanded scope of analysis may result in errors. The nearest noun which satisfies the conditions is not always the 'antecedent'. A noun phrase identification model must be more comprehensive to deal with this problem. In general, noun-noun anaphora are difficult to translate into another language because of the difference in the meaning of words. A machine translation system for dialogue must be able to identify noun phrases. In this paper, a noun phrase identification model for understanding and translating dialogue through the use of domain knowledge and a plan recognition model is presented. The presented model determines an area of analysis for disambiguation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Dr. Akira Kurematsu, president of ATR Interpreting Telephony Laboratories, Dr. Teruaki Aizawa, head of Natural Language Understanding Department, and our other colleagues for their encouragement.","year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xie-etal-2019-shakespeare","url":"https:\/\/aclanthology.org\/U19-1002","title":"From Shakespeare to Li-Bai: Adapting a Sonnet Model to Chinese Poetry","abstract":"In this paper, we adapt Deep-speare, a joint neural network model for English sonnets, to Chinese poetry. We illustrate the characteristics of Chinese quatrain and explain our architecture as well as training and generation procedure, which differs from Shakespeare sonnets in several aspects. We analyse the generated poetry and find that the adapted model works well for Chinese poetry, as it can: (1) generate coherent 4-line quatrains of different topics; and (2) capture rhyme automatically to a certain extent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"perez-rosas-mihalcea-2014-cross","url":"https:\/\/aclanthology.org\/P14-2072","title":"Cross-cultural Deception Detection","abstract":"In this paper, we address the task of cross-cultural deception detection. Using crowdsourcing, we collect three deception datasets, two in English (one originating from the United States and one from India), and one in Spanish obtained from speakers from Mexico. We run comparative experiments to evaluate the accuracies of deception classifiers built for each culture, and also to analyze classification differences within and across cultures. Our results show that we can leverage cross-cultural information, either through translation or equivalent semantic categories, and build deception classifiers with a performance ranging between 60-70%.","label_nlp4sg":1,"task":["Deception Detection"],"method":["classifiers","crowdsourcing"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This material is based in part upon work supported by National Science Foundation awards #1344257 and #1355633 and by DARPA-BAA-12-47 DEFT grant #12475008.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Defense Advanced Research Projects Agency.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"zhang-sumita-2007-boosting","url":"https:\/\/aclanthology.org\/P07-2046","title":"Boosting Statistical Machine Translation by Lemmatization and Linear Interpolation","abstract":"Data sparseness is one of the factors that degrade statistical machine translation (SMT). Existing work has shown that using morphosyntactic information is an effective solution to data sparseness. However, fewer efforts have been made for Chinese-to-English SMT using English morpho-syntactic analysis. We found that while English is a language with less inflection, using English lemmas in training can significantly improve the quality of word alignment, which leads to better translation performance. We carried out comprehensive experiments on multiple training data sets of varied sizes to prove this. We also proposed a new effective linear interpolation method to integrate multiple homologous features of translation models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hana-etal-2011-low","url":"https:\/\/aclanthology.org\/W11-1502","title":"A low-budget tagger for Old Czech","abstract":"The paper describes a tagger for Old Czech (1200-1500 AD), a fusional language with rich morphology. The practical restrictions (no native speakers, limited corpora and lexicons, limited funding) make Old Czech an ideal candidate for a resource-light crosslingual method that we have been developing (e.g. Hana et al., 2004; Feldman and Hana, 2010). We use a traditional supervised tagger. However, instead of spending years of effort to create a large annotated corpus of Old Czech, we approximate it by a corpus of Modern Czech. We perform a series of simple transformations to make a modern text look more like a text in Old Czech and vice versa. We also use a resource-light morphological analyzer to provide candidate tags. The results are worse than the results of traditional taggers, but the amount of language-specific work needed is minimal.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was generously supported by the Grant Agency Czech Republic (project ID: P406\/10\/P328) and by the U.S. NSF grants #0916280, #1033275, and #1048406. We would like to thank Alena M.\u010cern\u00e1 and Boris Lehe\u010dka for annotating the testing corpus and for answering questions about Old Czech. We also thank the Institute of Czech Language of the Academy of Sciences of the Czech Republic for the plain text corpus of Old Czech. Finally, we thank the anonymous reviewers for their insightful comments.
All mistakes are ours.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2016-neural","url":"https:\/\/aclanthology.org\/C16-1027","title":"A Neural Attention Model for Disfluency Detection","abstract":"In this paper, we study the problem of disfluency detection using the encoder-decoder framework. We treat disfluency detection as a sequence-to-sequence problem and propose a neural attention-based model which can efficiently model the long-range dependencies between words and make the resulting sentence more likely to be grammatically correct. Our model first encodes the source sentence with a bidirectional Long Short-Term Memory (BI-LSTM) and then uses the neural attention as a pointer to select an ordered subsequence of the input as the output. Experiments show that our model achieves the state-of-the-art f-score of 86.7% on the commonly used English Switchboard test set. We also evaluate the performance of our model on the in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline CRF-based approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their valuable suggestions. This work was supported by the National Key Basic Research Program of China via grant 2014CB340503 and the National Natural Science Foundation of China (NSFC) via grants 61370164 and 61632011.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johnston-busa-1996-qualia","url":"https:\/\/aclanthology.org\/W96-0309","title":"Qualia Structure and the Compositional Interpretation of Compounds","abstract":"The analysis of nominal compound constructions has proven to be a recalcitrant problem for linguistic semantics and poses serious challenges for natural language processing systems. We argue for a compositional treatment of compound constructions which limits the need for listing of compounds in the lexicon. We argue that the development of a practical model of compound interpretation crucially depends on issues of lexicon design. The Generative Lexicon (Pustejovsky 1995) provides us with a model of the lexicon which couples sufficiently expressive lexical semantic representations with mechanisms which capture the relationship between those representations and their syntactic expression. In our approach, the qualia structures of the nouns in a compound provide relational structure enabling compositional interpretation of the modification of the head noun by the modifying noun. This brings compound interpretation under the same rubric as other forms of composition in natural language, including argument selection, adjectival modification, and type coercion (Pustejovsky (1991,1995), Bouillon 1995). We examine data from both English and Italian and develop analyses for both languages which use phrase structure schemata to account for the connections between lexical semantic representation and syntactic expression.
In addition to applications in natural language understanding, machine translation, and generation, the model of compound interpretation developed here can be applied to multilingual information extraction tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"harabagiu-maiorano-2000-acquisition","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/347.pdf","title":"Acquisition of Linguistic Patterns for Knowledge-based Information Extraction","abstract":"In this paper we present a new method of automatic acquisition of linguistic patterns for Information Extraction, as implemented in the CICERO system. Our approach combines lexico-semantic information available from the WordNet database with collocating data extracted from training corpora. Due to the open-domain nature of the WordNet information and the immediate availability of large collections of texts, our method can be easily ported to open-domain Information Extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-markert-2009-subjectivity","url":"https:\/\/aclanthology.org\/N09-1001","title":"Subjectivity Recognition on Word Senses via Semi-supervised Mincuts","abstract":"We supplement WordNet entries with information on the subjectivity of its word senses. Supervised classifiers that operate on word sense definitions in the same way that text classifiers operate on web or newspaper texts need large amounts of training data. The resulting data sparseness problem is aggravated by the fact that dictionary definitions are very short. We propose a semi-supervised minimum cut framework that makes use of both WordNet definitions and its relation structure. The experimental results show that it outperforms supervised minimum cut as well as standard supervised, non-graph classification, reducing the error rate by 40%. In addition, the semi-supervised approach achieves the same results as the supervised framework with less than 20% of the training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcinnes-etal-2011-using","url":"https:\/\/aclanthology.org\/W11-0317","title":"Using Second-order Vectors in a Knowledge-based Method for Acronym Disambiguation","abstract":"In this paper, we introduce a knowledge-based method to disambiguate biomedical acronyms using second-order co-occurrence vectors. We create these vectors using information about a long-form obtained from the Unified Medical Language System and Medline. We evaluate this method on a dataset of 18 acronyms found in biomedical text. Our method achieves an overall accuracy of 89%. 
The results show that using second-order features provides a distinct representation of the long-form and potentially enhances automated disambiguation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Institutes of Health, National Library of Medicine Grant #R01LM009623-01.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grishman-etal-1991-new-york","url":"https:\/\/aclanthology.org\/M91-1028","title":"New York University: Description of the PROTEUS System as Used for MUC-3","abstract":"The PROTEUS system which we have used for MUC-3 has three main components: a syntactic analyzer, a semantic analyzer, and a template generator. The PROTEUS Syntactic Analyzer was developed starting in the fall of 1984 as a common base for all the applications of the PROTEUS Project. Many aspects of its design reflect its heritage in the Linguistic String Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprises approximately 4500 lines of Common Lisp. The Semantic Analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application, extended for the MUCK-II (OPREPs) application, and further revised for the current evaluation. It currently consists of about 3000 lines of Common Lisp (excluding the domain-specific information). The Template Generator was written from scratch for the MUC-3 task; it is about 1200 lines of Common Lisp.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ramesh-kashyap-etal-2022-different","url":"https:\/\/aclanthology.org\/2022.acl-long.32","title":"So Different Yet So Alike! Constrained Unsupervised Text Style Transfer","abstract":"Automatic transfer of text between domains has become popular in recent times. One of its aims is to preserve the semantic content of text being translated from source to target domain. However, it does not explicitly maintain other attributes between the source and translated text, e.g., text length and descriptiveness. Maintaining constraints in transfer has several downstream applications, including data augmentation and de-biasing. We introduce a method for such constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models. Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss. The first is a contrastive loss and the second is a classification loss, aiming to regularize the latent space further and bring similar sentences across domains closer together. We demonstrate that such training retains lexical, syntactic, and domain-specific constraints between domains for multiple benchmark datasets, including ones where more than one attribute changes. We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their useful suggestions. We would also like to acknowledge the support of the NExT research grant funds, supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@ SG Funding Initiative, and to gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan XGPU used in this research. The work is also supported by the project no. T2MOE2008 titled CSK-NLP: Leveraging Commonsense Knowledge for NLP awarded by Singapore's Ministry of Education under its Tier-2 grant scheme.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blanchon-etal-2004-towards","url":"https:\/\/aclanthology.org\/2004.iwslt-evaluation.3","title":"Towards fairer evaluations of commercial MT systems on basic travel expressions corpora","abstract":"We compare the performance of several SYSTRAN systems on the BTEC corpus. Two language pairs: Chinese to English and Japanese to English are used. Whenever it is possible the system will be used \"off the shelf\" and then tuned. The first system we use is freely available on the web. The second system, SYSTRAN Premium, is commercial. It is used in two ways: (1) choosing and ordering available original dictionaries and setting parameters, (2) same + user dictionaries. As far as the evaluation is concerned, we competed in the unlimited data track.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kennedy-etal-2017-technology","url":"https:\/\/aclanthology.org\/W17-3011","title":"Technology Solutions to Combat Online Harassment","abstract":"This work is part of a new initiative to use machine learning to identify online harassment in social media and comment streams. Online harassment goes underreported due to the reliance on humans to identify and report harassment, reporting that is further slowed by requirements to fill out forms providing context. In addition, the time for moderators to respond and apply human judgment can take days, but response times in terms of minutes are needed in the online context. Though some of the major social media companies have been doing proprietary work in automating the detection of harassment, there are few tools available for use by the public. In addition, the amount of labeled online harassment data and availability of cross platform online harassment datasets is limited. 
We present the methodology used to create a harassment dataset and classifier and the dataset used to help the system learn what harassment looks like.","label_nlp4sg":1,"task":["identify online harassment"],"method":["classifier","harassment dataset"],"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"shifroni-ornan-1992-acquiring","url":"https:\/\/aclanthology.org\/A92-1042","title":"Acquiring and Exploiting the User's Knowledge in Guidance Interactions","abstract":"This paper presents a model for Flexible Interactive Guidance System (FIGS) that provides people with instructions about natural tasks. The model is developed on the basis of a phenomenological analysis of human guidance and illustrated by a system that gives directions in geographical domains. The instructions are provided through a dialog adapted both in form and content to user's needs. The main problem addressed is how to provide a user-adapted guidance during the normal course of the guidance dialog, without introducing a special time consuming sub-dialog to gain information about the user's state of knowledge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rashkin-etal-2020-plotmachines","url":"https:\/\/aclanthology.org\/2020.emnlp-main.349","title":"PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking","abstract":"We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent narrative that is consistent with the provided outline. This task is challenging as the input only provides a rough sketch of the plot, and thus, models need to generate a story by interweaving the key points provided in the outline. This requires the model to keep track of the dynamic states of the latent plot, conditioning on the input outline while generating the full story. We present PLOTMACHINES, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states. In addition, we enrich PLOTMACHINES with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative. Comprehensive experiments over three fiction and non-fiction datasets demonstrate that large-scale language models, such as GPT-2 and GROVER, despite their impressive generation performance, are not sufficient in generating coherent narratives for the given outline, and dynamic plot state tracking is important for composing narratives with tighter, more consistent plots.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reveiwers for their insightful feedback. We also thank Rowan Zellers and Ari Holtzman for their input on finetuning GROVER and other language models, Maarten Sap for his feedback on human evaluations, and Elizabeth Clark for consulting on baselines and related work. 
We would also like to thank various members of the MSR AI and UW NLP communities who provided feedback on various other aspects of this work. This research was supported in part by DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2000-phrase","url":"https:\/\/aclanthology.org\/P00-1005","title":"Phrase-Pattern-based Korean to English Machine Translation using Two Level Translation Pattern Selection","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"teh-2006-hierarchical","url":"https:\/\/aclanthology.org\/P06-1124","title":"A Hierarchical Bayesian Language Model Based On Pitman-Yor Processes","abstract":"We propose a new hierarchical Bayesian n-gram model of natural languages. Our model makes use of a generalization of the commonly used Dirichlet distributions called Pitman-Yor processes which produce power-law distributions more closely resembling those in natural languages. We show that an approximation to the hierarchical Pitman-Yor language model recovers the exact formulation of interpolated Kneser-Ney, one of the best smoothing methods for n-gram language models. Experiments verify that our model gives cross entropy results superior to interpolated Kneser-Ney and comparable to modified Kneser-Ney.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I wish to thank the Lee Kuan Yew Endowment Fund for funding, Joshua Goodman for answering many questions regarding interpolated Kneser-Ney and smoothing techniques, John Blitzer and Yoshua Bengio for help with datasets, Anoop Sarkar for interesting discussion, and Hal Daume III, Min Yen Kan and the anonymous reviewers for","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"han-etal-2013-language","url":"https:\/\/aclanthology.org\/2013.mtsummit-posters.3","title":"Language-independent Model for Machine Translation Evaluation with Reinforced Factors","abstract":"Conventional machine translation evaluation metrics tend to perform well on certain language pairs but weakly on others. Furthermore, some evaluation metrics only work on certain language pairs and are not language-independent. Finally, ignoring linguistic information usually leads to metrics that correlate poorly with human judgments, while too many linguistic features or external resources make metrics complicated and difficult to replicate. To address these problems, a novel language-independent evaluation metric is proposed in this work with enhanced factors and a small amount of optional linguistic information (part-of-speech, n-grams).
To make the metric perform well on different language pairs, extensive factors are designed to reflect the translation quality, and the assigned parameter weights are tunable according to the special characteristics of the focused language pairs. Experiments show that this novel evaluation metric yields better performance compared with several classic evaluation metrics (including BLEU, TER and METEOR) and two state-of-the-art ones, ROSE and MPF.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-2019-lijunyi","url":"https:\/\/aclanthology.org\/S19-2212","title":"Lijunyi at SemEval-2019 Task 9: An attention-based LSTM and ensemble of different models for suggestion mining from online reviews and forums","abstract":"In this paper, we describe a suggestion mining system that participated in SemEval 2019 Task 9, SubTask A, Suggestion Mining from Online Reviews and Forums. The task provides suggestions from online reviews and forums that are to be classified into suggestion and non-suggestion classes. In this task, we combine the attention mechanism with the LSTM model, which is the final system we submitted. The final submission achieves 14th place in Task 9, SubTask A with an accuracy of 0.6776. After the challenge, we train a series of neural network models such as the convolutional neural network (CNN), TextCNN, long short-term memory (LSTM) and C-LSTM. Finally, we ensemble the predictions of these models and obtain a better result.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johannessen-etal-2010-enhancing","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/308_Paper.pdf","title":"Enhancing Language Resources with Maps","abstract":"We will look at how maps can be integrated in research resources, such as language databases and language corpora. By using maps, search results can be illustrated in a way that immediately gives the user information that words or numbers on their own would not give. We will illustrate with two different resources, into which we have now integrated a Google Maps application: The Nordic Dialect Corpus (Johannessen et al. 2009) and The Nordic Syntactic Judgments Database (Lindstad et al. 2009). The database contains some hundred syntactic test sentences that have been evaluated by four speakers in more than a hundred locations in Norway and Sweden. Searching for the evaluations of a particular sentence gives a list of several hundred judgments, which are difficult for a human researcher to assess. With the map option, isoglosses are immediately visible. We show in the paper that both with the maps depicting corpus hits and with the maps depicting database results, the map visualizations actually show clear geographical differences that would be very difficult to spot just by reading concordance lines or database tables.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper owes a lot to a number of research networks, funding bodies and individuals.
First we should mention the Scandinavian Dialect Syntax research network (ScanDiaSyn), and the Nordic Centre of Excellence in Microcomparative Syntax (NORMS), under whose umbrellas the present work has been conceived, planned and developed. We would further like to thank those who have given us non-Norwegian material for the corpus and database, viz. Swedia 2000 (Anders Eriksson) for Swedish, H\u00e1skoli Islands (\u00c1sta Svavarsd\u00f3ttir) for Icelandic, DanDiaSyn (Henrik J\u00f8rgensen), and who helped us to get recordings from Faroese (Zakaris Svabo Hansen). Also, we would like to thank our former colleague Arne Martinus Lindstad for the work he has done towards the linguistic categorisation of the Nordic Syntactic Judgments Database. One central person in the overall project to be mentioned especially is \u00d8ystein Alexander Vangsnes. Numerous other people have contributed to data collection, recording, transcription, tagging and preparation of data in different ways. We refer to the Nordic Dialect Homepage for more details on these. Finally, a number of funding bodies have contributed directly to the development of the corpus and database: The Research Council of Norway, The University of Oslo and the Nordic Research Councils NOS-HS and Nordforsk.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grover-tobin-2006-rule","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/457_pdf.pdf","title":"Rule-Based Chunking and Reusability","abstract":"In this paper we discuss a rule-based approach to chunking implemented using the LT-XML2 and LT-TTT2 tools. We describe the tools and the pipeline and grammars that have been developed for the task of chunking. We show that our rule-based approach is easy to adapt to different chunking styles and that the markup of further linguistic information such as nominal and verbal heads can be added to the rules at little extra cost. We evaluate our chunker against the CoNLL 2000 data and discuss discrepancies between our output and the CoNLL markup as well as discrepancies within the CoNLL data itself. We contrast our results with the higher scores obtained using machine learning and argue that the portability and flexibility of our approach still make it a more practical solution.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by a Scottish Enterprise Edinburgh-Stanford Link Grant (R37588), as part of the EASIE project. We would like to thank Ewan Klein for comments on the chunker output and on drafts of this paper. This work has built on the efforts of all those involved in the development of earlier versions of our software, LT-XML, LT-TTT and LT-CHUNK, in particular Andrei Mikheev.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pacheco-etal-2012-feasibility","url":"https:\/\/aclanthology.org\/N12-1082","title":"On The Feasibility of Open Domain Referring Expression Generation Using Large Scale Folksonomies","abstract":"Generating referring expressions has received considerable attention in Natural Language Generation. In recent years we start seeing deployments of referring expression generators moving away from limited domains with custom-made ontologies. 
In this work, we explore the feasibility of using large-scale noisy ontologies (folksonomies) for open domain referring expression generation, an important task for summarization by re-generation. Our experiments on a fully annotated anaphora resolution training set and a larger, volunteer-submitted news corpus show that existing algorithms are efficient enough to deal with large-scale ontologies but need to be extended to deal with undefined values and some measure of information salience.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers as well as Annie Ying and Victoria Reggiardo.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"arkhangelskiy-2021-low","url":"https:\/\/aclanthology.org\/2021.iwclul-1.5","title":"Low-Resource ASR with an Augmented Language Model","abstract":"It is widely known that a good language model (LM) can dramatically improve the quality of automatic speech recognition (ASR). However, when dealing with a low-resource language, it is often the case that not only aligned audio data is scarce, but there are also not enough texts to train a good LM. This is the case of Beserman, an unwritten dialect of Udmurt (Uralic > Permic). With about 10 hours of aligned audio and about 164K words of texts available for training, the word error rate of a Deepspeech model with the best set of parameters equals 56.4%. However, there are other linguistic resources available for Beserman, namely a bilingual Beserman-Russian dictionary and a rule-based morphological analyzer. The goal of this paper is to explore whether and how these additional resources can be exploited to improve the ASR quality. Specifically, I attempt to use them in order to expand the existing LM by generating a large number of fake sentences that in some way look like genuine Beserman text. It turns out that a sophisticated enough augmented LM generator can indeed improve the ASR quality. Nevertheless, the improvement is far from dramatic, with about 5% decrease in word error rate (WER) and 2% decrease in character error rate (CER).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project ID 428175960.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"concordia-etal-2020-store","url":"https:\/\/aclanthology.org\/2020.lr4sshoc-1.1","title":"Store Scientific Workflows Data in SSHOC Repository","abstract":"Today scientific workflows are used by scientists as a way to define automated, scalable, and portable in-silico experiments. Having a formal description of an experiment can improve its replicability and reproducibility. However, simply publishing the workflow may not be enough to achieve reproducibility and re-usability; in particular, the workflow description should be enriched with provenance data generated during the workflow life cycle.
This paper presents a software framework being designed and developed in the context of the Social Sciences and Humanities Open Cloud (SSHOC) project, whose overall objective is to realise the social sciences and humanities' part of European Open Science Cloud initiative. The framework will implement functionalities to use the SSHOC Repository service as a cloud repository for scientific workflows.","label_nlp4sg":1,"task":["Store Scientific Workflows"],"method":["software framework"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eacl-1985-european","url":"https:\/\/aclanthology.org\/E85-1000","title":"Second Conference of the European Chapter of the Association for Computational Linguistics","abstract":"It was a great pleasure to be able to act as hosts for the Second Conference of the European Chapter of the Association for Computational Linguistics. Most of the credit for the success of the conference must go to those who submitted papers. There were almost twice as many as we were able to accept, thus allowing us to arrange a programme which, in its variety and quality, reflected the wide-ranging interests of European scholars in the field, although drawing upon the international computational linguistic community for its composition. The success of this meeting, and of its predecessor in Pisa, confirms that the establishment of a European Chapter to act in concert with its American-based older brother was indeed worthwhile. The growing membership and the large attendance at the Conference show that the Chapter is serving one of its purposes in stimulating research in the area.\nThanks are due to all those who contributed to the success of the meeting: to the members of the programme committee, whose job was by no means trivial, since each member took individual responsibility for a specific area of the programme; to the referees; and to all those who helped with the organisation. Special thanks should go to Mike Rosner and to Martine Vermeire, for all the energy and enthusiasm they put in to seeing that the practical arrangements went smoothly; to Kirsten Falkedal for her invaluable help in arranging the programme; and to Don Walker and Eva Hajicova for holding our hands when we needed them to. Finally, let me repeat that the most important support comes from the members of the association, whose enthusiasm has ensured that the European Chapter has grown from conception to a living and flourishing organisation in so short a time. May it continue to flourish. ","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fossum-etal-2008-using","url":"https:\/\/aclanthology.org\/W08-0306","title":"Using Syntax to Improve Word Alignment Precision for Syntax-Based Machine Translation","abstract":"Word alignments that violate syntactic correspondences interfere with the extraction of string-to-tree transducer rules for syntaxbased machine translation. We present an algorithm for identifying and deleting incorrect word alignment links, using features of the extracted rules. 
We obtain gains in both alignment quality and translation quality in Chinese-English and Arabic-English translation experiments relative to a GIZA++ union baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Steven DeNeefe and Wei Wang for assistance with experiments, and Alexander Fraser and Liang Huang for helpful discussions. This research was supported by DARPA (contract HR0011-06-C-0022) and by a fellowship from AT&T Labs.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"odonnell-2008-demonstration","url":"https:\/\/aclanthology.org\/P08-4004","title":"Demonstration of the UAM CorpusTool for Text and Image Annotation","abstract":"This paper introduces the main features of the UAM CorpusTool, software for human and semi-automatic annotation of text and images. The demonstration will show how to set up an annotation project, how to annotate text files at multiple annotation levels, how to automatically assign tags to segments matching lexical patterns, and how to perform cross-layer searches of the corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The development of UAM CorpusTool was partially funded by the Spanish Ministry of Education and Science (MEC) under grant number HUM2005-01728\/FILO (the WOSLAC project).","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2020-evaluating","url":"https:\/\/aclanthology.org\/2020.bionlp-1.11","title":"Evaluating the Utility of Model Configurations and Data Augmentation on Clinical Semantic Textual Similarity","abstract":"In this paper, we apply pre-trained language models to the Semantic Textual Similarity (STS) task, with a specific focus on the clinical domain. In the low-resource setting of clinical STS, these large models tend to be impractical and prone to overfitting. Building on BERT, we study the impact of a number of model design choices, namely different fine-tuning and pooling strategies. We observe that the impact of domain-specific fine-tuning on clinical STS is much less than that in the general domain, likely due to the concept richness of the domain. Based on this, we propose two data augmentation techniques. Experimental results on N2C2-STS demonstrate substantial improvements, validating the utility of the proposed methods.","label_nlp4sg":1,"task":["Clinical Semantic Textual Similarity"],"method":["language models","BERT"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vaidhya-kaushal-2020-iitkgp","url":"https:\/\/aclanthology.org\/2020.wnut-1.34","title":"IITKGP at W-NUT 2020 Shared Task-1: Domain specific BERT representation for Named Entity Recognition of lab protocol","abstract":"Supervised models trained to predict properties from representations have been achieving high accuracy on a variety of tasks. For instance, the BERT family seems to work exceptionally well on downstream tasks ranging from NER tagging to a range of other linguistic tasks.
But the vocabulary used in the medical field contains many tokens used only in the medical domain, such as the names of different diseases, devices, organisms, medicines, etc., which makes it difficult for the traditional BERT model to create contextualized embeddings. In this paper, we illustrate a system for named entity tagging based on Bio-Bert. Experimental results show that our model gives substantial improvements over the baseline and finished as the fourth runner-up in terms of F1 score and the first runner-up in terms of recall, just 2.21 F1 points behind the best system.","label_nlp4sg":1,"task":["Named Entity Recognition of lab protocol"],"method":["Bio - Bert"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the Computer Science and Engineering Department of the Indian Institute of Technology, Kharagpur for providing us with the computational resources required for performing various experiments. We are very grateful for the invaluable suggestions given by T.Y.S.S. Santosh and Aarushi Gupta. We also thank the organizers of the Shared Task-1 at WNUT, EMNLP-2020.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moradi-etal-2014-graph","url":"https:\/\/aclanthology.org\/W14-1102","title":"A Graph-Based Analysis of Medical Queries of a Swedish Health Care Portal","abstract":"Today web portals play an increasingly important role in health care, allowing information seekers to learn about diseases and treatments, and to administer their care. Therefore, it is important that the portals are able to support this process as well as possible. In this paper, we study the search logs of a public Swedish health portal to address the questions of whether health information seeking differs from other types of Internet search and whether there is a potential for utilizing network analysis methods in combination with semantic annotation to gain insights into search behaviors. Using a semantic-based method and a graph-based analysis of word co-occurrences in queries, we show there is an overlap among the results, indicating a potential role for these types of methods in gaining insights and facilitating improved information search. In addition, we show that samples, windows of a month, of search logs may be sufficient to obtain similar results as using larger windows. We also show that medical queries share the same structural properties found for other types of information searches, thereby indicating an ability to re-use existing analysis methods for this type of search data.","label_nlp4sg":1,"task":["health information seeking"],"method":["semantic - based method","Graph - Based Analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We are thankful to Adam Blomberg, CTO, Euroling AB for providing the log data.
We are also thankful for the support by the Centre for Language Technology (http:\/\/clt.gu.se).","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"drutsa-etal-2021-crowdsourcing","url":"https:\/\/aclanthology.org\/2021.naacl-tutorials.6","title":"Crowdsourcing Natural Language Data at Scale: A Hands-On Tutorial","abstract":"In this tutorial, we present a portion of unique industry experience in efficient natural language data annotation via crowdsourcing shared by both leading researchers and engineers from Yandex. We will make an introduction to data labeling via public crowdsourcing marketplaces and will present the key components of efficient label collection. This will be followed by a practical session, where participants address a real-world language resource production task, experiment with selecting settings for the labeling process, and launch their label collection project on one of the largest crowdsourcing marketplaces. The projects will be run on real crowds within the tutorial session and we will present useful quality control techniques and provide the attendees with an opportunity to discuss their own annotation ideas.\nTutorial Type: Introductory","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shalev-etal-2021-randomized","url":"https:\/\/aclanthology.org\/2021.maiworkshop-1.2","title":"On Randomized Classification Layers and Their Implications in Natural Language Generation","abstract":"In natural language generation tasks, a neural language model is used for generating a sequence of words forming a sentence. The topmost weight matrix of the language model, known as the classification layer, can be viewed as a set of vectors, each representing a target word from the target dictionary. The target word vectors, along with the rest of the model parameters, are learned and updated during training. In this paper, we analyze the properties encoded in the target vectors and question the necessity of learning these vectors. We suggest to randomly draw the target vectors and set them as fixed so that no weights updates are being made during training. We show that by excluding the vectors from the optimization, the number of parameters drastically decreases with a marginal effect on the performance. We demonstrate the effectiveness of our method in image-captioning and machine-translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-kok-etal-2017-pp","url":"https:\/\/aclanthology.org\/E17-2050","title":"PP Attachment: Where do We Stand?","abstract":"Prepositional phrase (PP) attachment is a well known challenge to parsing. 
In this paper, we combine the insights of different works, namely: (1) treating PP attachment as a classification task with an arbitrary number of attachment candidates; (2) using auxiliary distributions to augment the data beyond the hand-annotated training set; (3) using topological fields to get information about the distribution of PP attachment throughout clauses; and (4) using state-of-the-art techniques such as word embeddings and neural networks. We show that jointly using these techniques leads to substantial improvements. We also conduct a qualitative analysis to gauge where the ceiling of the task is in a realistic setup.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Financial support for the research reported in this paper was provided by the German Research Foundation (DFG) as part of the Collaborative Research Center \"The Construction of Meaning\" (SFB 833), project A3.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hubkova-etal-2020-czech","url":"https:\/\/aclanthology.org\/2020.lrec-1.549","title":"Czech Historical Named Entity Corpus v 1.0","abstract":"As the number of digitized archival documents increases very rapidly, named entity recognition (NER) in historical documents has become very important for information extraction and data mining. For this task an annotated corpus is needed, which has up to now been missing for Czech. In this paper we present a new annotated data collection for historical NER, composed of Czech historical newspapers. This corpus is freely available for research purposes at http:\/\/chnec.kiv.zcu.cz\/. For this corpus, we have defined relevant domain-specific named entity types and created an annotation manual for corpus labelling. We further conducted some experiments on this corpus using recurrent neural networks in order to show baseline results on this dataset. We experimented with randomly initialized embeddings and static and dynamic fastText word embeddings. We achieved a 0.73 F1 score with a bidirectional LSTM model using static fastText embeddings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partly supported by the Cross-border Cooperation Program Czech Republic - Free State of Bavaria ETS Objective 2014-2020 (project no. 211) and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"templeton-2021-word","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.12","title":"Word Equations: Inherently Interpretable Sparse Word Embeddings through Sparse Coding","abstract":"Word embeddings are a powerful natural language processing technique, but they are extremely difficult to interpret. To enable interpretable NLP models, we create vectors where each dimension is inherently interpretable. By inherently interpretable, we mean a system where each dimension is associated with some human-understandable hint that can describe the meaning of that dimension. In order to create more interpretable word embeddings, we transform pretrained dense word embeddings into sparse embeddings.
These new embeddings are inherently interpretable: each of their dimensions is created from and represents a natural language word or specific grammatical concept. We construct these embeddings through sparse coding, where each vector in the basis set is itself a word embedding. Therefore, each dimension of our sparse vectors corresponds to a natural language word. We also show that models trained using these sparse embeddings can achieve good performance and are more interpretable in practice, including through human evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Duane Bailey for his extensive support and advice, and for advising the thesis on which this paper is based. Thanks to Andrea Danyluk for her guidance as the second reader of that thesis.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kupiec-1989-probabilistic","url":"https:\/\/aclanthology.org\/H89-1054","title":"Probabilistic Models of Short and Long Distance Word Dependencies in Running Text","abstract":"This article describes two complementary models that represent dependencies between words in local and non-local contexts. The type of local dependencies considered are sequences of part of speech categories for words. The non-local context of word dependency considered here is that of word recurrence, which is typical in a text. Both are models of phenomena that are to a reasonable extent domain independent, and thus are useful for doing prediction in systems using large vocabularies. Modeling Part of Speech Sequences: A common method for modeling local word dependencies is by means of second order Markov models (also known as trigram models). In such a model the context for predicting word w_i at position i in a text consists","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Jan Pedersen of Xerox PARC for fruitful discussion and his comments. This work was sponsored in part by the Defense Advanced Research Projects Agency (DOD), under the Information Science and Technology Office, contract #N00140-86-C-8996.","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2006-comparison","url":"https:\/\/aclanthology.org\/P06-1069","title":"A Comparison and Semi-Quantitative Analysis of Words and Character-Bigrams as Features in Chinese Text Categorization","abstract":"Words and character-bigrams are both used as features in Chinese text processing tasks, but no systematic comparison or analysis of their values as features for Chinese text categorization has been reported heretofore.
We carry out here a full performance comparison between them by experiments on various document collections (including a manually word-segmented corpus as a golden standard), and a semi-quantitative analysis to elucidate the characteristics of their behavior; and try to provide some preliminary clues for feature term choice (in most cases, character-bigrams are better than words) and dimensionality setting in text categorization systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2011-topic","url":"https:\/\/aclanthology.org\/W11-1513","title":"Topic Modeling on Historical Newspapers","abstract":"In this paper, we explore the task of automatic text processing applied to collections of historical newspapers, with the aim of assisting historical research. In particular, in this first stage of our project, we experiment with the use of topical models as a means to identify potential issues of interest for historians.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kashefi-hwa-2021-contrapositive","url":"https:\/\/aclanthology.org\/2021.wnut-1.41","title":"Contrapositive Local Class Inference","abstract":"Certain types of classification problems may be performed at multiple levels of granularity; for example, we might want to know the sentiment polarity of a document or a sentence, or a phrase. Often, the prediction at a greater context (e.g., sentences or paragraphs) may be informative for a more localized prediction at a smaller semantic unit (e.g., words or phrases). However, directly inferring the most salient local features from the global prediction may overlook the semantics of this relationship. This work argues that inference along the contraposition relationship of the local prediction and the corresponding global prediction makes an inference framework that is more accurate and robust to noise. We show how this contraposition framework can be implemented as a transfer function that rewrites a greater-context from one class to another and demonstrate how an appropriate transfer function can be trained from a noisy user-generated corpus. The experimental results validate our insight that the proposed contrapositive framework outperforms the alternative approaches on resource-constrained problem domains. (Footnote: experimental settings were GloVe embeddings trained on the \"all the news\" corpus, \u03bb_1 = 1e-2, and \u03bb_2 = 2\u03bb_1.)","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments.
This material is based upon work supported by the National Science Foundation under Grant Number 1735752.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"somasundaran-etal-2009-opinion","url":"https:\/\/aclanthology.org\/W09-3210","title":"Opinion Graphs for Polarity and Discourse Classification","abstract":"This work shows how to construct discourse-level opinion graphs to perform a joint interpretation of opinions and discourse relations. Specifically, our opinion graphs enable us to factor in discourse information for polarity classification, and polarity information for discourse-link classification. This interdependent framework can be used to augment and improve the performance of local polarity and discourse-link classifiers.","label_nlp4sg":1,"task":["Discourse Classification"],"method":["Opinion Graphs"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"chen-etal-2019-self","url":"https:\/\/aclanthology.org\/N19-1255","title":"Self-Discriminative Learning for Unsupervised Document Embedding","abstract":"Unsupervised document representation learning is an important task providing pre-trained features for NLP applications. Unlike most previous work which learn the embedding based on self-prediction of the surface of text, we explicitly exploit the inter-document information and directly model the relations of documents in embedding space with a discriminative network and a novel objective. Extensive experiments on both small and large public datasets show the competitiveness of the proposed method. In evaluations on standard document classification, our model has errors that are relatively 5 to 13% lower than state-ofthe-art unsupervised embedding models. The reduction in error is even more pronounced in scarce label setting.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by Microsoft Research Asia (MSRA) grant, and by Taiwan Ministry of Science and Technology (MOST) under grant number 108-2634-F-002 -019.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koleva-etal-2015-impact","url":"https:\/\/aclanthology.org\/P15-2133","title":"The Impact of Listener Gaze on Predicting Reference Resolution","abstract":"We investigate the impact of listener's gaze on predicting reference resolution in situated interactions. We extend an existing model that predicts to which entity in the environment listeners will resolve a referring expression (RE). Our model makes use of features that capture which objects were looked at and for how long, reflecting listeners' visual behavior. We improve a probabilistic model that considers a basic set of features for monitoring listeners' movements in a virtual environment. Particularly, in complex referential scenes, where more objects next to the target are possible referents, gaze turns out to be beneficial and helps deciphering listeners' intention. 
We evaluate performance at several prediction times before the listener performs an action, obtaining a highly significant accuracy gain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the Cluster of Excellence on \"Multimodal Computing and Interaction\" of the German Excellence Initiative and the SFB 632 \"Information Structure\".","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"antoine-etal-2017-temporal","url":"https:\/\/aclanthology.org\/W17-7413","title":"Temporal@ODIL project: Adapting ISO-TimeML to syntactic treebanks for the temporal annotation of spoken speech","abstract":"This paper presents Temporal@ODIL, a project that aims at building the largest corpus annotated with temporal information on spoken French. The annotation is based on an adaptation of the ISO-TimeML standard that consists in grounding the annotation on a treebank and not on raw text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The Temporal@ODIL project will end in spring 2018. The resulting corpus, providing a 100,000 words syntactic annotation layer, and a 20,000 words temporal annotation layer, will be freely available from June 2018 under Creative Commons CC-BY-SA license.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-etal-1995-corpus","url":"https:\/\/aclanthology.org\/1995.tmi-1.27","title":"A Corpus-based Two-Way Design for Parameterized MT Systems: Rationale, Architecture and Training Issues","abstract":"In many conventional MT systems, the translation output of a machine translation system is strongly affected by the sentence patterns of the source language due to the one-way processing steps from analysis to transfer and then to generation, which tend to produce literal translations that are not natural to native speakers. Such literal translations, however, are usually not suitable for direct publication unless a great deal of post-editing effort is made. In this paper, we will propose a training paradigm for acquiring the transfer and translation knowledge in a corpus-based parameterized MT system from a bilingual corpus with a two-way training method. In such a training paradigm, the knowledge is acquired from both the source sentences and the target sentences. It is thus possible to prevent the translated output from being affected by the source sentence patterns. Training methods for adapting the parameter set to various specific user styles are also suggested for the particular needs in restricted domains.
Because it provides a flexible way to adapt the system to various domains (or sublanguages), it is expected to be a promising paradigm for producing high-quality translation according to user-preferred styles.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xiong-litman-2011-understanding","url":"https:\/\/aclanthology.org\/W11-1402","title":"Understanding Differences in Perceived Peer-Review Helpfulness using Natural Language Processing","abstract":"Identifying peer-review helpfulness is an important task for improving the quality of feedback received by students, as well as for helping students write better reviews. As we tailor standard product review analysis techniques to our peer-review domain, we notice that peer-review helpfulness differs not only between students and experts but also between types of experts. In this paper, we investigate how different types of perceived helpfulness might influence the utility of features for automatic prediction. Our feature selection results show that certain low-level linguistic features are more useful for predicting student perceived helpfulness, while high-level cognitive constructs are more effective in modeling experts' perceived helpfulness.","label_nlp4sg":1,"task":["Identifying peer - review helpfulness"],"method":["feature selection"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Learning Research and Development Center at the University of Pittsburgh. We thank Melissa Patchan and Chris Schunn for generously providing the manually annotated peer-review corpus. We are also grateful to Michael Lipschultz and Chris Schunn for their feedback while writing this paper.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"garoufi-koller-2011-combining","url":"https:\/\/aclanthology.org\/W11-2815","title":"Combining symbolic and corpus-based approaches for the generation of successful referring expressions","abstract":"We present an approach to the generation of referring expressions (REs) which computes the unique RE that it predicts to be fastest for the hearer to resolve. The system operates by learning a maximum entropy model for referential success from a corpus and using the model's weights as costs in a metric planning problem. Our system outperforms the baselines both on predicted RE success and on similarity to human-produced successful REs. A task-based evaluation in the context of the GIVE-2.5 Challenge on Generating Instructions in Virtual Environments verifies the higher RE success scores of the system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are thankful to Ivan Titov and Verena Rieser for fruitful discussions about the maxent model, and to our reviewers for their many thoughtful comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"amason-etal-2019-harvey","url":"https:\/\/aclanthology.org\/S19-2166","title":"Harvey Mudd College at SemEval-2019 Task 4: The D.X.
Beaumont Hyperpartisan News Detector","abstract":"We use the 600 hand-labelled articles from SemEval Task 4 (Kiesel et al., 2019) to hand-tune a classifier with 3000 features for the Hyperpartisan News Detection task. Our final system uses features based on bag-of-words (BoW), analysis of the article title, language complexity, and simple sentiment analysis in a naive Bayes classifier. We trained our final system on the 600,000 articles labelled by publisher. Our final system has an accuracy of 0.653 on the hand-labeled test set. The most effective features are the Automated Readability Index and the presence of certain words in the title. This suggests that hyperpartisan writing uses a distinct writing style, especially in the title.","label_nlp4sg":1,"task":["Hyperpartisan News Detection"],"method":["bag - of - words","naive Bayes classifier"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to thank our stalwart grutor (a Harvey Mudd portmanteau of grader and tutor!), Jonah Rubin, for his help at all hours on our coursework during the semester that led to this system submission.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"nakov-ng-2009-improved","url":"https:\/\/aclanthology.org\/D09-1141","title":"Improved Statistical Machine Translation for Resource-Poor Languages Using Related Resource-Rich Languages","abstract":"We propose a novel language-independent approach for improving statistical machine translation for resource-poor languages by exploiting their similarity to resource-rich ones. More precisely, we improve the translation from a resource-poor source language X1 into a resource-rich language Y given a bi-text containing a limited number of parallel sentences for X1-Y and a larger bi-text for X2-Y for some resource-rich language X2 that is closely related to X1. The evaluation for Indonesian\u2192English (using Malay) and Spanish\u2192English (using Portuguese and pretending Spanish is resource-poor) shows an absolute gain of up to 1.35 and 3.37 Bleu points, respectively, which is an improvement over the rivaling approaches, while using much less additional data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by research grant POD0713875.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zheng-yu-2015-identifying","url":"https:\/\/aclanthology.org\/W15-2615","title":"Identifying Key Concepts from EHR Notes Using Domain Adaptation","abstract":"Linking electronic health records (EHRs) to relevant education materials can provide patient-centered tailored education which can potentially improve patients' medical knowledge, self-management and clinical outcome. It is shown that EHR query generation using key concept identification improves retrieval of education materials. In this study, we explored domain adaptation approaches to improve key concept identification. Our experiments show that a 20.7% improvement in the F1 measure can be achieved by leveraging data from Wikipedia.
Queries generated from the best-performing approach achieved a 20.6% and 27.8% improvement over the queries generated from the baseline approach.","label_nlp4sg":1,"task":["Identifying Key Concepts"],"method":["Domain Adaptation"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the Award ","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aharodnik-etal-2013-automatic","url":"https:\/\/aclanthology.org\/I13-1200","title":"Automatic Identification of Learners' Language Background Based on Their Writing in Czech","abstract":"The goal of this study is to investigate whether learners' written data in highly inflectional Czech can suggest a consistent set of clues for automatic identification of the learners' L1 background. For our experiments, we use texts written by learners of Czech, which have been automatically and manually annotated for errors. We define two classes of learners: speakers of Indo-European languages and speakers of non-Indo-European languages. We use an SVM classifier to perform the binary classification. We show that non-content based features perform well on highly inflectional data. In particular, features reflecting errors in orthography are the most useful, yielding about 89% precision and the same recall. A detailed discussion of the best performing features is provided.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the native speakers of Czech for their participation in our experiment and to Jan \u0160t\u011bp\u00e1nek for tailoring his questionnaire system to our needs. We would also like to thank Jing Peng and the anonymous reviewers for their comments. This material is based in part upon work supported by the Grant Agency of the Czech Republic P406\/10\/P328 and National Science Foundation under Grant Numbers 0916280 and 1048406. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goldberg-etal-2008-em","url":"https:\/\/aclanthology.org\/P08-1085","title":"EM Can Find Pretty Good HMM POS-Taggers (When Given a Good Start)","abstract":"We address the task of unsupervised POS tagging. We demonstrate that good results can be obtained using the robust EM-HMM learner when provided with good initial conditions, even with incomplete dictionaries. We present a family of algorithms to compute effective initial estimations p(t|w). We test the method on the task of full morphological disambiguation in Hebrew achieving an error reduction of 25% over a strong uniform distribution baseline. We also test the same method on the standard WSJ unsupervised POS tagging task and obtain results competitive with recent state-of-the-art methods, while using simple and efficient learning methods.
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bouma-etal-2018-expletives","url":"https:\/\/aclanthology.org\/W18-6003","title":"Expletives in Universal Dependency Treebanks","abstract":"Although treebanks annotated according to the guidelines of Universal Dependencies (UD) now exist for many languages, the goal of annotating the same phenomena in a cross-linguistically consistent fashion is not always met. In this paper, we investigate one phenomenon where we believe such consistency is lacking, namely expletive elements. Such elements occupy a position that is structurally associated with a core argument (or sometimes an oblique dependent), yet are non-referential and semantically void. Many UD treebanks identify at least some elements as expletive, but the range of phenomena differs between treebanks, even for closely related languages, and sometimes even for different treebanks for the same language. In this paper, we present criteria for identifying expletives that are applicable across languages and compatible with the goals of UD, give an overview of expletives as found in current UD treebanks, and present recommendations for the annotation of expletives so that more consistent annotation can be achieved in future releases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to two anonymous reviewers for constructive comments on the first version of the paper. Most of the work described in this article was conducted during the authors' stays at the Center for Advanced Study at the Norwegian Academy of Science and Letters.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kurniawan-etal-2020-ir3218","url":"https:\/\/aclanthology.org\/2020.semeval-1.263","title":"IR3218-UI at SemEval-2020 Task 12: Emoji Effects on Offensive Language IdentifiCation","abstract":"In this paper, we present our approach and the results of our participation in OffensEval 2020. There are three sub-tasks in OffensEval 2020, namely offensive language identification (sub-task A), automatic categorization of offense types (sub-task B), and offense target identification (sub-task C). We participated in sub-task A of English OffensEval 2020. Our approach emphasizes how the emoji affects offensive language identification. Our model used LSTM combined with GloVe pre-trained word vectors to identify offensive language on social media. The best model obtained a macro F1-score of 0.88428.","label_nlp4sg":1,"task":["Offensive Language IdentifiCation","automatic categorization of offense types","offense target identification"],"method":["LSTM","GloVe"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors gratefully thank Universitas Indonesia for the International Publication (PUTI Prosiding) Grants No.
NKB-877\/UN2.RST\/HKP.05.00\/2020 Year of 2020.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"nallani-etal-2020-fully","url":"https:\/\/aclanthology.org\/2020.wildre-1.8","title":"A Fully Expanded Dependency Treebank for Telugu","abstract":"Treebanks are an essential resource for syntactic parsing. The available Paninian dependency treebank(s) for Telugu is annotated only with inter-chunk dependency relations and not all words of a sentence are part of the parse tree. In this paper, we automatically annotate the intra-chunk dependencies in the treebank using a Shift-Reduce parser based on Context Free Grammar rules for Telugu chunks. We also propose a few additional intra-chunk dependency relations for Telugu apart from the ones used in Hindi treebank. Annotating intra-chunk dependencies finally provides a complete parse tree for every sentence in the treebank. Having a fully expanded treebank is crucial for developing end-to-end parsers which produce complete trees. We present a fully expanded dependency treebank for Telugu consisting of 3220 sentences. In this paper, we also convert the treebank annotated with Anncorra part-of-speech tagset to the latest BIS tagset. The BIS tagset is a hierarchical tagset adopted as a unified part-of-speech standard across all Indian Languages. The final treebank is made publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Himanshu Sharma for making the Hindi tagset converter code available and Parameshwari Krishnamurthy and Pruthwik Mishra for providing relevant input. We also thank all the reviewers for their insightful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2022-shot","url":"https:\/\/aclanthology.org\/2022.acl-long.43","title":"Few-Shot Class-Incremental Learning for Named Entity Recognition","abstract":"Previous work on class-incremental learning for Named Entity Recognition (NER) relies on the assumption that there exists an abundance of labeled data for the training of new classes. In this work, we study a more challenging but practical problem, i.e., few-shot class-incremental learning for NER, where an NER model is trained with only a few labeled samples of the new classes, without forgetting knowledge of the old ones. To alleviate the problem of catastrophic forgetting in few-shot class-incremental learning, we generate synthetic data of the old classes using the trained NER model, augmenting the training of new classes. We further develop a framework that distills from the NER model from previous steps with both synthetic data and real data from the current training set. Experimental results show that our approach achieves significant improvements over existing baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was carried out during an internship at Adobe Research. Further, it was supported by NIH (NINDS 1R61NS120246), DARPA (FA8650-18-2-7832-P00009-12) and ONR (N00014-18-1-2871-P00002-3).
We thank all the researchers involved from Adobe Research and the support from Duke University.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-2014-extraction","url":"https:\/\/aclanthology.org\/W14-6830","title":"Extraction system for Personal Attributes Extraction of CLP2014","abstract":"This paper presents the design and implementation of our extraction system for Personal Attributes Extraction in Chinese Text (task 4 of CLP2014). The objective of this task is to extract attribute values of the given personal name. Our extraction system employs a linguistic analysis followed by a dependency-pattern matching technique.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the National Research Agency, the project for reference ANR-09-CSOSG-08-01, for their help in producing this work.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"huang-etal-2017-addressing","url":"https:\/\/aclanthology.org\/I17-1019","title":"Addressing Domain Adaptation for Chinese Word Segmentation with Global Recurrent Structure","abstract":"Boundary features are widely used in traditional Chinese Word Segmentation (CWS) methods as they can utilize unlabeled data to help improve the Out-of-Vocabulary (OOV) word recognition performance. Although various neural network methods for CWS have achieved performance competitive with state-of-the-art systems, these methods, constrained by the domain and size of the training corpus, do not work well in domain adaptation. In this paper, we propose a novel BLSTM-based neural network model which incorporates a global recurrent structure designed for modeling boundary features dynamically. Experiments show that the proposed structure can effectively boost the performance of Chinese Word Segmentation, especially OOV-Recall, which brings benefits to domain adaptation. We achieved state-of-the-art results on 6 domains of CNKI articles, and competitive results to the best reported on the 4 domains of SIGHAN Bakeoff 2010 data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Our work is supported by National Natural Science Foundation of China (No.61370117 & No.61433015). The corresponding author of this paper is Houfeng Wang.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gros-etal-2006-si","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/111_pdf.pdf","title":"SI-PRON: A Pronunciation Lexicon for Slovenian","abstract":"We present the efforts involved in designing SI-PRON, a comprehensive machine-readable pronunciation lexicon for Slovenian. It has been built from two sources and contains all the lemmas from the Dictionary of Standard Slovenian (SSKJ), the most frequent inflected word forms found in contemporary Slovenian texts, and a first pass of inflected word forms derived from SSKJ lemmas. The lexicon file contains the orthography, corresponding pronunciations, lemmas and morphosyntactic descriptors of lexical entries in a format based on requirements defined by the W3C Voice Browser Activity.
The current version of the SI-PRON pronunciation lexicon contains over 1.4 million lexical entries. The word list determination procedure, the generation and validation of phonetic transcriptions, and the lexicon format are described in the paper. Along with Onomastica, SI-PRON presents a valuable language resource for linguistic studies and research on speech technologies for Slovenian. The lexicon is already being used by the AlpSynth Slovenian text-to-speech synthesis system and for generating audio samples of the SSKJ word list.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded in part by the Slovenian Research Agency and the Slovenian Ministry of Defense, under contracts no. L6-5405, no. V2-0896, and no. M2-0019.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2022-subgraph","url":"https:\/\/aclanthology.org\/2022.acl-long.396","title":"Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering","abstract":"Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. The desired subgraph is crucial as a small one may exclude the answer but a large one might introduce more noise. However, the existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning on the partial subgraphs, which increases the reasoning bias when the intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training as well as end-to-end fine-tuning, SR achieves new state-of-the-art performance when combined with NSM (He et al., 2021), a subgraph-oriented reasoner, for embedding-based KBQA methods. Codes and datasets are available online.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by National Natural Science Foundation of China (62076245, 62072460, 62172424); National Key Research & Develop Plan(2018YFB1004401); Beijing Natural Science Foundation (4212022); CCF-Tencent Open Fund.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jang-etal-2020-exploratory","url":"https:\/\/aclanthology.org\/2020.nlpcovid19-2.18","title":"Exploratory Analysis of COVID-19 Related Tweets in North America to Inform Public Health Institutes","abstract":"Social media is a rich source where we can learn about people's reactions to social issues. As COVID-19 has significantly impacted people's lives, it is essential to capture how people react to public health interventions and understand their concerns. In this paper, we aim to investigate people's reactions and concerns about COVID-19 in North America, especially focusing on Canada. We analyze COVID-19 related tweets using topic modeling and aspect-based sentiment analysis, and interpret the results with public health experts. We compare the timeline of topics discussed with the timing of implementation of public health interventions for COVID-19.
We also examine people's sentiment about COVID-19 related issues. We discuss how the results can be helpful for public health agencies when designing a policy for new interventions. Our work shows how Natural Language Processing (NLP) techniques could be applied to public health questions with domain expert involvement.","label_nlp4sg":1,"task":["Inform Public Health Institutes"],"method":["Exploratory Analysis","topic modeling"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"goel-sharma-2019-usf","url":"https:\/\/aclanthology.org\/S19-2139","title":"USF at SemEval-2019 Task 6: Offensive Language Detection Using LSTM With Word Embeddings","abstract":"In this paper, we present a system description for the SemEval-2019 Task 6 submitted by our team. For the task, our system takes a tweet as input and determines if the tweet is offensive or non-offensive (Sub-task A). In case a tweet is offensive, our system identifies if the tweet is targeted (insult or threat) or non-targeted like swearing (Sub-task B). In targeted tweets, our system identifies the target as an individual or group (Sub-task C). We used data pre-processing techniques like splitting hashtags into words, removing special characters, stop-word removal, stemming, lemmatization, capitalization, and an offensive word dictionary. Later, we used the Keras tokenizer and word embeddings for feature extraction. For classification, we used the LSTM (Long short-term memory) model of the Keras framework. Our accuracy scores for Sub-tasks A, B and C are 0.8128, 0.8167 and 0.3662 respectively. Our results indicate that fine-grained classification to identify the offense target was difficult for the system. Lastly, in the future scope section, we discuss ways to improve system performance.","label_nlp4sg":1,"task":["Offensive Language Detection"],"method":["LSTM","Word Embeddings"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"mou-etal-2019-discreteness","url":"https:\/\/aclanthology.org\/D19-2005","title":"Discreteness in Neural Natural Language Processing","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jo-etal-2017-modeling","url":"https:\/\/aclanthology.org\/D17-1232","title":"Modeling Dialogue Acts with Content Word Filtering and Speaker Preferences","abstract":"We present an unsupervised model of dialogue act sequences in conversation. By modeling topical themes as transitioning more slowly than dialogue acts in conversation, our model de-emphasizes content-related words in order to focus on conversational function words that signal dialogue acts. We also incorporate speaker tendencies to use some acts more than others as an additional predictor of dialogue act prevalence beyond temporal dependencies.
According to the evaluation presented on two dissimilar corpora, the CNET forum and NPS Chat corpus, the effectiveness of each modeling assumption is found to vary depending on characteristics of the data. De-emphasizing content-related words yields improvement on the CNET corpus, while utilizing speaker tendencies is advantageous on the NPS corpus. The components of our model complement one another to achieve robust performance on both corpora and outperform state-of-the-art baseline models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Kwanjeong Educational Foundation, NIH grant R01HL122639, and NSF grant IIS-1546393.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sacaleanu-etal-2003-cross","url":"https:\/\/aclanthology.org\/E03-2012","title":"A Cross Language Document Retrieval System Based on Semantic Annotation","abstract":"The paper describes a cross-lingual document retrieval system in the medical domain that employs a controlled vocabulary (UMLS I) in constructing an XML-based intermediary representation into which queries as well as documents are mapped. The system assists in the retrieval of English and German medical scientific abstracts relevant to a German query document (electronic patient record). The modularity of the system allows for deployment in other domains, given appropriate linguistic and semantic resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chandu-etal-2017-tackling","url":"https:\/\/aclanthology.org\/W17-2307","title":"Tackling Biomedical Text Summarization: OAQA at BioASQ 5B","abstract":"In this paper, we describe our participation in phase B of task 5b of the fifth edition of the annual BioASQ challenge, which includes answering factoid, list, yes-no and summary questions from biomedical data. We describe our techniques with an emphasis on ideal answer generation, where the goal is to produce a relevant, precise, non-redundant, query-oriented summary from multiple relevant documents. We make use of extractive summarization techniques to address this task and experiment with different biomedical ontologies and various algorithms including agglomerative clustering, Maximum Marginal Relevance (MMR) and sentence compression. We propose a novel word embedding based tf-idf similarity metric and a soft positional constraint which improve our system performance. We evaluate our techniques on test batch 4 from the fourth edition of the challenge.
Our best system achieves a ROUGE-2 score of 0.6534 and ROUGE-SU4 score of 0.6536.","label_nlp4sg":1,"task":["Biomedical Text Summarization"],"method":["agglomerative clustering","word embedding based tf - idf similarity metric"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by grants from Accenture PLC (PI: Anatole Gershman), NSF IIS 1546393 and NHLBI R01 HL122639.","year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"webber-2011-invited","url":"https:\/\/aclanthology.org\/W11-4603","title":"Invited Paper: Discourse Structures and Language Technologies","abstract":"I want to tell a story about computational approaches to discourse structure. Like all such stories, it takes some liberty with actual events and times, but I think stories put things into perspective, and make it easier to understand where we are and how we might progress.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bod-1995-problem","url":"https:\/\/aclanthology.org\/E95-1015","title":"The Problem of Computing the Most Probable Tree in Data-Oriented Parsing and Stochastic Tree Grammars","abstract":"We deal with the question as to whether there exists a polynomial time algorithm for computing the most probable parse tree of a sentence generated by a data-oriented parsing (DOP) model (Scha, 1990; Bod, 1992, 1993a). Therefore we describe DOP as a stochastic tree-substitution grammar (STSG). In STSG, a tree can be generated by exponentially many derivations involving different elementary trees. The probability of a tree is equal to the sum of the probabilities of all its derivations. We show that in STSG, in contrast with stochastic context-free grammar, the Viterbi algorithm cannot be used for computing a most probable tree of a string. We propose a simple modification of Viterbi which allows by means of a \"select-random\" search to estimate the most probable tree of a string in polynomial time. Experiments with DOP on ATIS show that only in 68% of the cases, the most probable derivation of a string generates the most probable tree of that string. Therefore, the parse accuracy obtained by the most probable trees (96%) is dramatically higher than the parse accuracy obtained by the most probable derivations (65%). It is still an open question whether the most probable tree of a string can be deterministically computed in polynomial time.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author is indebted to Remko Scha for valuable comments on an earlier version of this paper, and to Khalil Sima'an for useful discussions.","year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gui-etal-2019-attention","url":"https:\/\/aclanthology.org\/D19-1117","title":"Attention Optimization for Abstractive Document Summarization","abstract":"Attention plays a key role in the improvement of sequence-to-sequence-based document summarization models.
To obtain a powerful attention model that helps reproduce the most salient information and avoid repetitions, we augment the vanilla attention model from both local and global aspects. We propose an attention refinement unit paired with local variance loss to impose supervision on the attention model at each decoding step, and a global variance loss to optimize the attention distributions of all decoding steps from the global perspective. Results on the CNN\/Daily Mail dataset verify the effectiveness of our methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ebrahimi-etal-2018-hotflip","url":"https:\/\/aclanthology.org\/P18-2006","title":"HotFlip: White-Box Adversarial Examples for Text Classification","abstract":"We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to the efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mathur-etal-2013-online","url":"https:\/\/aclanthology.org\/W13-2237","title":"Online Learning Approaches in Computer Assisted Translation","abstract":"We present a novel online learning approach for statistical machine translation tailored to the computer-assisted translation scenario. With the introduction of a simple online feature, we are able to adapt the translation model on the fly to the corrections made by the translators. Additionally, we do online adaptation of the feature weights with a large margin algorithm.
Our results show that our online adaptation technique outperforms the static phrase-based statistical machine translation system by 6 BLEU points absolute, and a standard incremental adaptation approach by 2 BLEU points absolute.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the MateCat project, which is funded by the EC under the 7th Framework Programme.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gunawardana-1984-speech","url":"https:\/\/aclanthology.org\/1984.bcs-1.35","title":"Speech input and output technology","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1984,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shim-etal-2021-synthetic","url":"https:\/\/aclanthology.org\/2021.wnut-1.29","title":"Synthetic Data Generation and Multi-Task Learning for Extracting Temporal Information from Health-Related Narrative Text","abstract":"Extracting temporal information is critical for processing health-related text. Temporal information extraction is a challenging task for language models because it requires processing both texts and numbers. Moreover, the fundamental challenge is how to obtain a large-scale training dataset. To address this, we propose a synthetic data generation algorithm. Also, we propose a novel multi-task temporal information extraction model and investigate whether multi-task learning can contribute to performance improvement by exploiting additional training signals with the existing training data. For experiments, we collected a custom dataset containing unstructured texts with temporal information of sleep-related activities. Experimental results show that utilising synthetic data can improve the performance when the augmentation factor is 3. The results also show that when multi-task learning is used with an appropriate amount of synthetic data, the performance can significantly improve from 82. to 88.6 and from 83.9 to 91.9 regarding micro- and macro-average exact match scores of normalised time prediction, respectively.","label_nlp4sg":1,"task":["Extracting Temporal Information from Health - Related Narrative Text"],"method":["Synthetic Data Generation","Multi - Task Learning"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank anonymous reviewers for providing valuable feedback on this work. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 766139. This article reflects only the author's view and the REA is not responsible for any use that may be made of the information it contains.","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"elazar-etal-2021-back","url":"https:\/\/aclanthology.org\/2021.emnlp-main.819","title":"Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema","abstract":"The Winograd Schema (WS) has been proposed as a test for measuring commonsense capabilities of models.
Recently, pre-trained language model-based approaches have boosted performance on some WS benchmarks but the source of improvement is still not clear. This paper suggests that the apparent progress on WS may not necessarily reflect progress in commonsense reasoning. To support this claim, we first show that the current evaluation method of WS is sub-optimal and propose a modification that uses twin sentences for evaluation. We also propose two new baselines that indicate the existence of artifacts in WS benchmarks. We then develop a method for evaluating WS-like sentences in a zero-shot setting to account for the commonsense reasoning abilities acquired during pretraining and observe that popular language models perform randomly in this setting when using our more strict evaluation. We conclude that the observed progress is mostly due to the use of supervision in training WS models, which is not likely to successfully support all the required commonsense reasoning skills and knowledge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Vered Shwartz, Keisuke Sakaguchi, Rotem Dror, Niket Tandon, Vid Kocijan and Ernest Davis for helpful discussions and comments on early versions of this paper. We also thank the anonymous reviewers for their valuable suggestions. Yanai Elazar is grateful to be supported by the PBC fellowship for outstanding PhD candidates in Data Science and the Google PhD fellowship. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT) and from contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"studnicki-etal-1982-research","url":"https:\/\/aclanthology.org\/C82-2067","title":"The Research Project ``ANAPHORA'' (In Its Present State of Advancement)","abstract":"1. The aim of the project is to work out a method of resolving automatically the anaphoric clauses of a certain class, in particular those used in formulating interdocumentary cross-references in primary legal texts (statutory texts). By resolving an anaphoric clause of that class we mean the searching out of possibly all of its referends. The implementation of the planned method should enable the users of full-text legal data banks to obtain in search operations, apart from the documents satisfying the requirements defined in the usual queries, also such documents to which the former explicitly, or even implicitly, refer. The project has been planned as one composed of three parts. A report on the results of part I was presented at the Fifth ICCH Conference in Ann Arbor in May 1981. The present text aims at showing the main outlines of the approach applied in part II. To make some aspects of that part clear, however, certain references must be made to part I. Part III is, as yet, at the stage of preliminary discussions.\n2. The general approach applied in the whole of the project is of a semantic kind.
It has been assumed, in particular, that at a certain level of generalization all elementary anaphoric clauses of the above class (let us call them the a-clauses) have, in spite of the diversity of their types, an analogous semantic structure, which can be represented by a diagram decomposing the elementary a-clause into the anaphoric functor and the argument of the anaphoric functor, the latter comprising the standard of the argument and the specification of the argument.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"elson-etal-2010-extracting","url":"https:\/\/aclanthology.org\/P10-1015","title":"Extracting Social Networks from Literary Fiction","abstract":"We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel's setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literary scholars. Instead, our results suggest an alternative explanation for differences in social networks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based on research supported in part by the U.S. National Science Foundation (NSF) under IIS-0935360. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-ma-2020-dual","url":"https:\/\/aclanthology.org\/2020.coling-main.283","title":"Dual Attention Model for Citation Recommendation","abstract":"Given an exponentially increasing number of academic articles, discovering and citing comprehensive and appropriate resources has become a non-trivial task. Conventional citation recommender methods suffer from severe information loss. For example, they do not consider the section of the paper that the user is writing and for which they need to find a citation, the relatedness between the words in the local context (the text span that describes a citation), or the importance of each word from the local context. These shortcomings make such methods insufficient for recommending adequate citations to academic manuscripts. In this study, we propose a novel embedding-based neural network called \"dual attention model for citation recommendation (DACR)\" to recommend citations during manuscript preparation. Our method adapts embeddings of three kinds of semantic information: words in the local context, structural contexts, and the section on which a user is working. A neural network model is designed to maximize the similarity between the embeddings of the three inputs (local context words, section, and structural contexts) and the target citation appearing in the context.
The core of the neural network model is composed of self-attention and additive attention, where the former aims to capture the relatedness between the contextual words and structural context, and the latter aims to learn their importance. The experiments on real-world datasets demonstrate the effectiveness of the proposed approach.","label_nlp4sg":1,"task":["Citation Recommendation"],"method":["Dual Attention Model","embedding - based neural network"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"This research has been supported in part by JSPS KAKENHI under Grant Number 19H04116 and by MIC SCOPE under Grant Numbers 201607008 and 172307001. ","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"stoyanov-etal-2012-context","url":"https:\/\/aclanthology.org\/W12-3012","title":"A Context-Aware Approach to Entity Linking","abstract":"Entity linking refers to the task of assigning mentions in documents to their corresponding knowledge base entities. Entity linking is a central step in knowledge base population. Current entity linking systems do not explicitly model the discourse context in which the communication occurs. Nevertheless, the notion of shared context is central to the linguistic theory of pragmatics and plays a crucial role in Grice's cooperative communication principle. Furthermore, modeling context facilitates joint resolution of entities, an important problem in entity linking yet to be addressed satisfactorily. This paper describes an approach to context-aware entity linking.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grigonyte-etal-2011-experiments","url":"https:\/\/aclanthology.org\/W11-4612","title":"Experiments on Lithuanian Term Extraction","abstract":"This paper explores the problem of extracting domain-specific terminology in the field of science and education from Lithuanian texts. Four different term extraction approaches have been applied and evaluated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The presented research is funded by a grant (No. LIT-2-44) from the Research Council of Lithuania in the framework of the project \"\u0160vietimo ir mokslo termin\u0173 automatinis identifikavimas -\u0160IMTAI 2\" (Automatic Identification of Education and Science Terms). The authors would like to thank anonymous reviewers for their comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-joshi-2003-snow","url":"https:\/\/aclanthology.org\/P03-1064","title":"A SNoW Based Supertagger with Application to NP Chunking","abstract":"Supertagging is the tagging process of assigning the correct elementary tree of LTAG, or the correct supertag, to each word of an input sentence. In this paper we propose to use supertags to expose syntactic dependencies which are unavailable with POS tags. We first propose a novel method of applying Sparse Network of Winnow (SNoW) to sequential models.
Then we use it to construct a supertagger that uses long-distance syntactic dependencies, and the supertagger achieves high accuracy. We apply the supertagger to NP chunking. The use of supertags in NP chunking gives rise to an absolute increase in F-score under the Transformation-Based Learning (TBL) frame. The supertagger described here provides an effective and efficient way to exploit syntactic information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Vasin Punyakanok for help on the use of SNoW in sequential inference, John Chen for help on dataset and evaluation methods and comments on the draft. We also thank Srinivas Bangalore and three anonymous reviewers for helpful comments.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tian-etal-2016-learning","url":"https:\/\/aclanthology.org\/P16-1121","title":"Learning Semantically and Additively Compositional Distributional Representations","abstract":"This paper connects a vector-based composition model to a formal semantics, the Dependency-based Compositional Semantics (DCS). We show theoretical evidence that the vector compositions in our model conform to the logic of DCS. Experimentally, we show that vector-based composition brings a strong ability to calculate similar phrases as similar vectors, achieving near state-of-the-art on a wide range of phrase similarity tasks and relation classification; meanwhile, DCS can guide building vectors for structured queries that can be directly executed. We evaluate this utility on the sentence completion task and report a new state-of-the-art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by CREST, JST. We thank the anonymous reviewers for their valuable comments.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ginzburg-etal-2021-self","url":"https:\/\/aclanthology.org\/2021.findings-acl.272","title":"Self-Supervised Document Similarity Ranking via Contextualized Language Models and Hierarchical Inference","abstract":"We present a novel model for the problem of ranking a collection of documents according to their semantic similarity to a source (query) document. While the problem of document-to-document similarity ranking has been studied, most modern methods are limited to relatively short documents or rely on the existence of \"ground-truth\" similarity labels. Yet, in most common real-world cases, similarity ranking is an unsupervised problem as similarity labels are unavailable. Moreover, an ideal model should not be restricted by documents' length. Hence, we introduce SDR, a self-supervised method for document similarity that can be applied to documents of arbitrary length. Importantly, SDR can be effectively applied to extremely long documents, exceeding the 4,096 maximal token limit of Longformer.
Extensive evaluations on large document datasets show that SDR significantly outperforms its alternatives across all metrics. To accelerate future research on unlabeled long document similarity ranking, and as an additional contribution to the community, we herein publish two human-annotated test sets for long-document similarity evaluation. The SDR code and datasets are publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"castro-ferreira-etal-2019-neural","url":"https:\/\/aclanthology.org\/D19-1052","title":"Neural data-to-text generation: A comparison between pipeline and end-to-end architectures","abstract":"Traditionally, most data-to-text applications have been designed using a modular pipeline architecture, in which non-linguistic input data is converted into natural language through several intermediate transformations. By contrast, recent neural models for data-to-text generation have been proposed as end-to-end approaches, where the non-linguistic input is rendered in natural language with much less explicit intermediate representations in between. This study introduces a systematic comparison between neural pipeline and end-to-end data-to-text approaches for the generation of text from RDF triples. Both architectures were implemented making use of the encoder-decoder Gated-Recurrent Units (GRU) and Transformer, two state-of-the-art deep learning methods. Automatic and human evaluations together with a qualitative analysis suggest that having explicit intermediate steps in the generation process results in better texts than the ones generated by end-to-end approaches. Moreover, the pipeline models generalize better to unseen inputs. Data and code are publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of the research program \"Discussion Thread Summarization for Mobile Devices\" (DISCOSUMO) which is financed by the Netherlands Organization for Scientific Research (NWO). We also acknowledge the three reviewers for their insightful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mccarthy-etal-2004-finding","url":"https:\/\/aclanthology.org\/P04-1036","title":"Finding Predominant Word Senses in Untagged Text","abstract":"In word sense disambiguation (WSD), the heuristic of choosing the most common sense is extremely powerful because the distribution of the senses of a word is often skewed. The problem with using the predominant, or first sense heuristic, aside from the fact that it does not take surrounding context into account, is that it assumes some quantity of hand-tagged data. Whilst there are a few hand-tagged corpora available for some languages, one would expect the frequency distribution of the senses of words, particularly topical words, to depend on the genre and domain of the text under consideration. We present work on the use of a thesaurus acquired from raw textual corpora and the WordNet similarity package to find predominant noun senses automatically. The acquired predominant senses give a precision of 64% on the nouns of the SENSEVAL-2 English all-words task.
This is a very promising result given that our method does not require any hand-tagged text, such as SemCor. Furthermore, we demonstrate that our method discovers appropriate predominant senses for words from two domain-specific corpora.
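To make the predominant-sense method described in the abstract above (mccarthy-etal-2004-finding) concrete, here is a minimal Python sketch: each WordNet sense of a target noun is scored by combining distributional-thesaurus similarities with a WordNet similarity measure. The neighbour list, its weights, and the choice of path similarity are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of the first-sense-from-thesaurus idea: score each WordNet
# sense of a target word by how strongly it relates to the word's
# distributional neighbours. The neighbour list and weights below are
# hypothetical stand-ins for an automatically acquired thesaurus.
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def predominant_sense(word, neighbours):
    """neighbours: list of (neighbour_word, distributional_similarity)."""
    best, best_score = None, -1.0
    for sense in wn.synsets(word, pos=wn.NOUN):
        score = 0.0
        for n_word, dist_sim in neighbours:
            # WordNet similarity between this sense and the closest sense
            # of the neighbour (path similarity as a simple choice).
            sims = [sense.path_similarity(n_sense) or 0.0
                    for n_sense in wn.synsets(n_word, pos=wn.NOUN)]
            if sims:
                score += dist_sim * max(sims)
        if score > best_score:
            best, best_score = sense, score
    return best, best_score

# Hypothetical neighbours for "star" from a distributional thesaurus.
print(predominant_sense("star", [("planet", 0.32), ("actor", 0.28), ("sun", 0.25)]))
```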
Our proposed method first anchors the appearance of entities in Wikipedia's articles using neither a Named Entity Recognizer (NER) nor a coreference resolution tool. It then classifies the relationships between entity pairs using an SVM with features extracted from the Web structure and from subtrees mined from the syntactic structure of the text. We evaluate our method on manually annotated data from actual Wikipedia articles.
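The classification step in the relation-extraction abstract above (nguyen-etal-2007-subtree) pairs mined subtree patterns with an SVM. Below is a hedged sketch of that idea using scikit-learn; the pattern strings, relation labels, and training examples are invented placeholders, not the paper's actual feature set.

```python
# A rough sketch of relation classification over mined subtree patterns:
# each pattern becomes a binary feature for a linear SVM.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def subtree_features(patterns):
    # patterns: set of subtree-pattern ids mined for one entity pair
    return {f"subtree={p}": 1 for p in patterns}

train = [({"NP<ENT1>-VP-NP<ENT2>", "VBD=founded"}, "founder"),
         ({"NP<ENT1>-PP-NP<ENT2>", "IN=in"},        "location"),
         ({"NP<ENT1>-VP-NP<ENT2>", "VBD=acquired"}, "owner")]

X = [subtree_features(p) for p, _ in train]
y = [label for _, label in train]

clf = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([subtree_features({"NP<ENT1>-VP-NP<ENT2>", "VBD=founded"})]))
```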
Detailed evaluation reveals the great potential of this approach: Translation quality can be improved substantially in two domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"baldwin-etal-2002-enhanced","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/313.pdf","title":"Enhanced Japanese Electronic Dictionary Look-up","abstract":"This paper describes the process of data preparation and reading generation for an ongoing project aimed at improving the accessibility of unknown words for learners of foreign languages, focusing initially on Japanese. Rather then requiring absolute knowledge of the readings of words in the foreign language, we allow look-up of dictionary entries by readings which learners can predictably be expected to associate with them. We automatically extract an exhaustive set of phonemic readings for each grapheme segment and learn basic morpho-phonological rules governing compound word formation, associating a probability with each. Then we apply the naive Bayes model to generate a set of readings and give each a likeliness score based on previously extracted evidence and corpus frequencies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Research Collaboration between NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University. We would particularly like to thank Prof. Nishina Kikuko of the International Student Center (TITech) for hosting the web-accessible version of the system, and Francis Bond and Christoph Neumann for providing valuable feedback at various points during this research.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nomoto-2016-neal","url":"https:\/\/aclanthology.org\/W16-1519","title":"NEAL: A Neurally Enhanced Approach to Linking Citation and Reference","abstract":"As a way to tackle Task 1A in CL-SciSumm 2016, we introduce a composite model consisting of TFIDF and Neural Network (NN), the latter being a adaptation of the embedding model originally proposed for the Q\/A domain [2, 7]. 
We discuss an experiment using development data, its results, and some remaining issues.
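The composite TFIDF-plus-neural model in the citation-linking abstract above (nomoto-2016-neal) can be pictured through its TFIDF half: rank candidate reference spans by cosine similarity to the citing sentence. A minimal sketch follows, with invented texts and the neural rescoring step omitted.

```python
# Rank candidate reference spans by TF-IDF cosine similarity to a citation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citation = "Our parser follows the shift-reduce model of Smith et al."
candidates = ["A shift-reduce parser for dependency structures.",
              "Word embeddings for question answering.",
              "A survey of summarization evaluation."]

vec = TfidfVectorizer().fit([citation] + candidates)
sims = cosine_similarity(vec.transform([citation]), vec.transform(candidates))[0]
for text, score in sorted(zip(candidates, sims), key=lambda x: -x[1]):
    print(f"{score:.3f}  {text}")
```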
Feature ablation results show the important role of target-side information in general, and of the resolved target-side antecedent in particular, for predicting the correct classes.
Furthermore, we show that language-adversarial training encourages BERT to align the embeddings of English documents and their translations, which may be the cause of the observed performance gains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Jiateng Xie, Julian Salazar and Faisal Ladhak for the helpful comments and discussions.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"faralli-etal-2020-multiple","url":"https:\/\/aclanthology.org\/2020.lrec-1.283","title":"Multiple Knowledge GraphDB (MKGDB)","abstract":"We present MKGDB, a large-scale graph database created as a combination of multiple taxonomy backbones extracted from 5 existing knowledge graphs, namely: ConceptNet, DBpedia, WebIsAGraph, WordNet and the Wikipedia category hierarchy. MKGDB, thanks the versatility of the Neo4j graph database manager technology, is intended to favour and help the development of open-domain natural language processing applications relying on knowledge bases, such as information extraction, hypernymy discovery, topic clustering, and others. Our resource consists of a large hypernymy graph which counts more than 37 million nodes and more than 81 million hypernymy relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the MIUR (Ministery of Instruction, University and Research) Project SCN 0166 SMARTOUR.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aziz-etal-2010-learning","url":"https:\/\/aclanthology.org\/2010.eamt-1.31","title":"Learning an Expert from Human Annotations in Statistical Machine Translation: the Case of Out-of-Vocabulary Words","abstract":"We present a general method for incorporating an \"expert\" model into a Statistical Machine Translation (SMT) system, in order to improve its performance on a particular \"area of expertise\", and apply this method to the specific task of finding adequate replacements for Out-of-Vocabulary (OOV) words. Candidate replacements are paraphrases and entailed phrases, obtained using monolingual resources. These candidate replacements are transformed into \"dynamic biphrases\", generated at decoding time based on the context of each source sentence. Standard SMT features are enhanced with a number of new features aimed at scoring translations produced by using different replacements. Active learning is used to discriminatively train the model parameters from human assessments of the quality of translations. The learning framework yields an SMT system which is able to deal with sentences containing OOV words but also guarantees that the performance is not degraded for input sentences without OOV words. Results of experiments on English-French translation show that this method outperforms previous work addressing OOV words in terms of acceptability.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the ICT Programme of the European Community, under the PASCAL-2 Network of Excellence, ICT-216886. We thank Binyam Gebrekidan Gebre and Ibrahim Soumana for performing the annotations and the anonymous reviewers for their useful comments. 
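One way to probe the embedding-alignment claim in the abstract above (keung-etal-2019-adversarial) is to compare the mean cosine similarity between documents and their translations before and after adversarial training. A schematic numpy sketch, with random arrays standing in for real encoder outputs:

```python
# Compare how well document/translation embedding pairs align under two
# hypothetical models; the arrays are placeholders for pooled BERT outputs.
import numpy as np

def mean_cosine(a, b):
    """Mean cosine similarity between row-aligned embedding pairs."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

rng = np.random.default_rng(0)
en_docs = rng.normal(size=(100, 768))                                # English docs
trans_base = en_docs + rng.normal(scale=1.0, size=(100, 768))        # baseline model
trans_adv = en_docs + rng.normal(scale=0.3, size=(100, 768))         # after adversarial training

print("baseline alignment:   ", mean_cosine(en_docs, trans_base))
print("adversarial alignment:", mean_cosine(en_docs, trans_adv))
```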
This publication only reflects the authors' views.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"basta-etal-2019-evaluating","url":"https:\/\/aclanthology.org\/W19-3805","title":"Evaluating the Underlying Gender Bias in Contextualized Word Embeddings","abstract":"Gender bias is highly impacting natural language processing applications. Word embeddings have clearly been proven both to keep and amplify gender biases that are present in current data sources. Recently, contextualized word embeddings have enhanced previous word embedding techniques by computing word vector representations dependent on the sentence they appear in. In this paper, we study the impact of this conceptual change in the word embedding computation in relation with gender bias. Our analysis includes different measures previously applied in the literature to standard word embeddings. Our findings suggest that contextualized word embeddings are less biased than standard ones even when the latter are debiased.","label_nlp4sg":1,"task":["Evaluating the Underlying Gender Bias"],"method":["Contextualized Word Embeddings"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"We want to thank Hila Gonen for her support during our research.This work is supported in part by the Catalan Agency for Management of University and Research Grants (AGAUR) through the FI PhD Scholarship and the Industrial PhD Grant. This work is also supported in part by the Spanish Ministerio de Economa y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigacin, through the postdoctoral senior grant Ramn y Cajal, contract TEC2015-69266-P (MINECO\/FEDER,EU) and contract PCIN-2017-079 (AEI\/MINECO).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fenogenova-2021-russian","url":"https:\/\/aclanthology.org\/2021.bsnlp-1.2","title":"Russian Paraphrasers: Paraphrase with Transformers","abstract":"This paper focuses on generation methods for paraphrasing in the Russian language. There are several transformer-based models (Russian and multilingual) trained on a collected corpus of paraphrases. We compare different models, contrast the quality of paraphrases using different ranking methods and apply paraphrasing methods in the context of augmentation procedure for different tasks. The contributions of the work are the combined paraphrasing dataset, fine-tuned generated models for Russian paraphrasing task and additionally the open source tool for simple usage of the paraphrasers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fukumoto-suzuki-2006-using","url":"https:\/\/aclanthology.org\/P06-2030","title":"Using Bilingual Comparable Corpora and Semi-supervised Clustering for Topic Tracking","abstract":"We address the problem dealing with skewed data, and propose a method for estimating effective training stories for the topic tracking task. For a small number of labelled positive stories, we extract story pairs which consist of positive and its associated stories from bilingual comparable corpora. 
To handle the large number of labelled negative stories, we group them into clusters using k-means with EM. Results on the TDT corpora show the effectiveness of the method.
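The clustering step in the topic-tracking abstract above (fukumoto-suzuki-2006-using) groups the labelled negative stories with EM-based clustering. A minimal sketch under assumed preprocessing (TF-IDF plus SVD; the story texts are invented and a Gaussian mixture fitted by EM stands in for the paper's exact setup):

```python
# Cluster a pool of negative training stories so each cluster can be
# sampled or weighted separately.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import GaussianMixture

negative_stories = ["stock markets fell sharply on monday",
                    "the senate passed the budget bill",
                    "heavy rain caused flooding in the region",
                    "the central bank raised interest rates",
                    "storm warnings were issued for the coast",
                    "parliament debated the new tax law"]

X = TfidfVectorizer().fit_transform(negative_stories)
X = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)  # densify for the GMM
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
for story, c in zip(negative_stories, labels):
    print(c, story)
```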
However, the additional pretraining of the BERT model is difficult because it requires significant computing resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was supported by JSPS KAKENHI Grant Number JP19K12093 and ROIS NII Open Collaborative Research 21FC05.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jin-etal-2021-seed","url":"https:\/\/aclanthology.org\/2021.naacl-srw.14","title":"Seed Word Selection for Weakly-Supervised Text Classification with Unsupervised Error Estimation","abstract":"Weakly-supervised text classification aims to induce text classifiers from only a few userprovided seed words. The vast majority of previous work assumes high-quality seed words are given. However, the expert-annotated seed words are sometimes non-trivial to come up with. Furthermore, in the weakly-supervised learning setting, we do not have any labeled document to measure the seed words' efficacy, making the seed word selection process \"a walk in the dark\". In this work, we remove the need for expert-curated seed words by first mining (noisy) candidate seed words associated with the category names. We then train interim models with individual candidate seed words. Lastly, we estimate the interim models' error rate in an unsupervised manner. The seed words that yield the lowest estimated error rates are added to the final seed word set. A comprehensive evaluation of six binary classification tasks on four popular datasets demonstrates that the proposed method outperforms a baseline using only category name seed words and obtained comparable performance as a counterpart using expert-annotated seed words 1 .","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"YJ was supported by the scholarship from 'The 100th Anniversary Chulalongkorn University Fund for Doctoral Scholarship'. We thank anonymous reviewers for their valuable feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lewis-1997-mt","url":"https:\/\/aclanthology.org\/1997.mtsummit-papers.20","title":"MT as a Commercial Service: Three Case Studies","abstract":"This paper presents three cases studies showing the considerably different uses customers make of our Dutch-English MT service.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hahn-schulz-2002-towards","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/36.pdf","title":"Towards Very Large Ontologies for Medical Language Processing","abstract":"We describe an ontology engineering methodology by which conceptual knowledge is extracted from an informal medical thesaurus (UMLS) and automatically converted into a formal description logics system. 
Our approach consists of four steps: concept definitions are automatically generated from the UMLS source, integrity checking of taxonomic and partonomic hierarchies is performed by the terminological classifier, cycles and inconsistencies are eliminated, and incremental refinement of the evolving knowledge base is performed by a domain expert. We report on experiments with a knowledge base composed of 164,000 concepts and 76,000 relations.","label_nlp4sg":1,"task":["Medical Language Processing"],"method":["ontology engineering methodology"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tkachenko-sirts-2018-modeling","url":"https:\/\/aclanthology.org\/K18-1036","title":"Modeling Composite Labels for Neural Morphological Tagging","abstract":"Neural morphological tagging has been regarded as an extension to POS tagging task, treating each morphological tag as a monolithic label and ignoring its internal structure. We propose to view morphological tags as composite labels and explicitly model their internal structure in a neural sequence tagger. For this, we explore three different neural architectures and compare their performance with both CRF and simple neural multiclass baselines. We evaluate our models on 49 languages and show that the neural architecture that models the morphological labels as sequences of morphological category values performs significantly better than both baselines establishing state-of-the-art results in morphological tagging for most languages. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Estonian Research Council (grants no. 2056, 1226 and IUT34-4).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"santamaria-etal-2003-automatic","url":"https:\/\/aclanthology.org\/J03-3006","title":"Automatic Association of Web Directories with Word Senses","abstract":"We describe an algorithm that combines lexical information (from WordNet 1.7) with Web directories (from the Open Directory Project) to associate word senses with such directories. Such associations can be used as rich characterizations to acquire sense-tagged corpora automatically, cluster topically related senses, and detect sense specializations. The algorithm is evaluated for the 29 nouns (147 senses) used in the Senseval 2 competition, obtaining 148 (word sense, Web directory) associations covering 88% of the domain-specific word senses in the test data with 86% accuracy. The richness of Web directories as sense characterizations is evaluated in a supervised word sense disambiguation task using the Senseval 2 test suite. The results indicate that, when the directory\/word sense association is correct, the samples automatically acquired from the Web directories are nearly as valid for training as the original Senseval 2 training instances. 
The results support our hypothesis that Web directories are a rich source of lexical information: cleaner, more reliable, and more structured than the full Web as a corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the Spanish government through project Hermes (TIC2000-0335-C03-01).","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"engel-etal-2002-parsing","url":"https:\/\/aclanthology.org\/W02-1007","title":"Parsing and Disfluency Placement","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mirzaee-etal-2021-spartqa","url":"https:\/\/aclanthology.org\/2021.naacl-main.364","title":"SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning","abstract":"This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LM). We propose a distant supervision method to improve on this task. Specifically, we design grammar and reasoning rules to automatically generate a spatial description of visual scenes and corresponding QA pairs. Experiments show that further pretraining LMs on these automatically generated data significantly improves LMs' capability on spatial understanding, which in turn helps to better solve two external datasets, bAbI, and boolQ. We hope that this work can foster investigations into more sophisticated models for spatial reasoning over text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project is supported by National Science Foundation (NSF) CAREER award #2028626 and (partially) supported by the Office of Naval Research grant #N00014-20-1-2005. We thank the reviewers for their helpful comments to improve this paper and Timothy Moran for his help in the human data generation.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2013-feature","url":"https:\/\/aclanthology.org\/D13-1117","title":"Feature Noising for Log-Linear Structured Prediction","abstract":"NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. 
Applied to text classification and NER, our method provides a >1% absolute performance gain over the use of standard L2 regularization.
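The feature-noising regularizer in the abstract above (wang-etal-2013-feature) can be illustrated with a toy logistic-regression trainer that perturbs inputs with Gaussian noise at each step; note the paper's second-order approximation avoids generating such noise explicitly, so this shows only the naive "fake data" view it reinterprets. Data and hyperparameters are arbitrary.

```python
# Toy demonstration: adding Gaussian noise to features during SGD acts as a
# regularizer, shrinking weights much like an explicit L2 penalty would.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

def train(noise_sd=0.0, epochs=200, lr=0.1):
    w = np.zeros(5)
    for _ in range(epochs):
        X_noisy = X + rng.normal(scale=noise_sd, size=X.shape)  # feature noising
        p = 1.0 / (1.0 + np.exp(-(X_noisy @ w)))
        w -= lr * X_noisy.T @ (p - y) / len(y)                  # logistic-loss gradient step
    return w

print("no noise:", np.round(train(0.0), 2))
print("noised:  ", np.round(train(0.5), 2))  # weights shrink toward zero
```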
The results of these experiments show that (i) the selectional preference model overgeneralizes over arguments for the purpose of a pseudo-disambiguation task and that (ii) pseudo-disambiguation should not be used as a universal indicator for the quality of a model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to Helmut Schmid and Sabine Schulte im Walde for supervising my Studienarbeit and to Jason Utt for his useful comments.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bizzoni-lappin-2018-predicting","url":"https:\/\/aclanthology.org\/W18-0906","title":"Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks","abstract":"We propose a new annotated corpus for metaphor interpretation by paraphrase, and a novel DNN model for performing this task. Our corpus consists of 200 sets of 5 sentences, with each set containing one reference metaphorical sentence, and four ranked candidate paraphrases. Our model is trained for a binary classification of paraphrase candidates, and then used to predict graded paraphrase acceptability. It reaches an encouraging 75% accuracy on the binary classification task, and high Pearson (.75) and Spearman (.68) correlations on the gradient judgment prediction task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to our colleagues in the Centre for Linguistic Theory and Studies in Probability (CLASP), FLoV, at the University of Gothenburg for useful discussion of some of the ideas presented in this paper, and to three anonymous reviewers for helpful comments on an earlier draft. The research reported here was done at CLASP, which is supported by a 10 year research grant (grant 2014-39) from the Swedish Research Council.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yamada-etal-2021-transformer","url":"https:\/\/aclanthology.org\/2021.emnlp-main.335","title":"Transformer-based Lexically Constrained Headline Generation","abstract":"This paper explores a variant of automatic headline generation methods, where a generated headline is required to include a given phrase such as a company or a product name. Previous methods using Transformer-based models generate a headline including a given phrase by providing the encoder with additional information corresponding to the given phrase. However, these methods cannot always include the phrase in the generated headline. Inspired by previous RNN-based methods generating token sequences in backward and forward directions from the given phrase, we propose a simple Transformerbased method that guarantees to include the given phrase in the high-quality generated headline. We also consider a new headline generation strategy that takes advantage of the controllable generation order of Transformer. Our experiments with the Japanese News Corpus demonstrate that our methods, which are guaranteed to include the phrase in the generated headline, achieve ROUGE scores comparable to previous Transformer-based methods. 
We also show that our generation strategy performs better than previous strategies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"takanobu-etal-2019-guided","url":"https:\/\/aclanthology.org\/D19-1010","title":"Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog","abstract":"Dialog policy decides what and how a taskoriented dialog system will respond, and plays a vital role in delivering effective conversations. Many studies apply Reinforcement Learning to learn a dialog policy with the reward function which requires elaborate design and pre-specified user goals. With the growing needs to handle complex goals across multiple domains, such manually designed reward functions are not affordable to deal with the complexity of real-world tasks. To this end, we propose Guided Dialog Policy Learning, a novel algorithm based on Adversarial Inverse Reinforcement Learning for joint reward estimation and policy optimization in multi-domain task-oriented dialog. The proposed approach estimates the reward signal and infers the user goal in the dialog sessions. The reward estimator evaluates the state-action pairs so that it can guide the dialog policy at each dialog turn. Extensive experiments on a multi-domain dialog dataset show that the dialog policy guided by the learned reward function achieves remarkably higher task success than state-of-the-art baselines. * Corresponding author U: I'm looking for a hotel to stay that has 5 stars and cheap price range. S: I am sorry that there is no such hotel, would you like to reserve a 3-star hotel as an alternative? U: I'd prefer a 4-star hotel even if it's a bit expensive. Oh, and I need parking. S: OK, I find a moderately priced 4-star hotel that includes parking and free wifi. U: Are there any places to eat around it? S: Many. Japanese, Indian, French, etc. What kind of food would you like?","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation of China (Grant No. 61936010 \/ 61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support, anonymous reviewers for their valuable suggestions, and our lab mate Qi Zhu for helpful discussions. The code is available at https: \/\/github.com\/truthless11\/GDPL.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bies-etal-2005-parallel","url":"https:\/\/aclanthology.org\/W05-0304","title":"Parallel Entity and Treebank Annotation","abstract":"We describe a parallel annotation approach for PubMed abstracts. It includes both entity\/relation annotation and a treebank containing syntactic structure, with a goal of mapping entities to constituents in the treebank. 
Crucial to this approach is a modification of the Penn Treebank guidelines and the characterization of entities as relation components, which allows the integration of the entity annotation with the syntactic structure while retaining the capacity to annotate and extract more complex events.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The project described in this paper is based at the Institute for Research in Cognitive Science at the University of Pennsylvania and is supported by grant EIA-0205448 from the National Science Foundation's Information Technology Research (ITR) program. We would like to thank Yang Jin, Mark Liberman, Eric Pancoast, Colin Warner, Peter White, and Scott Winters for their comments and assistance, as well as the invaluable feedback of all the annotators listed at http:\/\/bioie.ldc.upenn.edu\/ index.jsp?page=aboutus.html.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weisweber-preuss-1992-direct","url":"https:\/\/aclanthology.org\/C92-4174","title":"Direct Parsing With Metarules","abstract":"In this paper we argue for the direct application of metarules ill the parsing prlx;ess and intrurluce a slight restriction on metarules. This restriction relies ml theoretical results alxmt the ternfiluation of term-rewrite systems and does not retinue tile expressive power of metarules as much as previous restrictions. We prove the termination for a ~t of metarnles used in our German gramnlar and show [low nletarules can be integrated into the parer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"narayan-gardent-2012-error","url":"https:\/\/aclanthology.org\/C12-1123","title":"Error Mining with Suspicion Trees: Seeing the Forest for the Trees","abstract":"In recent years, error mining approaches have been proposed to identify the most likely sources of errors in symbolic parsers and generators. However the techniques used generate a flat list of suspicious forms ranked by decreasing order of suspicion. We introduce a novel algorithm that structures the output of error mining into a tree (called, suspicion tree) highlighting the relationships between suspicious forms. We illustrate the impact of our approach by applying it to detect and analyse the most likely sources of failure in surface realisation; and we show how the suspicion tree built by our algorithm helps presenting the errors identified by error mining in a linguistically meaningful way thus providing better support for error analysis. 
The right frontier of the tree highlights the relative importance of the main error cases while the subtrees of a node indicate how a given error case divides into smaller more specific cases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments The research presented in this paper was partially supported by the European Fund for Regional Development within the framework of the INTERREG IV A Allegro Project.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liang-etal-2021-evaluation","url":"https:\/\/aclanthology.org\/2021.sigdial-1.5","title":"Evaluation of In-Person Counseling Strategies To Develop Physical Activity Chatbot for Women","abstract":"Artificial intelligence chatbots are the vanguard in technology-based intervention to change people's behavior. To develop intervention chatbots, the first step is to understand natural language conversation strategies in human conversation. This work introduces an intervention conversation dataset collected from a real-world physical activity intervention program for women. We designed comprehensive annotation schemes in four dimensions (domain, strategy, social exchange, and taskfocused exchange) and annotated a subset of dialogs. We built a strategy classifier with context information to detect strategies from both trainers and participants based on the annotation. To understand how human intervention induces effective behavior changes, we analyzed the relationships between the intervention strategies and the participants' changes in the barrier and social support for physical activity. We also analyzed how participant's baseline weight correlates to the amount of occurrence of the corresponding strategy. This work lays the foundation for developing a personalized physical activity intervention bot. 1","label_nlp4sg":1,"task":["Evaluation of In - Person Counseling Strategies"],"method":["annotation schemes","strategy classifier"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This project was supported by grant R01HL104147 from the National Heart, Lung, and Blood Institute and by the American Heart Association, grant K24NR015812 from the National Institute of Nursing Research, and grant (RAP Team Science Award) from the University of California, San Francisco. The study sponsors had no role in the study design; collection, analysis, or interpretation of data; writing of the report; or decision to submit the report for publication. We also thank Ms. Kiley Charbonneau for her assistance with data management and annotations.","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marx-schuth-2010-dutchparl","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/263_Paper.pdf","title":"DutchParl. The Parliamentary Documents in Dutch","abstract":"A corpus called DutchParl is created which aims to contain all digitally available parliamentary documents written in the Dutch language. The first version of DutchParl contains documents from the parliaments of The Netherlands, Flanders and Belgium. The corpus is divided along three dimensions: per parliament, scanned or digital documents, written recordings of spoken text and others. 
The digital collection contains more than 800 million tokens, the scanned collection more than 1 billion. All documents are available as UTF-8 encoded XML files with extensive metadata in Dublin Core standard. The text itself is divided into pages which are divided into paragraphs. Every document, page and paragraph has a unique URN which resolves to a web page. Every page element in the XML files is connected to a facsimile image of that page in PDF or JPEG format. We created a viewer in which both versions can be inspected simultaneously. The corpus is available for download in several formats. The corpus can be used for corpus-linguistic and political science research, and is suitable for performing scalability tests for XML information systems.","label_nlp4sg":1,"task":["Data collection"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Maarten Marx acknowledges the financial support of the ","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"schuppler-etal-2014-grass","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/394_Paper.pdf","title":"GRASS: the Graz corpus of Read And Spontaneous Speech","abstract":"This paper provides a description of the preparation, the speakers, the recordings, and the creation of the orthographic transcriptions of the first large scale speech database for Austrian German. It contains approximately 1900 minutes of (read and spontaneous) speech produced by 38 speakers. The corpus consists of three components. First, the Conversation Speech (CS) component contains free conversations of one hour length between friends, colleagues, couples, or family members. Second, the Commands Component (CC) contains commands and keywords which were either read or elicited by pictures. Third, the Read Speech (RS) component contains phonetically balanced sentences and digits. The speech of all components has been recorded at super-wideband quality in a soundproof recording-studio with head-mounted microphones, large-diaphragm microphones, a laryngograph, and with a video camera. The orthographic transcriptions, which have been created and subsequently corrected manually, contain approximately 290 000 word tokens from 15 000 different word types.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"oger-etal-2008-local","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/193_paper.pdf","title":"Local Methods for On-Demand Out-of-Vocabulary Word Retrieval","abstract":"Most of the Web-based methods for lexicon augmenting consist in capturing global semantic features of the targeted domain in order to collect relevant documents from the Web. We suggest that the local context of the out-of-vocabulary (OOV) words contains relevant information on the OOV words. With this information, we propose to use the Web to build locally-augmented lexicons which are used in a final local decoding pass. First, an automatic web based OOV word detection method is proposed. Then, we demonstrate the relevance of the Web for the OOV word retrieval. Different methods are proposed to retrieve the hypothesis words. 
Using the reference context, we ultimately retrieve about 26% of the OOV words while increasing the lexicon by fewer than 1,000 words.
Experimental results show that our work is superior to the baselines in Chinese-Korean and Korean-Chinese translation tasks, which fully confirms the effectiveness of our method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work has been funded by the National Language Commission Scientific Research Project (YB135-76), the Yanbian University Foreign Language and Literature First-Class Subject Construction Project (18YLPY13).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abdul-mageed-etal-2018-tweet","url":"https:\/\/aclanthology.org\/L18-1577","title":"You Tweet What You Speak: A City-Level Dataset of Arabic Dialects","abstract":"Arabic has a wide range of varieties or dialects. Although a number of pioneering works have targeted some Arabic dialects, other dialects remain largely without investigation. A serious bottleneck for studying these dialects is the lack of any data that can be exploited in computational models. In this work, we aim to bridge this gap: We present a considerably large dataset of > 1\/4 billion tweets representing a wide range of dialects. Our dataset is more nuanced than previously reported work in that it is labeled at the fine-grained level of city. More specifically, the data represent 29 major Arab cities from 10 Arab countries with varying dialects (e.g., Egyptian, Gulf, KSA, Levantine, Yemeni).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"silverman-1990-microphone","url":"https:\/\/aclanthology.org\/H90-1083","title":"A Microphone-Array System for Speech Recognition Input","abstract":"This project is concerned with underlying mathematical algorithms, acoustics, hardware, and software to gain an understanding about, demonstrate the principles of, and, ultimately, to build an appropriate microphone-array system for speech-recognition input.\nApproach: The approach taken might be called \"recursive build-and-study\". After investigating the layout problem and potential DSP architectures, we developed our first system. This allowed us to investigate real data and learn the real issues, begin to understand the difficult acoustics problems, and develop better DSP designs. This process is being repeated.\nRecent Accomplishments: A new, nonlinear optimization algorithm called Stochastic Region Contraction (SRC) has been developed and has been applied to the microphone placement problem, talker location, and talker characterization. We have found that SRC is nearly two orders of magnitude faster than simulated annealing.
Our current research array system has been \"hardened\", and real-time, time-domain beamforming is operational.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"markchom-etal-2020-uor","url":"https:\/\/aclanthology.org\/2020.semeval-1.52","title":"UoR at SemEval-2020 Task 4: Pre-trained Sentence Transformer Models for Commonsense Validation and Explanation","abstract":"The SemEval Task 4 Commonsense Validation and Explanation Challenge is to validate whether a system can differentiate natural language statements that make sense from those that do not make sense. Two subtasks, A and B, are the focus of this work, i.e., detecting against-common-sense statements and selecting explanations of why they are false from the given options. Intuitively, commonsense validation requires additional knowledge beyond the given statements. Therefore, we propose a system utilising pre-trained sentence transformer models based on BERT, RoBERTa and DistillBERT architectures to embed the statements before classification. According to the results, these embeddings can improve the performance of the typical MLP and LSTM classifiers as downstream models of both subtasks compared to regular tokenised statements. These embedded statements are shown to comprise additional information from external resources which help validate common sense in natural language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ledbetter-dickinson-2015-automatic","url":"https:\/\/aclanthology.org\/W15-0604","title":"Automatic morphological analysis of learner Hungarian","abstract":"In this paper, we describe a morphological analyzer for learner Hungarian, built upon limited grammatical knowledge of Hungarian. The rule-based analyzer requires very few resources and is flexible enough to do both morphological analysis and error detection, in addition to some unknown word handling. As this is work-in-progress, we demonstrate its current capabilities, some areas where analysis needs to be improved, and an initial foray into how the system output can support the analysis of interlanguage grammars.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the participants of the IU CL discussion group, as well as the three anonymous reviewers, for their many helpful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lemon-liu-2006-dude","url":"https:\/\/aclanthology.org\/E06-2004","title":"DUDE: A Dialogue and Understanding Development Environment, Mapping Business Process Models to Information State Update Dialogue Systems","abstract":"We demonstrate a new development environment for \"Information State Update\" dialogue systems which allows non-expert developers to produce complete spoken dialogue systems based only on a Business Process Model (BPM) describing their application (e.g. banking, cinema booking, shopping, restaurant information).
The environment includes automatic generation of Grammatical Framework (GF) grammars for robust interpretation of spontaneous speech, and uses application databases to generate lexical entries and grammar rules. The GF grammar is compiled to an ATK or Nuance language model for speech recognition. The demonstration system allows users to create and modify spoken dialogue systems, starting with a definition of a Business Process Model and ending with a working system. This paper describes the environment, its main components, and some of the research issues involved in its development.","label_nlp4sg":1,"task":["Mapping Business Process Models"],"method":["Dialogue and Understanding Development Environment"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fierro-etal-2017-200k","url":"https:\/\/aclanthology.org\/W17-5101","title":"200K+ Crowdsourced Political Arguments for a New Chilean Constitution","abstract":"In this paper we present the dataset of 200,000+ political arguments produced in the local phase of the 2016 Chilean constitutional process. We describe the human processing of this data by the government officials, and the manual tagging of arguments performed by members of our research group. Afterwards we focus on classification tasks that mimic the human processes, comparing linear methods with neural network architectures. The experiments show that some of the manual tasks are suitable for automatization. In particular, the best methods achieve a 90% top-5 accuracy in a multiclass classification of arguments, and 65% macro-averaged F1-score for tagging arguments according to a three-part argumentation model.","label_nlp4sg":1,"task":["Data collection"],"method":["manual tagging"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, Camilo Garrido, and Miguel Campusano for their helpful comments. We also thank Pamela Figueroa Rubio from the Ministry General Secretariat of the Presidency of Chile and Rodrigo Marquez from the United Nations Development Program for their help in the analysis process. Fierro, P\u00e9rez and Quezada are supported by the Millennium Nucleus Center for Semantic Web Research, Grant NC120004. Quezada is also supported by CON-ICYT under grant PCHA\/Doctorado Nacional 2015\/21151445.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"nn-1977-finite-string-volume-14-number-7","url":"https:\/\/aclanthology.org\/J77-4005","title":"The FINITE STRING, Volume 14, Number 7","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1977,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2016-role","url":"https:\/\/aclanthology.org\/W16-3617","title":"The Role of Discourse Units in Near-Extractive Summarization","abstract":"Although human-written summaries of documents tend to involve significant edits to the source text, most automated summarizers are extractive and select sentences verbatim. 
In this work we examine how elementary discourse units (EDUs) from Rhetorical Structure Theory can be used to extend extractive summarizers to produce a wider range of human-like summaries. Our analysis demonstrates that EDU segmentation is effective in preserving human-labeled summarization concepts within sentences and also aligns with near-extractive summaries constructed by news editors. Finally, we show that using EDUs as units of content selection instead of sentences leads to stronger summarization performance in near-extractive scenarios, especially under tight budgets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"roller-stevenson-2014-applying","url":"https:\/\/aclanthology.org\/W14-1112","title":"Applying UMLS for Distantly Supervised Relation Detection","abstract":"This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.","label_nlp4sg":1,"task":["relation extraction"],"method":["Distantly Supervised"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to the Engineering and Physical Sciences Research Council for supporting the work described in this paper (EP\/J008427\/1).","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"falkenjack-etal-2017-services","url":"https:\/\/aclanthology.org\/W17-0244","title":"Services for text simplification and analysis","abstract":"We present a language technology service for web editors' work on making texts easier to understand, including tools for text complexity analysis, text simplification and text summarization. We also present a text analysis service focusing on measures of text complexity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is financed by Internetfonden, Vinnova and SweClarin.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gillis-webber-tittel-2020-framework","url":"https:\/\/aclanthology.org\/2020.lrec-1.408","title":"A Framework for Shared Agreement of Language Tags beyond ISO 639","abstract":"The identification and annotation of languages in an unambiguous and standardized way is essential for the description of linguistic data. It is the prerequisite for machine-based interpretation, aggregation, and re-use of the data with respect to different languages. This makes it a key aspect especially for Linked Data and the multilingual Semantic Web. The standard for language tags is defined by IETF's BCP 47 and ISO 639 provides the language codes that are the tags' main constituents. However, for the identification of lesser-known languages, endangered languages, regional varieties or historical stages of a language, the ISO 639 codes are insufficient. 
Also, the optional language sub-tags compliant with BCP 47 do not offer possibilities fine-grained enough to represent linguistic variation. We propose a versatile pattern that extends the BCP 47 sub-tag privateuse and is, thus, able to overcome the limits of BCP 47 and ISO 639. Sufficient coverage of the pattern is demonstrated with the use case of linguistic Linked Data of the endangered Gascon language. We show how to use a URI shortcode for the extended sub-tag, making the length compliant with BCP 47. We achieve this with a web application and API developed to encode and decode the language tag.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work of Frances Gillis-Webber was financially supported by Hasso Plattner Institute for Digital Engineering.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kocon-janz-2019-propagation","url":"https:\/\/aclanthology.org\/2019.gwc-1.43","title":"Propagation of emotions, arousal and polarity in WordNet using Heterogeneous Structured Synset Embeddings","abstract":"In this paper we present a novel method for emotive propagation in a wordnet based on a large emotive seed. We introduce a sense-level emotive lexicon annotated with polarity, arousal and emotions. The data were annotated as a part of a large study involving over 20,000 participants. A total of 30,000 lexical units in Polish WordNet were described with metadata; each unit received about 50 annotations concerning polarity, arousal and 8 basic emotions, marked on a multilevel scale. We present a preliminary approach to propagating emotive metadata to unlabeled lexical units based on the distribution of manual annotations using logistic regression and description of mixed synset embeddings based on our Heterogeneous Structured Synset Embeddings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Funded by National Centre for Research and Development, Poland, under grant \"Sentimenti - emotions analyzer in the written word\" no POIR.01.01.01-00-0472\/16.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-noord-2009-learning","url":"https:\/\/aclanthology.org\/E09-1093","title":"Learning Efficient Parsing","abstract":"A corpus-based technique is described to improve the efficiency of wide-coverage high-accuracy parsers. By keeping track of the derivation steps which lead to the best parse for a very large collection of sentences, the parser learns which parse steps can be filtered without significant loss in parsing accuracy, but with an important increase in parsing efficiency.
An interesting characteristic of our approach is that it is self-learning, in the sense that it uses unannotated corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out in part in the context of the STEVIN programme which is funded by the Dutch and Flemish governments (http:\/\/taalunieversum.org\/taal\/technologie\/stevin\/).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sarkar-zeman-2000-automatic","url":"https:\/\/aclanthology.org\/C00-2100","title":"Automatic Extraction of Subcategorization Frames for Czech","abstract":"We present some novel machine learning techniques for the identification of subcategorization information for verbs in Czech. We compare three different statistical techniques applied to this problem. We show how the learning algorithm can be used to discover previously unknown subcategorization frames from the Czech Prague Dependency Treebank. The algorithm can then be used to label dependents of a verb in the Czech treebank as either arguments or adjuncts. Using our techniques, we are able to achieve 88% precision on unseen parsed text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lo-etal-2015-improving","url":"https:\/\/aclanthology.org\/W15-3056","title":"Improving evaluation and optimization of MT systems against MEANT","abstract":"We show that, consistent with MEANT-tuned systems that translate into Chinese, MEANT-tuned MT systems that translate into English also outperform BLEU-tuned systems across commonly used MT evaluation metrics, even in BLEU. The result is achieved by significantly improving MEANT's sentence-level ranking correlation with human preferences through incorporating a more accurate distributional semantic model for lexical similarity and a novel backoff algorithm for evaluating MT output which the automatic semantic parser fails to parse. The surprising result of MEANT-tuned systems having a higher BLEU score than BLEU-tuned systems suggests that MEANT is a more accurate objective function guiding the development of MT systems towards producing more adequate translations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under BOLT contract nos. HR0011-12-C-0014 and HR0011-12-C-0016, and GALE contract nos. HR0011-06-C-0022 and HR0011-06-C-0023; by the European Union under the FP7 grant agreement no. 287658; and by the Hong Kong Research Grants Council (RGC) research grants GRF620811, GRF621008, and GRF612806.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, the EU, or RGC.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xie-etal-2015-reducing","url":"https:\/\/aclanthology.org\/P15-2101","title":"Reducing infrequent-token perplexity via variational corpora","abstract":"The recurrent neural network (RNN) is recognized as a powerful language model (LM). We investigate its performance portfolio in more depth: it performs well on frequent grammatical patterns but much less so on less frequent terms. Such a portfolio is expected and desirable in applications like autocomplete, but is less useful in social content analysis where many creative, unexpected usages occur (e.g., URL insertion). We adapt a generic RNN model and show that, with variational training corpora and epoch unfolding, the model improves its performance for the task of URL insertion suggestions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"flek-2020-returning","url":"https:\/\/aclanthology.org\/2020.acl-main.700","title":"Returning the N to NLP: Towards Contextually Personalized Classification Models","abstract":"Most NLP models today treat language as universal, even though socio- and psycholinguistic research shows that the communicated message is influenced by the characteristics of the speaker as well as the target audience. This paper surveys the landscape of personalization in natural language processing and related fields, and offers a path forward to mitigate the decades of deviation of the NLP tools from sociolinguistic findings, allowing the tools to flexibly process the \"natural\" language of each user rather than enforcing a uniform NLP treatment. It outlines a possible direction to incorporate these aspects into neural NLP models by means of socially contextual personalization, and proposes to shift the focus of our evaluation strategies accordingly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2015-wikification","url":"https:\/\/aclanthology.org\/D15-1265","title":"Wikification of Concept Mentions within Spoken Dialogues Using Domain Constraints from Wikipedia","abstract":"While most previous work on Wikification has focused on written texts, this paper presents a Wikification approach for spoken dialogues. A set of analyzers is proposed to learn dialogue-specific properties along with domain knowledge of conversations from Wikipedia. Then, the analyzed properties are used as constraints for generating candidates, and the candidates are ranked to find the appropriate links.
The experimental results show that our proposed approach can significantly improve the performance of the task in human-human dialogues.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2018-incorporating","url":"https:\/\/aclanthology.org\/P18-1114","title":"Incorporating Latent Meanings of Morphological Compositions to Enhance Word Embeddings","abstract":"Traditional word embedding approaches learn semantic information at word level while ignoring the meaningful internal structures of words like morphemes. Furthermore, existing morphology-based models directly incorporate morphemes to train word embeddings, but still neglect the latent meanings of morphemes. In this paper, we explore employing the latent meanings of morphological compositions of words to train and enhance word embeddings. To this end, we propose three Latent Meaning Models (LMMs), named LMM-A, LMM-S and LMM-M respectively, which adopt different strategies to incorporate the latent meanings of morphemes during the training process. Experiments on word similarity, syntactic analogy and text classification are conducted to validate the feasibility of our models. The results demonstrate that our models outperform the baselines on five word similarity datasets. On Wordsim-353 and RG-65 datasets, our models nearly achieve 5% and 7% gains over the classic CBOW model, respectively. For the syntactic analogy and text classification tasks, our models also surpass all the baselines including a morphology-based model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to the reviewers for constructive feedback. This work was supported by the National Natural Science Foundation of China (No.61572456), the Anhui Province Guidance Funds for Quantum Communication and Quantum Computers and the Natural Science Foundation of Jiangsu Province of China (No.BK20151241).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rashkin-etal-2018-event2mind","url":"https:\/\/aclanthology.org\/P18-1043","title":"Event2Mind: Commonsense Inference on Events, Intents, and Reactions","abstract":"We investigate a new commonsense inference task: given an event described in a short free-form text (\"X drinks coffee in the morning\"), a system reasons about the likely intents (\"X wants to stay awake\") and reactions (\"X feels alert\") of the event's participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments.
We also thank xlab members at the University of Washington, Martha Palmer, Tim O'Gorman, Susan Windisch Brown, Ghazaleh Kazeminejad as well as other members at the University of Colorado at Boulder for many helpful comments for our development of the annotation pipeline. This work was supported in part by National Science Foundation Graduate Research Fellowship Program under grant DGE-1256082, NSF grant IIS-1714566, and the DARPA CwC program through ARO (W911NF-15-1-0543).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zoph-knight-2016-multi","url":"https:\/\/aclanthology.org\/N16-1004","title":"Multi-Source Neural Translation","abstract":"We build a multi-source machine translation model and train it to maximize the probability of a target English string given French and German sources. Using the neural encoder-decoder framework, we explore several combination methods and report up to +4.8 Bleu increases on top of a very strong attention-based neural translation model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was carried out with funding from DARPA (HR0011-15-C-0115) and ARL\/ARO (W911NF-10-1-0533).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nagata-2019-toward","url":"https:\/\/aclanthology.org\/D19-1316","title":"Toward a Task of Feedback Comment Generation for Writing Learning","abstract":"In this paper, we introduce a novel task called feedback comment generation, a task of automatically generating feedback comments such as a hint or an explanatory note for writing learning for non-native learners of English. There has been almost no work on this task, nor any corpus annotated with feedback comments. We have taken the first step by creating learner corpora consisting of approximately 1,900 essays where all preposition errors are manually annotated with feedback comments. We have tested three baseline methods on the dataset, showing that a simple neural retrieval-based method sets a baseline performance with an F-measure of 0.34 to 0.41. Finally, we have looked into the results to explore what modifications we need to make to achieve better performance. We have also explored problems unaddressed in this work.","label_nlp4sg":1,"task":["Feedback Comment Generation","feedback comment generation"],"method":["corpora","neural retrieval-based method"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the three anonymous reviewers for their useful comments on this paper. This work was supported by Japan Science and Technology Agency (JST), PRESTO Grant Number JPMJPR1758, Japan","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"loureiro-camacho-collados-2020-dont","url":"https:\/\/aclanthology.org\/2020.emnlp-main.283","title":"Don't Neglect the Obvious: On the Role of Unambiguous Words in Word Sense Disambiguation","abstract":"State-of-the-art methods for Word Sense Disambiguation (WSD) combine two different features: the power of pre-trained language models and a propagation method to extend the coverage of such models.
This propagation is needed as current sense-annotated corpora lack coverage of many instances in the underlying sense inventory (usually WordNet). At the same time, unambiguous words make for a large portion of all words in WordNet, while being poorly covered in existing sense-annotated corpora. In this paper, we propose a simple method to provide annotations for most unambiguous words in a large corpus. We introduce the UWA (Unambiguous Word Annotations) dataset and show how a state-of-the-art propagation-based model can use it to extend the coverage and quality of its word sense embeddings by a significant margin, improving on its original results on WSD.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ganesh-varma-2009-exploiting-structure","url":"https:\/\/aclanthology.org\/R09-1020","title":"Exploiting Structure and Content of Wikipedia for Query Expansion in the Context","abstract":"Retrieving answer-containing passages is a challenging task in Question Answering. In this paper we describe a novel query expansion method which aims to rank the answer-containing passages better. It uses content and structured information (link structure and category information) of Wikipedia to generate a set of terms semantically related to the question. As the Boolean model allows fine-grained control over query expansion, these semantically related terms are added to the original query to form an expanded Boolean query. We conducted experiments on TREC 2006 QA data. The experimental results show significant improvements of about 24.6%, 11.1% and 12.4% in precision at 1, MRR at 20 and TDRR scores respectively using our query expansion method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mazumder-etal-2019-lifelong","url":"https:\/\/aclanthology.org\/W19-5903","title":"Lifelong and Interactive Learning of Factual Knowledge in Dialogues","abstract":"Dialogue systems are increasingly using knowledge bases (KBs) storing real-world facts to help generate quality responses. However, as the KBs are inherently incomplete and remain fixed during conversation, it limits dialogue systems' ability to answer questions and to handle questions involving entities or relations that are not in the KB. In this paper, we propose an engine for Continuous and Interactive Learning of Knowledge (CILK) for dialogue systems to give them the ability to continuously and interactively learn and infer new knowledge during conversations. With more knowledge accumulated over time, they will be able to learn better and answer more questions.
Our empirical evaluation shows that CILK is promising.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by a grant from National Science Foundation (NSF IIS 1838770) and a research gift from Northrop Grumman.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"semmar-saadane-2013-using","url":"https:\/\/aclanthology.org\/I13-1139","title":"Using Transliteration of Proper Names from Arabic to Latin Script to Improve English-Arabic Word Alignment","abstract":"Bilingual lexicons of proper names play a vital role in machine translation and cross-language information retrieval. Word alignment approaches are generally used to construct bilingual lexicons automatically from parallel corpora. Aligning proper names is a particularly difficult task when the source and target languages of the parallel corpus do not share the same written script. We present in this paper a system to transliterate automatically proper names from Arabic to Latin script, and a tool to align single and compound words from English-Arabic parallel texts. We particularly focus on the impact of using transliteration to improve the performance of the word alignment tool. We have evaluated the word alignment tool integrating transliteration of proper names from Arabic to Latin script using two methods: A manual evaluation of the alignment quality and an evaluation of the impact of this alignment on the translation quality by using the open source statistical machine translation system Moses. Experiments show that integrating transliteration of proper names into the alignment process improves the F-measure of word alignment from 72% to 81% and the translation BLEU score from 20.15% to 20.63%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"moser-moore-1995-investigating","url":"https:\/\/aclanthology.org\/P95-1018","title":"Investigating Cue Selection and Placement in Tutorial Discourse","abstract":"Our goal is to identify the features that predict cue selection and placement in order to devise strategies for automatic text generation. Much previous work in this area has relied on ad hoc methods. Our coding scheme for the exhaustive analysis of discourse allows a systematic evaluation and refinement of hypotheses concerning cues. We report two results based on this analysis: a comparison of the distribution of SINCE and BECAUSE in our corpus, and the impact of embeddedness on cue selection.","label_nlp4sg":1,"task":["Investigating Cue Selection"],"method":["coding scheme"],"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":"The research described in this paper was supported by the Office of Naval Research, Cognitive and Neural Sciences Division (Grant Number: N00014-91-J-1694), and a grant from the DoD FY93 Augmentation of Awards for Science and Engineering Research Training (ASSERT) Program (Grant Number: N00014-93-I-0812).
We are grateful to Erin Glendening for her patient and careful coding and database entry, and to Maria Gordin for her reliability coding.","year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rosner-1988-semsyn","url":"https:\/\/aclanthology.org\/A88-1004","title":"The SEMSYN Generation System: Ingredients, Applications, Prospects","abstract":"We report on the current status of the SEM-SYN generation system. This system, initially implemented within a Japanese-to-German MT project, has been applied to a variety of generation tasks both within MT and text generation. We will work out how these applications enhanced the system's capacities. In addition to the paper we will give a demo of both the German and a recently implemented English version of the system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"griesel-etal-2019-thinking","url":"https:\/\/aclanthology.org\/2019.gwc-1.24","title":"Thinking globally, acting locally -- Progress in the African Wordnet Project","abstract":"The African Wordnet Project (AWN) includes all nine indigenous South African languages, namely isiZulu, isiXhosa, Setswana, Sesotho sa Leboa, Tshivenda, Siswati, Sesotho, isiNdebele and Xitsonga. The AWN currently includes 61 000 synsets as well as definitions and usage examples for a large part of the synsets. The project recently received extended funding from the South African Centre for Digital Language Resources (SADiLaR) and aims to update all aspects of the current resource, including the seed list used for new development, software tools used and mapping the AWN to the latest version of PWN 3.1. As with any resource development project, it is essential to also include phases of focused quality assurance and updating of the basis on which the resource is built. The African languages remain under-resourced. This paper describes progress made in the development of the AWN as well as recent technical improvements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The African Wordnet project (AWN) was made possible with support from the South African Centre for Digital Language Resources (SADiLaR). SADiLaR (www.sadilar.org) is a research infrastructure established by the Department of Science and Technology of the South African government as part of the South African Research Infrastructure Roadmap (SARIR).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rios-gohring-2012-tree","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/350_Paper.pdf","title":"A tree is a Baum is an árbol is a sach'a: Creating a trilingual treebank","abstract":"This paper describes the process of constructing a trilingual parallel treebank.
While for two of the involved languages, Spanish and German, there are already corpora with well-established annotation schemes available, this is not the case with the third language: Cuzco Quechua (ISO 639-3:quz), a low-resourced, non-standardized language for which we had to define a linguistically plausible annotation scheme first.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the publishers who have granted us permission to use part of Gregorio Condori's autobiography, as well as the many students who have contributed to the annotation of the Spanish and German texts. Finally, we would like to give our special thanks to our Peruvian co-workers for the translations, the annotation as well as the linguistic consulting. This research is funded by the Swiss National Science Foundation under grant 100015 132219\/1.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"skalban-etal-2012-automatic","url":"https:\/\/aclanthology.org\/C12-2112","title":"Automatic Question Generation in Multimedia-Based Learning","abstract":"We investigate whether questions generated automatically by two Natural Language Processing (NLP) based systems (one developed by the authors, the other a state-of-the-art system) can successfully be used to assist multimedia-based learning. We examine the feasibility of using a Question Generation (QG) system's output as pre-questions, with different types of pre-questions used: text-based and with images. We also compare the psychometric parameters of the automatically generated questions by the two systems and of those generated manually. Specifically, we analyse the effect such pre-questions have on test-takers' performance on a comprehension test about a scientific video documentary. We also compare the discrimination power of the questions generated automatically against that of questions generated manually. The results indicate that the presence of pre-questions (preferably with images) improves the performance of test-takers. They indicate that the psychometric parameters of the questions generated by our system are comparable to, if not better than, those of the state-of-the-art system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martinez-hinarejos-tamarit-2008-evaluation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/119_paper.pdf","title":"Evaluation of Different Segmentation Techniques for Dialogue Turns","abstract":"In dialogue systems, it is necessary to decode the user input into semantically meaningful units. These semantic units, usually Dialogue Acts (DA), are used by the system to produce the most appropriate response. The user turns can be segmented into utterances, which are meaningful segments from the dialogue viewpoint. In this case, a single DA is associated with each utterance. Many previous works have used DA assignation models on segmented dialogue corpora, but only a few have tried to perform the segmentation and assignation at the same time.
Segmentation of turns into utterances is not commonly annotated in dialogue corpora, and it would be interesting to know the quality of the segmentations provided by models that simultaneously perform segmentation and assignation. In this work, we evaluate the accuracy of the segmentation offered by this type of model. The evaluation is done on a Spanish dialogue system on a railway information task. The results reveal that one of these techniques provides a high-quality segmentation for this corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fancellu-webber-2014-applying","url":"https:\/\/aclanthology.org\/E14-1063","title":"Applying the semantics of negation to SMT through n-best list re-ranking","abstract":"Although the performance of SMT systems has improved over a range of different linguistic phenomena, negation has not yet received adequate treatment. Previous works have considered the problem of translating negative data as one of data sparsity (Wetzel and Bond (2012)) or of structural differences between source and target language with respect to the placement of negation (Collins et al. (2005)). This work starts instead from the questions of what is meant by negation and what makes a good translation of negation. These questions have led us to explore the use of semantics of negation in SMT: specifically, identifying core semantic elements of negation (cue, event and scope) in a source-side dependency parse and reranking hypotheses on the n-best list produced after decoding according to the extent to which a hypothesis realises these elements. The method shows considerable improvement over the baseline as measured by BLEU scores and Stanford's entailment-based MT evaluation metric (Pad\u00f3 et al. (2009)).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ranta-etal-2010-tools","url":"https:\/\/aclanthology.org\/P10-4012","title":"Tools for Multilingual Grammar-Based Translation on the Web","abstract":"This is a system demo for a set of tools for translating texts between multiple languages in real time with high quality. The translation works on restricted languages, and is based on semantic interlinguas. The underlying model is GF (Grammatical Framework), which is an open-source toolkit for multilingual grammar implementations. The demo will cover up to 20 parallel languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"russo-etal-2012-italian","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/813_Paper.pdf","title":"Italian and Spanish Null Subjects. A Case Study Evaluation in an MT Perspective.","abstract":"Thanks to their rich morphology, Italian and Spanish allow pro-drop pronouns, i.e., non lexically-realized subject pronouns. Here we distinguish between two different types of null subjects: personal pro-drop and impersonal pro-drop.
We evaluate the translation of these two categories into French, a non pro-drop language, using Its-2, a transfer-based system developed at our laboratory; and Moses, a statistical system. Three different corpora are used: two subsets of the Europarl corpus and a third corpus built using newspaper articles. Null subjects turn out to be quantitatively important in all three corpora, but their distribution varies depending on the language and the text genre. From an MT perspective, translation results are determined by the type of pro-drop and the pair of languages involved. Impersonal pro-drop is harder to translate than personal pro-drop, especially for the translation from Italian into French, and a significant portion of incorrect translations consists of missing pronouns.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported in part by the Swiss National Science Foundation (grant No 100015-130634). ","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marinelli-2004-proper","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/580.pdf","title":"Proper Names and Polysemy: From a Lexicographic Experience","abstract":"In the framework of the SI-TAL (Integrated Systems for the Automatic Treatment of Language) project the lexical coverage of IWN has been extended by adding, besides two grammatical categories not encoded in EWN (i.e. adjectives and adverbs), a set of proper names which are taken into consideration in this paper. This decision was also due to the high degree of incidence of proper names observed in the corpus selected within SI-TAL for semantic annotation. In this paper we refer more widely to the relations involving the pn, in particular codifying the relation between the pn and the senses (literal, derived and extended). We consider the pn as the basis for many extensions of meaning. In fact, many types of derivates and sense extensions are generated, by means of lexical rules that operate as \"generative factors\". Novel usages of a word form can be derived through productive application of a lexical rule; therefore we propose to represent these lexical rules codifying new semantic relations in the database. We want to give prominence to the polysemy of pn to confirm \"the linguistic manifestation(s) of the faculty for generative categorization and compositional thought\" (Pustejovsky, 2001).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pustejovsky-yocum-2013-capturing","url":"https:\/\/aclanthology.org\/W13-0503","title":"Capturing Motion in ISO-SpaceBank","abstract":"This paper presents the first description of the motion subcorpus of ISO-SpaceBank (MotionBank) and discusses how motion-events are represented in ISO-Space 1.5, a specification language for the representation of spatial information in language. We present data from this subcorpus with examples from the pilot annotation, focusing specifically on the annotation of motion-events and their various participants. These data inform further discussion of outstanding issues concerning semantic annotation, such as quantification and measurement.
We address these questions briefly as they impact the design of ISO-Space.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by grants from the NSF (NSF-IIS 1017765) and the NGA (NURI HM1582-08-1-0018). We would like to thank Jessica Moszkowicz, Marc Verhagen, Harry Bunt, and Kiyong Lee for their contributions to this discussion. We would also like to acknowledge the four anonymous reviewers for their helpful comments. All errors and mistakes are, of course, the responsibilities of the authors.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schuler-joshi-2011-tree","url":"https:\/\/aclanthology.org\/W11-0806","title":"Tree-Rewriting Models of Multi-Word Expressions","abstract":"Multi-word expressions (MWEs) account for a large portion of the language used in day-to-day interactions. A formal system that is flexible enough to model these large and often syntactically-rich non-compositional chunks as single units in naturally occurring text could considerably simplify large-scale semantic annotation projects, in which it would be undesirable to have to develop internal compositional analyses of common technical expressions that have specific idiosyncratic meanings. This paper will first define a notion of functor-argument decomposition on phrase structure trees analogous to graph coloring, in which the tree is cast as a graph, and the elementary structures of a grammar formalism are colors. The paper then presents a formal argument that tree-rewriting systems, a class of grammar formalism that includes Tree Adjoining Grammars, are able to produce a proper superset of the functor-argument decompositions that string-rewriting systems can produce.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jaffe-1963-simultaneous","url":"https:\/\/aclanthology.org\/1963.earlymt-1.17","title":"Simultaneous computation of lexical and extralinguistic information measures in dialogue","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1963,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kurfali-2020-travis","url":"https:\/\/aclanthology.org\/2020.mwe-1.18","title":"TRAVIS at PARSEME Shared Task 2020: How good is (m)BERT at seeing the unseen?","abstract":"This paper describes the TRAVIS system built for the PARSEME Shared Task 2020 on semi-supervised identification of verbal multiword expressions. TRAVIS is a fully feature-independent model, relying only on the contextual embeddings. We have participated with two variants of TRAVIS, TRAVIS multi and TRAVIS mono, where the former employs multilingual contextual embeddings and the latter uses monolingual ones. Our systems are ranked second and third among seven submissions in the open track, respectively.
Thorough comparison of both systems on eight languages reveals that despite the strong performance of multilingual contextual embeddings across all languages, language-specific contextual embeddings exhibit much better generalization capabilities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Johan Sjons and anonymous reviewers for their valuable comments and NVIDIA for the GPU grant.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-lee-1996-identification","url":"https:\/\/aclanthology.org\/C96-1039","title":"Identification and Classification of Proper Nouns in Chinese Texts","abstract":"Various strategies are proposed to identify and classify three types of proper nouns in Chinese texts. Clues from character, sentence and paragraph levels are employed to resolve Chinese personal names. Character, Syllable and Frequency Conditions are presented to treat transliterated personal names. To deal with organization names, keywords, prefixes, word association and parts-of-speech are applied. For fair evaluation, large-scale test data are selected from six sections of a newspaper. The precision and the recall for these three types are (88.04%, 92.56%), (50.62%, 71.93%) and (61.79%, 54.50%), respectively. When the former two types are regarded as a category, the performance becomes (81.46%, 91.22%). Compared with other approaches, our approach has better performance and our classification is automatic.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was supported in part by National Science Council, Taipei, Taiwan, R.O.C. under contract NSC83-0408-E002-019. We are also thankful for the anonymous referees' comments.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brayne-etal-2022-masked","url":"https:\/\/aclanthology.org\/2022.deelio-1.9","title":"On Masked Language Models for Contextual Link Prediction","abstract":"In the real world, many relational facts require context; for instance, a politician holds a given elected position only for a particular timespan. This context (the timespan) is typically ignored in knowledge graph link prediction tasks, or is leveraged by models designed specifically to make use of it (i.e. n-ary link prediction models). Here, we show that the task of n-ary link prediction is easily performed using language models, applied with a basic method for constructing cloze-style query sentences. We introduce a pre-training methodology based around an auxiliary entity-linked corpus that outperforms other popular pre-trained models like BERT, even with a smaller model. This methodology also enables n-ary link prediction without access to any n-ary training set, which can be invaluable in circumstances where expensive and time-consuming curation of n-ary knowledge graphs is not feasible.
We achieve state-of-the-art performance on the primary n-ary link prediction dataset WD50K and on WikiPeople facts that include literals, which are typically ignored by knowledge graph embedding methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"makhambetov-etal-2013-assembling","url":"https:\/\/aclanthology.org\/D13-1104","title":"Assembling the Kazakh Language Corpus","abstract":"This paper presents the Kazakh Language Corpus (KLC), which is one of the first attempts made within a local research community to assemble a Kazakh corpus. KLC is designed to be a large-scale corpus containing over 135 million words and conveying five stylistic genres: literary, publicistic, official, scientific and informal. Along with its primary part, KLC comprises: (i) an annotated sub-corpus, containing segmented documents encoded in the eXtensible Markup Language (XML) that marks complete morphological, syntactic, and structural characteristics of texts; and (ii) a sub-corpus with annotated speech data. KLC has a web-based corpus management system that helps to navigate the data and retrieve necessary information. KLC is also open for contributors, who are willing to make suggestions, donate texts and help with annotation of existing materials.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the Ministry of Education and Science of the Republic of Kazakhstan for supporting this work through a grant under the 055 research program. We express our gratitude to Dr. A. Sharipbayev for his valuable advice on methodology of constructing the read-speech corpus. We also would like to sincerely thank our validators and annotators: Bobek A., Asemgul R., Aidana Zh., Nazerke G., Nazym K., Ainur N., Sandughash A., Zhuldyzai S., Dinara O., Aigerim Zh. The annotation and validation work they have done helped a great deal in designing tagsets. This work would not be possible without their contribution.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-etal-2020-line","url":"https:\/\/aclanthology.org\/2020.acl-main.67","title":"Line Graph Enhanced AMR-to-Text Generation with Mix-Order Graph Attention Networks","abstract":"Efficient structure encoding for graphs with labeled edges is an important yet challenging point in many graph-based models. This work focuses on AMR-to-text generation, a graph-to-sequence task aiming to recover natural language from Abstract Meaning Representations (AMR). Existing graph-to-sequence approaches generally utilize graph neural networks as their encoders, which have two limitations: 1) The message propagation process in AMR graphs is only guided by the first-order adjacency information. 2) The relationships between labeled edges are not fully considered. In this work, we propose a novel graph encoding framework which can effectively explore the edge relations. We also adopt graph attention networks with higher-order neighborhood information to encode the rich structure in AMR graphs. Experimental results show that our approach obtains new state-of-the-art performance on English AMR benchmark datasets.
The ablation analyses also demonstrate that both edge relations and higher-order information are beneficial to graph-to-sequence modeling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their thoughtful comments. This work has been supported by the National Key Research and Development Program of China (Grant No. 2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"na-lee-2020-jbnu","url":"https:\/\/aclanthology.org\/2020.semeval-1.65","title":"JBNU at SemEval-2020 Task 4: BERT and UniLM for Commonsense Validation and Explanation","abstract":"This paper presents our contributions to the SemEval-2020 Task 4 Commonsense Validation and Explanation (ComVE) and includes the experimental results of the two Subtasks B and C of the SemEval-2020 Task 4. Our systems rely on pre-trained language models, i.e., BERT (including its variants) and UniLM, and rank 10th and 7th among 27 and 17 systems on Subtasks B and C, respectively. We analyze the commonsense ability of the existing pretrained language models by testing them on the SemEval-2020 Task 4 ComVE dataset, specifically for Subtasks B and C, the explanation subtasks with multi-choice and sentence generation, respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-tsai-2018-cross","url":"https:\/\/aclanthology.org\/N18-2054","title":"Cross-language Article Linking Using Cross-Encyclopedia Entity Embedding","abstract":"Cross-language article linking (CLAL) is the task of finding corresponding article pairs of different languages across encyclopedias. This task is a difficult disambiguation problem in which one article must be selected among several candidate articles with similar titles and contents. Existing works focus on engineering text-based or link-based features for this task, which is a time-consuming job, and some of these features are only applicable within the same encyclopedia. In this paper, we address these problems by proposing cross-encyclopedia entity embedding. Unlike other works, our proposed method does not rely on known cross-language pairs. We apply our method to CLAL between English Wikipedia and Chinese Baidu Baike. Our features improve performance relative to the baseline by 29.62%. Tested 30 times, our system achieved an average improvement of 2.76% over the current best system (26.86% over baseline), a statistically significant result.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"buitelaar-etal-2006-ontology","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/93_pdf.pdf","title":"Ontology-based Information Extraction with SOBA","abstract":"In this paper we describe SOBA, a sub-component of the SmartWeb multi-modal dialog system.
SOBA is a component for ontology-based information extraction from soccer web pages for automatic population of a knowledge base that can be used for domain-specific question answering. SOBA realizes a tight connection between the ontology, knowledge base and the information extraction component. The originality of SOBA lies in the fact that it extracts information from heterogeneous sources such as tabular structures, text and image captions in a semantically integrated way. In particular, it stores extracted information in a knowledge base, and in turn uses the knowledge base to interpret and link newly extracted information with respect to already existing entities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been supported by grant 01 IMD01 of the German Ministry of Education and Research (BMB+F) for the SmartWeb project. We would like to thank our students G\u00fcnter Ladwig, Matthias Mantel, Alexander Schutz, Nicolas Weber and Honggang Zhu for implementing parts of the system.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"besancon-rajman-2002-evaluation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/314.pdf","title":"Evaluation of a Vector Space Similarity Measure in a Multilingual Framework","abstract":"In this contribution, we propose a method that uses a multilingual framework to validate the relevance of the notion of vector-based semantic similarity between texts. The goal is to verify that vector-based semantic similarities can be reliably transferred from one language to another. More precisely, the idea is to test whether the relative positions of documents in a vector space associated with a given source language are close to the ones of their translations in the vector space associated with the target language. The experiments, carried out with both the standard Vector Space model and the more advanced DSIR model, have given very promising results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wuebker-etal-2010-training","url":"https:\/\/aclanthology.org\/P10-1049","title":"Training Phrase Translation Models with Leaving-One-Out","abstract":"Several attempts have been made to learn phrase translation probabilities for phrase-based statistical machine translation that go beyond pure counting of phrases in word-aligned training data. Most approaches report problems with overfitting. We describe a novel leaving-one-out approach to prevent overfitting that allows us to train phrase models that show improved translation performance on the WMT08 Europarl German-English task. In contrast to most previous work where phrase models were trained separately from other models used in translation, we include all components such as single word lexica and reordering models in training. Using this consistent training of phrase models we are able to achieve improvements of up to 1.4 points in BLEU.
As a side effect, the phrase table size is reduced by more than 80%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation, and also partly based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liberman-1989-text","url":"https:\/\/aclanthology.org\/H89-2024","title":"Text on Tap: the ACL\/DCI","abstract":"200,000 scientific abstracts on diverse topics (25 million words); Archives of the Challenger Commission: transcripts of depositions and hearings about the space shuttle disaster (2.5 million words); Library of America: American literary classics, 44 volumes (~130 books) promised (20 million words), 11 volumes in hand, successfully decrypted.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We have studied the growth and metabolism of Syntrophomonas wolfei in pure culture with crotonate as the energy source. S. wolfei grows in crotonate mineral salts medium without rumen fluid with cobalamin, thymine, lipoic acid and biotin added. However, after four to six transfers in this medium, growth ceases, indicating that another vitamin is required. The chemically defined medium allows large batches of S. wolfei to be grown for enzyme purification. All the enzymes involved in the oxidation of crotonyl-CoA to acetate have been detected. The pure culture of S. wolfei or coculture of S. wolfei grown with crotonate contain high activities of a crotonate: acetyl-CoA CoA-transferase activity. This activity is not detected in cocultures grown with butyrate...","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martin-etal-2020-mudoco","url":"https:\/\/aclanthology.org\/2020.lrec-1.13","title":"MuDoCo: Corpus for Multidomain Coreference Resolution and Referring Expression Generation","abstract":"This paper proposes a new dataset, MuDoCo, composed of authored dialogs between a fictional user and a system who are given tasks to perform within six task domains. These dialogs are given rich linguistic annotations by expert linguists for several types of reference mentions and named entity mentions, either of which can span multiple words, as well as for coreference links between mentions. The dialogs sometimes cross and blend domains, and the users exhibit complex task switching behavior such as re-initiating a previous task in the dialog by referencing the entities within it. The dataset contains a total of 8,429 dialogs with an average of 5.36 turns per dialog. We are releasing this dataset to encourage research in the field of coreference resolution, referring expression generation and identification within realistic, deep dialogs involving multiple domains.
To demonstrate its utility, we also propose two baseline models for the downstream tasks: coreference resolution and referring expression generation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We give thanks to Long Ma and Rebecca Silvert for their help with organizing and labeling the dataset, and to Ashish Baghudana for developing an initial prototype of the experimental coreference model we employed here. We also thank the anonymous reviewers for several helpful suggestions for related work.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"iglesias-etal-2018-accelerating","url":"https:\/\/aclanthology.org\/N18-3013","title":"Accelerating NMT Batched Beam Decoding with LMBR Posteriors for Deployment","abstract":"We describe a batched beam decoding algorithm for NMT with LMBR n-gram posteriors, showing that LMBR techniques still yield gains on top of the best recently reported results with Transformers. We also discuss acceleration strategies for deployment, and the effect of the beam size and batching on memory and speed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"magooda-litman-2021-mitigating-data","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.175","title":"Mitigating Data Scarceness through Data Synthesis, Augmentation and Curriculum for Abstractive Summarization","abstract":"This paper explores three simple data manipulation techniques (synthesis, augmentation, curriculum) for improving abstractive summarization models without the need for any additional data. We introduce a method of data synthesis with paraphrasing, a data augmentation technique with sample mixing, and curriculum learning with two new difficulty metrics based on specificity and abstractiveness. We conduct experiments to show that these three techniques can help improve abstractive summarization across two summarization models and two different small datasets. Furthermore, we show that these techniques can improve performance when applied in isolation and when combined.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported here was supported, in whole or in part, by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A180477 to the University of Pittsburgh. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education. We would like to thank the Pitt PETAL group and the anonymous reviewers for advice in improving this paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"meng-etal-2000-mandarin","url":"https:\/\/aclanthology.org\/W00-0504","title":"Mandarin-English Information (MEI): Investigating Translingual Speech Retrieval","abstract":"We describe a system which supports English text queries searching for Mandarin Chinese spoken documents. This is one of the first attempts to tightly couple speech recognition with machine translation technologies for cross-media and cross-language retrieval.
The Mandarin Chinese news audio is indexed with word and subword units by speech recognition. Translation of these multiscale units can effect cross-language information retrieval. The integrated technologies will be evaluated based on the performance of translingual speech retrieval.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Patrick Schone, Erika Grams, Fred Jelinek, Charles Wayne, Kenney Ng, John Garofolo, and ","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2018-unsupervised","url":"https:\/\/aclanthology.org\/P18-1005","title":"Unsupervised Neural Machine Translation with Weight Sharing","abstract":"Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002102, and Beijing Engineering Research Center under Grant No. Z171100002217015. We would like to thank Xu Shuang for preparing the data used in this work. Additionally, we also want to thank Jiaming Xu, Suncong Zheng and Wenfu Wang for their invaluable discussions on this work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ding-feng-2020-learning","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.421","title":"Learning to Classify Events from Human Needs Category Descriptions","abstract":"We study the problem of learning an event classifier from human needs category descriptions, which is challenging due to: (1) the use of highly abstract concepts in natural language descriptions, (2) the difficulty of choosing key concepts. To tackle these two challenges, we propose LEAPI, a zero-shot learning method that first automatically generates weak labels by instantiating high-level concepts with prototypical instances and then trains a human needs classifier with the weakly labeled data. To filter noisy concepts, we design a reinforced selection algorithm to choose high-quality concepts for instantiation. Experimental results on the human needs categorization task show that our method outperforms baseline methods, producing substantially better precision.
Figure examples (Concept \u2192 Instances \u2192 Labeled Events): Physiological Needs (\"the need for a person to obtain food, to have meals \u2026\"): food \u2192 fruit, vegetable, meat, egg, fish, \u2026; labeled events: \"I bought fruits\", \"I had eggs this morning\". Leisure Needs (\"the need for a person to have leisure activities, to enjoy art \u2026\"): leisure activities \u2192 fishing, shopping, golf; labeled events: \"I went to fishing\", \"Dad went to play golf\".","label_nlp4sg":1,"task":["Classify Events"],"method":["zero - shot learning"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Good Health and Well-Being","goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"chang-etal-2012-learning","url":"https:\/\/aclanthology.org\/P12-2026","title":"Learning to Find Translations and Transliterations on the Web","abstract":"In this paper, we present a new method for learning to find translations and transliterations on the Web for a given term. The approach involves using a small set of terms and translations to obtain mixed-code snippets from a search engine, and automatically annotating the snippets with tags and features for training a conditional random field model. At runtime, the model is used to extract translation candidates for a given term. Preliminary experiments and evaluation show that our method cleanly combines various features, resulting in a system that outperforms previous work.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2010-fast","url":"https:\/\/aclanthology.org\/C10-2081","title":"Fast-Champollion: A Fast and Robust Sentence Alignment Algorithm","abstract":"Sentence-level aligned parallel texts are important resources for a number of natural language processing (NLP) tasks and applications such as statistical machine translation and cross-language information retrieval. With the rapid growth of online parallel texts, efficient and robust sentence alignment algorithms become increasingly important. In this paper, we propose a fast and robust sentence alignment algorithm, i.e., Fast-Champollion, which employs a combination of both length-based and lexicon-based algorithms.
By optimizing the process of splitting the input bilingual texts into small fragments for alignment, Fast-Champollion, as our extensive experiments show, is 4.0 to 5.1 times as fast as current baseline methods such as Champollion (Ma, 2006) on short texts and about 39.4 times as fast on long texts, while remaining as robust as Champollion.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Boeing-Tsinghua Joint Research Project \"Robust Chinese Word Segmentation and High Performance English-Chinese Bilingual Text Alignment\".","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"basile-novielli-2015-uniba","url":"https:\/\/aclanthology.org\/S15-2099","title":"UNIBA: Sentiment Analysis of English Tweets Combining Micro-blogging, Lexicon and Semantic Features","abstract":"This paper describes the UNIBA team's participation in the Sentiment Analysis in Twitter task (Task 10) at SemEval-2015. We propose a supervised approach relying on keyword, lexicon and micro-blogging features as well as a representation of tweets in a word space.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nakayama-1994-modeling","url":"https:\/\/aclanthology.org\/A94-1004","title":"Modeling Content Identification from Document Images","abstract":"A new technique to locate content-representing words for a given document image using abstract representation of character shapes is described. A character shape code representation defined by the location of a character in a text line has been developed. Character shape code generation avoids the computational expense of conventional optical character recognition (OCR). Because character shape codes are an abstraction of standard character code (e.g., ASCII), the mapping is ambiguous. In this paper, the ambiguity is shown to be practically limited to an acceptable level. It is illustrated that: first, punctuation marks are clearly distinguished from the other characters; second, stop words are generally distinguishable from other words, because the permutations of character shape codes in function words are characteristically different from those in content words; and third, numerals and acronyms in capital letters are distinguishable from other words. With these classifications, potential content-representing words are identified, and an analysis of their distribution yields their rank.
Consequently, introducing character shape codes makes it possible to inexpensively and robustly bridge the gap between electronic documents and hardcopy documents for the purpose of content identification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author gratefully acknowledges helpful suggestions by Larry Spitz and Penni Sibun.","year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nina-alcocer-2019-haterecognizer","url":"https:\/\/aclanthology.org\/S19-2072","title":"HATERecognizer at SemEval-2019 Task 5: Using Features and Neural Networks to Face Hate Recognition","abstract":"This paper presents a detailed description of our participation in Task 5 at SemEval-2019. This task consists of classifying English and Spanish tweets that contain hate towards women or immigrants. We carried out several experiments; for a finer-grained study of the task, we analyzed different features and designed neural network architectures. Additionally, to address the lack of hate content in tweets, we include data augmentation as a technique to increase hate content in our datasets.","label_nlp4sg":1,"task":["Hate Recognition"],"method":["Neural Networks","neural networks","data augmentation"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"yao-etal-2010-pdtb","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/336_Paper.pdf","title":"PDTB XML: the XMLization of the Penn Discourse TreeBank 2.0","abstract":"The current study presents a conversion and unification of the Penn Discourse TreeBank 2.0 under the XML format. The converted corpus allows for a simultaneous search for syntactically specified discourse information on the basis of the XQuery standard.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project was accomplished as a part of our studies within the Erasmus Mundus European Masters Program in Language and Communication Technologies. The authors are very grateful to Gosse Bouma, Gisela Redeker, Jennifer Spenader, and the students in the PDTB research group at the University of Groningen, The Netherlands, for their supervision, inspiring interest and valuable suggestions. We would like to thank the anonymous reviewers for the careful reading and very useful comments.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jin-etal-2020-hooks","url":"https:\/\/aclanthology.org\/2020.acl-main.456","title":"Hooks in the Headline: Learning to Generate Headlines with Controlled Styles","abstract":"Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers.
With no style-specific article-headline pairs (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduce a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model's generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shimizu-etal-2014-collection","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/162_Paper.pdf","title":"Collection of a Simultaneous Translation Corpus for Comparative Analysis","abstract":"This paper describes the collection of an English-Japanese\/Japanese-English simultaneous interpretation corpus. There are two main features of the corpus. The first is that professional simultaneous interpreters with different amounts of experience cooperated with the collection. By comparing data from simultaneous interpretation of each interpreter, it is possible to compare better interpretations to those that are not as good. The second is that for part of our corpus there are already translation data available. This makes it possible to compare translation data with simultaneous interpretation data. We recorded the interpretations of lectures and news, and created time-aligned transcriptions. A total of 387k words of transcribed data were collected. The corpus will be helpful to analyze differences in interpretation styles and to construct simultaneous interpretation systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"antoniak-mimno-2018-evaluating","url":"https:\/\/aclanthology.org\/Q18-1008","title":"Evaluating the Stability of Embedding-based Word Similarities","abstract":"Word embeddings are increasingly being used as a tool to study word associations in specific corpora. However, it is unclear whether such embeddings reflect enduring properties of language or if they are sensitive to inconsequential variations in the source documents. We find that nearest-neighbor distances are highly sensitive to small changes in the training corpus for a variety of algorithms. For all methods, including specific documents in the training set can result in substantial variations. We show that these effects are more prominent for smaller training corpora.
We recommend that users never rely on single embedding models for distance calculations, but rather average over multiple bootstrap samples, especially for small corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by NSF #1526155, #1652536, and the Alfred P. Sloan Foundation. We would like to thank Alexandra Schofield, Laure Thompson, our Action Editor Ivan Titov, and our anonymous reviewers for their helpful comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tan-jiang-2021-bert","url":"https:\/\/aclanthology.org\/2021.ranlp-1.156","title":"Does BERT Understand Idioms? A Probing-Based Empirical Study of BERT Encodings of Idioms","abstract":"Understanding idioms is important in NLP. In this paper, we study to what extent a pretrained BERT model is able to encode the meaning of a potentially idiomatic expression (PIE) in a certain context. We make use of a few existing datasets and perform two probing tasks: PIE usage classification and idiom paraphrase identification. Our experiment results suggest that BERT indeed is able to separate the literal and idiomatic usages of a PIE with high accuracy. It is also able to encode the idiomatic meaning of a PIE to some extent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"prasad-etal-2020-opinion","url":"https:\/\/aclanthology.org\/2020.icon-demos.8","title":"Opinion Mining System for Processing Hindi Text for Home Remedies Domain","abstract":"Lexical and computational components developed for an Opinion Mining System that processes Hindi text taken from weblogs are presented in the paper. The texts chosen for processing are the ones demonstrating a cause-and-effect relationship between the related entities 'Food' and 'Health Issues'. The work is novel, and the lexical resources developed are useful in the current research and may be of importance for future research.","label_nlp4sg":1,"task":["Processing Hindi Text for Home Remedies"],"method":["Opinion Mining System"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jain-etal-2004-anaphora","url":"https:\/\/aclanthology.org\/W04-2310","title":"Anaphora Resolution in Multi-Person Dialogues","abstract":"Anaphora resolution for dialogues is a difficult problem because of the several kinds of complex anaphoric references generally present in dialogic discourses. It is nevertheless a critical first step in the processing of any such discourse. In this paper, we describe a system for anaphora resolution in multi-person dialogues. This system aims to bring together a wide array of syntactic, semantic and world knowledge-based techniques used for anaphora resolution. In this system, the performance of the heuristics is optimized for specific dialogues using genetic algorithms, which relieves the programmer of hand-crafting the weights of these heuristics.
In our system, we propose a new technique based on the use of anaphora chains to enable resolution of a large variety of anaphors, including plural anaphora and cataphora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jakob-gurevych-2010-extracting","url":"https:\/\/aclanthology.org\/D10-1101","title":"Extracting Opinion Targets in a Single and Cross-Domain Setting with Conditional Random Fields","abstract":"In this paper, we focus on the opinion target extraction as part of the opinion mining task. We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a single- and cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the cross-domain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The project was funded by means of the German Federal Ministry of Economy and Technology under the promotional reference \"01MQ07012\". The authors take the responsibility for the contents. This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I\/82806.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bhardwaj-etal-2021-knowledge","url":"https:\/\/aclanthology.org\/2021.wnut-1.33","title":"Knowledge Distillation with Noisy Labels for Natural Language Understanding","abstract":"Knowledge Distillation (KD) is extensively used to compress and deploy large pre-trained language models on edge devices for real-world applications. However, one neglected area of research is the impact of noisy (corrupted) labels on KD. We present, to the best of our knowledge, the first study on KD with noisy labels in Natural Language Understanding (NLU). We document the scope of the problem and present two methods to mitigate the impact of label noise. Experiments on the GLUE benchmark show that our methods are effective even under high noise levels.
Nevertheless, our results indicate that more research is necessary to cope with label noise under KD.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mindspore for the partial support of this work, which is a new deep learning computing framework.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jakubina-langlais-2016-bad","url":"https:\/\/aclanthology.org\/W16-2370","title":"BAD LUC@WMT 2016: a Bilingual Document Alignment Platform Based on Lucene","abstract":"We participated in the Bilingual Document Alignment shared task of WMT 2016 with the intent of testing a plain cross-lingual information retrieval platform built on top of the Apache Lucene framework. We devised a number of interesting variants, including one that only considers the URLs of the pages and that offers, without any heuristic, surprisingly high performance. We finally submitted the output of a system that combines two sources of information (text and URL) from documents with a post-processing step, for an accuracy that reaches 92% on the development dataset distributed for the shared task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded by the Fonds de Recherche du Qu\u00e9bec en Nature et Technologie (FRQNT).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sardinha-2000-comparing","url":"https:\/\/aclanthology.org\/W00-0902","title":"Comparing corpora with WordSmith Tools: How large must the reference corpus be?","abstract":"WordSmith Tools (Scott, 1998) offers a program for comparing corpora, known as KeyWords. KeyWords compares a word list extracted from what has been called 'the study corpus' (the corpus which the researcher is interested in describing) with a word list made from a reference corpus. The only requirement for a word list to be accepted as a reference corpus by the software is that it must be larger than the study corpus. One of the most pressing questions with respect to using KeyWords seems to be what would be the ideal size of a reference corpus. The aim of this paper is thus to propose answers to this question. Five English corpora were compared to reference corpora of various sizes (varying from two to 100 times larger than the study corpus). The results indicate that a reference corpus that is five times as large as the study corpus yielded a larger number of keywords than a smaller reference corpus. Corpora larger than five times the size of the study corpus yielded similar amounts of keywords. The implication is that, for WordSmith Tools KeyWords analysis, a larger reference corpus is not always better than a smaller one, while a reference corpus that is less than five times the size of the study corpus may not be reliable.
There seems to be no need for using extremely large reference corpora, given that the number of keywords yielded does not seem to change when using corpora larger than five times the size of the study corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"My thanks go to Mike Scott and the three anonymous reviewers for their comments.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shamanna-girishekar-etal-2021-training","url":"https:\/\/aclanthology.org\/2021.naacl-industry.35","title":"Training Language Models under Resource Constraints for Adversarial Advertisement Detection","abstract":"Advertising on e-commerce and social media sites delivers ad impressions at web scale on a daily basis, driving value to both shoppers and advertisers. This scale necessitates programmatic ways of detecting unsuitable content in ads to safeguard customer experience and trust. This paper focusses on techniques for training text classification models under resource constraints, built as part of automated solutions for advertising content moderation. We show how weak supervision, curriculum learning and multilingual training can be applied effectively to fine-tune BERT and its variants for text classification tasks in conjunction with different data augmentation strategies. Our extensive experiments on multiple languages show that these techniques detect adversarial ad categories with a substantial gain in precision at a high recall threshold over the baseline.","label_nlp4sg":1,"task":["Adversarial Advertisement Detection"],"method":["BERT","weak supervision","curriculum learning","multilingual training"],"goal1":"Decent Work and Economic Growth","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yngve-1960-mt","url":"https:\/\/aclanthology.org\/1960.earlymt-nsmt.13","title":"MT at the Massachusetts Institute of Technology","abstract":"Mechanical translation has had a long history at M.I.T. Shortly after the Warren Weaver memorandum of 1949, Yehoshua Bar-Hillel became the first full-time worker in the field. He contributed many of the early ideas and will be well remembered for this. He organized the first conference on mechanical translation, held at M.I.T. in June of 1952. It was an international conference, and although there were only 18 persons registered, nearly everyone interested in MT in the world at that time was there. Of those 18 people, 4 are on the program of this conference: Leon Dostert, Victor Oswald, Erwin Reifler, and myself. The number of people here today gives a measure of how the field has grown in the intervening 7-1\/2 years.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1960,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qin-etal-2018-dsgan","url":"https:\/\/aclanthology.org\/P18-1046","title":"DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction","abstract":"Distant supervision can effectively label data for relation extraction, but suffers from the noisy labeling problem.
Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision about false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we name DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as the negative samples to train the discriminator. The optimal generator is obtained when the discrimination ability of the discriminator shows the greatest decline. We adopt the generator to filter the distant supervision training dataset and redistribute the false positive instances into the negative set, thereby providing a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction compared to state-of-the-art systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kurtic-etal-2012-corpus","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/513_Paper.pdf","title":"A Corpus of Spontaneous Multi-party Conversation in Bosnian Serbo-Croatian and British English","abstract":"In this paper we present a corpus of audio and video recordings of spontaneous, face-to-face multi-party conversation in two languages. Freely available high quality recordings of mundane, non-institutional, multi-party talk are still sparse, and this corpus aims to contribute valuable data suitable for study of multiple aspects of spoken interaction. In particular, it constitutes a unique resource for spoken Bosnian Serbo-Croatian (BSC), an under-resourced language with no spoken resources available at present. The corpus consists of just over 3 hours of free conversation in each of the target languages, BSC and British English (BE). The audio recordings have been made on separate channels using head-set microphones, as well as using a microphone array, containing 8 omni-directional microphones. The data has been segmented and transcribed using segmentation notions and transcription conventions developed from those of the conversation analysis research tradition. Furthermore, the transcriptions have been automatically aligned with the audio at the word and phone level, using the method of forced alignment. In this paper we describe the procedures behind the corpus creation and present the main features of the corpus for the study of conversation.","label_nlp4sg":1,"task":["Multi - party Conversation"],"method":["Corpus","audio and video recordings"],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the UK Arts and Humanities Research Council (AHRC). We are grateful to Matt Gibson and Thomas Hain for advice and assistance with forced alignment for British English.
We also thank all who took part in or facilitated the creation of the corpus.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} {"ID":"mayn-etal-2021-familiar","url":"https:\/\/aclanthology.org\/2021.eacl-srw.14","title":"Familiar words but strange voices: Modelling the influence of speech variability on word recognition","abstract":"We present a deep neural model of spoken word recognition which is trained to retrieve the meaning of a word (in the form of a word embedding) given its spoken form, a task which resembles that faced by a human listener. Furthermore, we investigate the influence of variability in speech signals on the model's performance. To this end, we conduct a set of controlled experiments using word-aligned read speech data in German. Our experiments show that (1) the model is more sensitive to dialectal variation than gender variation, and (2) recognition performance of word cognates from related languages reflects the degree of relatedness between languages in our study. Our work highlights the feasibility of modeling human speech perception using deep neural networks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers of the student research workshop at EACL for their comments and suggestions. Badr M. Abdullah is supported by funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project ID 232722074, SFB 1102.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"furui-etal-1992-recent","url":"https:\/\/aclanthology.org\/H92-1032","title":"Recent Topics in Speech Recognition Research at NTT Laboratories","abstract":"This paper introduces three recent topics in speech recognition research at NTT (Nippon Telegraph and Telephone) Human Interface Laboratories. The first topic is a new HMM (hidden Markov model) technique that uses VQ-code bigrams to constrain the output probability distribution of the model according to the VQ-codes of previous frames. The output probability distribution changes depending on the previous frames even in the same state, so this method reduces the overlap of feature distributions with different phonemes. The second topic is approaches for adapting a syllable trigram model to a new task in Japanese continuous speech recognition. An approach which uses the most recent input phrases for adaptation is effective in reducing the perplexity and improving phrase recognition rates. The third topic is stochastic language models for sequences of Japanese characters to be used in a Japanese dictation system with unlimited vocabulary. Japanese characters consist of Kanji (Chinese characters) and Kana (Japanese alphabets), and each Kanji has several readings depending on the context.
Our dictation system uses character-trigram probabilities as a source model obtained from a text database consisting of both Kanji and Kana, and generates Kanji-and-Kana sequences directly from input speech.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chua-etal-2018-text","url":"https:\/\/aclanthology.org\/L18-1216","title":"Text Normalization Infrastructure that Scales to Hundreds of Language Varieties","abstract":"We describe the automated multi-language text normalization infrastructure that prepares textual data to train language models used in Google's keyboards and speech recognition systems, across hundreds of language varieties. Training corpora are sourced from various types of data sets, and the text is then normalized using a sequence of handwritten grammars and learned models. These systems need to scale to hundreds or thousands of language varieties in order to meet product needs. Frequent data refreshes, privacy considerations and simultaneous updates across such a high number of languages make manual inspection of the normalized training data infeasible, while there is ample opportunity for data normalization issues. By tracking metrics about the data and how it was processed, we are able to catch internal data preparation issues and external data corruption issues that can be hard to notice using standard extrinsic evaluation methods. Showing the importance of paying attention to data normalization behavior in large-scale pipelines, these metrics have highlighted issues in Google's real-world speech recognition system that have caused significant, but latent, quality degradation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"krieger-declerck-2014-tmo","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/115_Paper.pdf","title":"TMO --- The Federated Ontology of the TrendMiner Project","abstract":"This paper describes work carried out in the European project TrendMiner which partly deals with the extraction and representation of real-time information from dynamic data streams. The focus of this paper lies on the construction of an integrated ontology, TMO, the TrendMiner Ontology, that has been assembled from several independent multilingual taxonomies and ontologies which are brought together by an interface specification, expressed in OWL. Within TrendMiner, TMO serves as a common language that helps to interlink data, delivered from both symbolic and statistical components of the TrendMiner system. Very often, the extracted data is supplied as quintuples, RDF triples that are extended by two further temporal arguments, expressing the temporal extent in which an atemporal statement is true.
In this paper, we will also sneak a peek at the temporal entailment rules and queries that are built into the semantic repository hosting the data and which can be used to derive useful new information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper has been financed by the European Project TRENDMINER under contract number FP7 ICT 287863. The authors would like to thank our three reviewers for their encouraging and detailed comments. We would also like to thank our colleagues Ingrid Aichberger, Bernd Kiefer, Ashok Kumar, and Paul Ringler. Finally, we want to say a big thank you to the providers of the pivotal data on which our ontologies are based.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grandeit-etal-2020-using","url":"https:\/\/aclanthology.org\/2020.nlpcss-1.2","title":"Using BERT for Qualitative Content Analysis in Psychosocial Online Counseling","abstract":"Qualitative content analysis is a systematic method commonly used in the social sciences to analyze textual data from interviews or online discussions. However, this method usually requires high expertise and manual effort because human coders need to read, interpret, and manually annotate text passages. This is especially true if the system of categories used for annotation is complex and semantically rich. Therefore, qualitative content analysis could benefit greatly from automated coding. In this work, we investigate the usage of machine learning-based text classification models for automatic coding in the area of psychosocial online counseling. We developed a system of over 50 categories to analyze counseling conversations, labeled over 10,000 text passages manually, and evaluated the performance of different machine learning-based classifiers against human coders.","label_nlp4sg":1,"task":["Psychosocial Online Counseling"],"method":["BERT","Qualitative Content Analysis"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gao-etal-2018-neural-approaches","url":"https:\/\/aclanthology.org\/P18-5002","title":"Neural Approaches to Conversational AI","abstract":"This tutorial surveys neural approaches to conversational AI that were developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) social bots.
For each category, we present a review of state-of-the-art neural approaches, draw the connection between neural approaches and traditional symbolic approaches, and discuss the progress we have made and challenges we are facing, using specific systems and models as case studies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pontiki-etal-2020-verbal","url":"https:\/\/aclanthology.org\/2020.lr4sshoc-1.4","title":"Verbal Aggression as an Indicator of Xenophobic Attitudes in Greek Twitter during and after the Financial Crisis","abstract":"We present a replication of a data-driven and linguistically inspired Verbal Aggression analysis framework that was designed to examine Twitter verbal attacks against predefined target groups of interest as an indicator of xenophobic attitudes during the financial crisis in Greece, in particular during the period 2013-2016. The research goal in this paper is to reexamine Verbal Aggression as an indicator of xenophobic attitudes in Greek Twitter three years later, in order to trace possible changes regarding the main targets, the types and the content of the verbal attacks against the same targets in the post-crisis era, given also the ongoing refugee crisis and the political landscape in Greece as it was shaped after the elections in 2019. The results indicate an interesting rearrangement of the main targets of the verbal attacks, while the content and the types of the attacks provide valuable insights about the way these targets are being framed as compared to the respective dominant perceptions and stereotypes about them during the period 2013-2016.","label_nlp4sg":1,"task":["replication of a data - driven and linguistically inspired Verbal Aggression analysis"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"The authors are grateful to Prof. Vasiliki ","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"mcdonald-meteer-1988-water","url":"https:\/\/aclanthology.org\/A88-1006","title":"From Water to Wine: Generating Natural Language Text From Today's Applications Programs","abstract":"In this paper we present a means of compensating for the semantic deficits of linguistically naive underlying application programs without compromising principled grammatical treatments in natural language generation. We present a method for building an interface from today's underlying application programs to the linguistic realization component Mumble-86. The goal of the paper is not to discuss how Mumble works, but to describe how one exploits its capabilities.
We provide examples from current generation projects using Mumble as their linguistic component.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abadeer-2020-assessment","url":"https:\/\/aclanthology.org\/2020.clinicalnlp-1.18","title":"Assessment of DistilBERT performance on Named Entity Recognition task for the detection of Protected Health Information and medical concepts","abstract":"Bidirectional Encoder Representations from Transformers (BERT) models achieve state-of-the-art performance on a number of Natural Language Processing tasks. However, their model size on disk often exceeds 1 GB and the process of fine-tuning them and using them to run inference consumes significant hardware resources and runtime. This makes them hard to deploy to production environments. This paper fine-tunes DistilBERT, a lightweight deep learning model, on medical text for the named entity recognition task of Protected Health Information (PHI) and medical concepts. This work provides a full assessment of the performance of DistilBERT in comparison with BERT models that were pre-trained on medical text. For the Named Entity Recognition task of PHI, DistilBERT achieved almost the same results as medical versions of BERT in terms of F1 score at almost half the runtime and consuming approximately half the disk space. On the other hand, for the detection of medical concepts, DistilBERT's F1 score was lower by 4 points on average than medical BERT variants.","label_nlp4sg":1,"task":["detection of Protected Health Information","Named Entity Recognition"],"method":["DistilBERT","BERT"],"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"russell-1976-computer","url":"https:\/\/aclanthology.org\/J76-2003","title":"Computer Understanding of Metaphorically Used Verbs","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1976,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chathuranga-etal-2017-opinion","url":"https:\/\/aclanthology.org\/O17-1028","title":"Opinion Target Extraction for Student Course Feedback","abstract":"Student feedback is an essential part of the instructor-student relationship. Traditionally student feedback is manually summarized by instructors, which is time consuming. Automatic student feedback summarization provides a potential solution to this. For summarizing student feedback, first, the opinion targets should be identified and extracted. In this context, opinion targets such as \"lecture slides\", \"teaching style\" are the important key points in the feedback that the students have shown their sentiment towards. In this paper, we focus on the opinion target extraction task of general student feedback. We model this problem as an information extraction task and extract opinion targets using a Conditional Random Fields (CRF) classifier. 
Our results show that this classifier outperforms the state-of-the-art techniques for student feedback summarization.","label_nlp4sg":1,"task":["Opinion Target Extraction","student feedback summarization"],"method":["Conditional Random Fields"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"resnik-1992-probabilistic","url":"https:\/\/aclanthology.org\/C92-2065","title":"Probabilistic Tree-Adjoining Grammar as a Framework for Statistical Natural Language Processing","abstract":"In this paper, I argue for the use of a probabilistic form of tree-adjoining grammar (TAG) in statistical natural language processing. I first discuss two previous statistical approaches-one that concentrates on the probabilities of structural operations, and another that emphasizes co-occurrence relationships between words. I argue that a purely structural approach, exemplified by probabilistic context-free grammar, lacks sufficient sensitivity to lexical context, and, conversely, that lexical co-occurrence analyses require a richer notion of locality that is best provided by importing some notion of structure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kassner-schutze-2020-bert","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.307","title":"BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA","abstract":"Khandelwal et al. (2020) use a k-nearest-neighbor (kNN) component to improve language model performance. We show that this idea is beneficial for open-domain question answering (QA). To improve the recall of facts encountered during training, we combine BERT (Devlin et al., 2019) with a traditional information retrieval step (IR) and a kNN search over a large datastore of an embedded text collection. Our contributions are as follows: i) BERT-kNN outperforms BERT on cloze-style QA by large margins without any further training. ii) We show that BERT often identifies the correct response category (e.g., US city), but only kNN recovers the factually correct answer (e.g., \"Miami\"). iii) Compared to BERT, BERT-kNN excels for rare facts. iv) BERT-kNN can easily handle facts not covered by BERT's training set, e.g., recent events.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ch-wang-jurgens-2021-using","url":"https:\/\/aclanthology.org\/2021.emnlp-main.782","title":"Using Sociolinguistic Variables to Reveal Changing Attitudes Towards Sexuality and Gender","abstract":"Individuals signal aspects of their identity and beliefs through linguistic choices. Studying these choices in aggregate allows us to examine large-scale attitude shifts within a population. 
Here, we develop computational methods to study word choice within a sociolinguistic lexical variable-alternate words used to express the same concept-in order to test for change in the United States towards sexuality and gender. We examine two variables: i) referents to significant others, such as the word \"partner\" and ii) referents to an indefinite person, both of which could optionally be marked with gender. The linguistic choices in each variable allow us to study increased rates of acceptances of gay marriage and gender equality, respectively. In longitudinal analyses across Twitter and Reddit over 87M messages, we demonstrate that attitudes are changing but that these changes are driven by specific demographics within the United States. Further, in a quasi-causal analysis, we show that passages of Marriage Equality Acts in different states are drivers of linguistic change.","label_nlp4sg":1,"task":["Reveal Changing Attitudes Towards Sexuality and Gender"],"method":["Sociolinguistic Variables","quasi - causal analysis"],"goal1":"Gender Equality","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"We thank Julia Mendelsohn, Jiaxin Pei, and Jian Zhu for their helpful discussions, Jack Grieve and Dirk Hovy for an initial discussion on this idea at the Lorentz Center (and Nanna Hilton, Dirk Hovy, Dong Nguyen, and Christoph Purschke for organizing that fantastic workshop on Computational Sociolinguistics, which planted the seed that led to this work!), and to the students in the SI 710 PhD seminar on Computational Sociolinguistics for their discussions on this idea. This material is based upon work supported by the National Science Foundation under Grant No. 1850221.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"su-etal-2020-multi","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.416","title":"Multi-hop Question Generation with Graph Convolutional Network","abstract":"Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple scattered evidence from different paragraphs. It is a more challenging yet under-explored task compared to conventional single-hop QG, where the questions are generated from the sentence containing the answer or nearby sentences in the same paragraph without complex reasoning. To address the additional challenges in multi-hop QG, we propose Multi-Hop Encoding Fusion Network for Question Generation (MulQG), which does context encoding in multiple hops with Graph Convolutional Network and encoding fusion via an Encoder Reasoning Gate. To the best of our knowledge, we are the first to tackle the challenge of multi-hop reasoning over paragraphs without any sentence-level information. Empirical results on HotpotQA dataset demonstrate the effectiveness of our method, in comparison with baselines on automatic evaluation metrics. Moreover, from the human evaluation, our proposed model is able to generate fluent questions with high completeness and outperforms the strongest baseline by 20.8% in the multi-hop evaluation. 
The code is publicly available at https:\/\/github.com\/HLTCHKUST\/MulQG.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johnson-2008-unsupervised","url":"https:\/\/aclanthology.org\/W08-0704","title":"Unsupervised Word Segmentation for Sesotho Using Adaptor Grammars","abstract":"This paper describes a variety of nonparametric Bayesian models of word segmentation based on Adaptor Grammars that model different aspects of the input and incorporate different kinds of prior knowledge, and applies them to the Bantu language Sesotho. While we find overall word segmentation accuracies lower than these models achieve on English, we also find some interesting differences in which factors contribute to better word segmentation. Specifically, we found little improvement to word segmentation accuracy when we modeled contextual dependencies, while modeling morphological structure did improve segmentation accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I'd like to thank Katherine Demuth for the Sesotho data and help with Sesotho morphology, my collaborators Sharon Goldwater and Tom Griffiths for their comments and suggestions about adaptor grammars, and the anonymous SIGMORPHON reviewers for their careful reading and insightful comments on the original abstract. This research was funded by NSF awards 0544127 and 0631667.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcdonald-1993-issues","url":"https:\/\/aclanthology.org\/J93-1009","title":"Issues in the choice of a source for Natural Language Generation","abstract":"The most vexing question in natural language generation is 'what is the source'-what do speakers start from when they begin to compose an utterance? Theories of generation in the literature differ markedly in their assumptions. A few start with an unanalyzed body of numerical data (e.g. Bourbeau et al. 1990; Kukich 1988). Most start with the structured objects that are used by a particular reasoning system or simulator and are cast in that system's representational formalism (e.g. Hovy 1990; Meteer 1992; R\u00f6sner 1988). A growing number of systems, largely focused on problems in machine translation or grammatical theory, take their input to be logical formulae based on lexical predicates (e.g. Wedekind 1988; Shieber et al. 1990). The lack of a consistent answer to the question of the generator's source has been at the heart of the problem of how to make research on generation intelligible and engaging for the rest of the computational linguistics community, and has complicated efforts to evaluate alternative treatments even for people in the field. Nevertheless, a source cannot be imposed by fiat. 
Differences in what information is assumed to be available, its relative decomposition when compared to the \"packaging\" available in the words or syntactic constructions of the language (linguistic resources), what amount and kinds of information are contained in the atomic units of the source, and what sorts of compositions and other larger scale organizations are possible-all these have an impact on what architectures are plausible for generation and what efficiencies they can achieve. Advances in the field often come precisely through insights into the representation of the source. Language comprehension research does not have this problem-its source is a text. Differences in methodology govern where this text comes from (e.g., single sentence vs. discourse, sample sentences vs. corpus study, written vs. spoken), but these aside there is no question of what the comprehension process starts with. Where comprehension \"ends\" is quite another matter. If we go back to some of the early comprehension systems, the end point of the process was an action, and there was linguistic processing at every stage (Winograd 1972). Some researchers, this author included, take the end point to be an elaboration of an already existing semantic model whereby some new individuals are added and new relations established between them and other individuals (Martin and Riesbeck 1986; McDonald 1992a). Today's dominant paradigm, however, stemming perhaps from the predominance of research on question-answering and following the lead of theoretical linguistics, is to take the end point to be a logical form: an expression that codifies the information in the text at a fairly shallow level, e.g., a first-order formula with content words mapped to predicates with the same spelling, and with individuals represented by quantified variables or constants.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-sporleder-2009-classifier","url":"https:\/\/aclanthology.org\/D09-1033","title":"Classifier Combination for Contextual Idiom Detection Without Labelled Data","abstract":"We propose a novel unsupervised approach for distinguishing literal and non-literal use of idiomatic expressions. Our model combines an unsupervised and a supervised classifier. The former bases its decision on the cohesive structure of the context and labels training data for the latter, which can then take a larger feature space into account. We show that a combination of both classifiers leads to significant improvements over using the unsupervised classifier alone.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the Cluster of Excellence \"Multimodal Computing and Interaction\".","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-kawahara-2017-joint","url":"https:\/\/aclanthology.org\/I17-1071","title":"Joint Learning of Dialog Act Segmentation and Recognition in Spoken Dialog Using Neural Networks","abstract":"Dialog act segmentation and recognition are basic natural language understanding tasks in spoken dialog systems. 
This paper investigates a unified architecture for these two tasks, which aims to improve the model's performance on both of the tasks. Compared with past joint models, the proposed architecture can (1) incorporate contextual information in dialog act recognition, and (2) integrate models for tasks of different levels as a whole, i.e. dialog act segmentation on the word level and dialog act recognition on the segment level. Experimental results show that the joint training system outperforms the simple cascading system and the joint coding system on both dialog act segmentation and recognition tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JST ERATO Ishiguro Symbiotic Human-Robot Interaction program (Grant Number JPMJER1401), Japan.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blekhman-1996-pars","url":"https:\/\/aclanthology.org\/1996.amta-1.36","title":"PARS and PARS\/U machine translation systems: English-Russian-English and English-Ukrainian-English","abstract":"presents the following commercially sold bi-directional machine translation systems for IBM PCs: \u2022 PARS 3.1-a Russian-English-Russian system; \u2022 PARS\/U 1.1-a Ukrainian-English-Ukrainian system. Both systems run under MS Windows; PARS also runs under MS DOS. Both PARS and PARS\/U run in stand-alone and multiuser environments. Up to 4 dictionaries can be used in the translation session. The systems are compatible with all text processors supported by Windows: the user may copy the text portion to be translated to Clipboard, have it translated by PARS or PARS\/U in the \"Clipboard\" mode, and the target text will be written to the Clipboard automatically. The systems are also \"embedded\" in MS Word 6 and MS Word 7: if a text is opened in WinWord, the \"Translate\" option in the editor main menu lets the user have the text translated, after which the target text is written to another window, placed under the first. The source text format is fully preserved in the target file, including fonts, styles, paragraphs, and tables.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"walker-etal-2002-speech","url":"https:\/\/aclanthology.org\/W02-2110","title":"Speech-Plans: Generating Evaluative Responses in Spoken Dialogue","abstract":"Recent work on evaluation of spoken dialogue systems indicates that better algorithms are needed for the presentation of complex information in speech. Current dialogue systems often rely on presenting sets of options and their attributes sequentially. This places a large memory burden on users, who have to remember complex trade-offs between multiple options and their attributes. To address these problems we build on previous work using multiattribute decision theory to devise speech-planning algorithms that present user-tailored summaries, comparisons and recommendations that allow users to focus on critical differences between options and their attributes. 
We discuss the differences between speech and text planning that result from the particular demands of the speech situation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tanenhaus-2011-common","url":"https:\/\/aclanthology.org\/W11-2009","title":"Common Ground and Perspective-taking in Real-time Language Processing","abstract":"Successful communication would seem to require that speakers and listeners distinguish between their own knowledge, commitments and intentions, and those of their interlocutors. A particularly important distinction is between shared knowledge (common ground) and private knowledge (privileged ground). Keeping track of what is shared and what is privileged might seem too computationally expensive and too memory intensive to inform real-time language processing--a position supported by striking experimental evidence that speakers and listeners act egocentrically, showing strong and seemingly inappropriate intrusions from their own privileged ground. I'll review recent results from my laboratory using unscripted conversation demonstrating that (1) speaker's utterances provide evidence about whether they believe information is shared or privileged; and (2) addressees are extremely sensitive to this evidence. I'll suggest an integrative framework that explains discrepancies in the literature and might be informative for researchers in the computational dialogue community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bi-etal-2019-incorporating","url":"https:\/\/aclanthology.org\/D19-1255","title":"Incorporating External Knowledge into Machine Reading for Generative Question Answering","abstract":"Commonsense and background knowledge is required for a QA model to answer many nontrivial questions. Different from existing work on knowledge-aware QA, we focus on a more challenging task of leveraging external knowledge to generate answers in natural language for a given question with context. In this paper, we propose a new neural model, Knowledge-Enriched Answer Generator (KEAG), which is able to compose a natural answer by exploiting and aggregating evidence from all four information sources available: question, passage, vocabulary and knowledge. During the process of answer generation, KEAG adaptively determines when to utilize symbolic knowledge and which fact from the knowledge is useful. This allows the model to exploit external knowledge that is not explicitly stated in the given text, but that is relevant for generating an answer. 
The empirical study on a public benchmark of answer generation demonstrates that KEAG improves answer quality over models without knowledge and existing knowledge-aware models, confirming its effectiveness in leveraging knowledge.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"adesam-bouma-2016-old","url":"https:\/\/aclanthology.org\/W16-2104","title":"Old Swedish Part-of-Speech Tagging between Variation and External Knowledge","abstract":"We present results on part-of-speech and morphological tagging for Old Swedish (1225-1526). In a set of experiments we look at the difference between within-corpus and across-corpus accuracy, and explore ways of mitigating the effects of variation and data sparseness by adding different types of dictionary information. Combining several methods, together with a simple approach to handle spelling variation, we achieve a major boost in tagger performance on a modest test collection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by Stiftelsen Marcus och Amalia Wallenbergs Minnesfond (project \"MA\u00deiR\", nr MAW 2012.0146).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shaar-etal-2021-findings","url":"https:\/\/aclanthology.org\/2021.nlp4if-1.12","title":"Findings of the NLP4IF-2021 Shared Tasks on Fighting the COVID-19 Infodemic and Censorship Detection","abstract":"We present the results and the main findings of the NLP4IF-2021 shared tasks. Task 1 focused on fighting the COVID-19 infodemic in social media, and it was offered in Arabic, Bulgarian, and English. Given a tweet, it asked to predict whether that tweet contains a verifiable claim, and if so, whether it is likely to be false, is of general interest, is likely to be harmful, and is worthy of manual fact-checking; also, whether it is harmful to society, and whether it requires the attention of policy makers. Task 2 focused on censorship detection, and was offered in Chinese. A total of ten teams submitted systems for task 1, and one team participated in task 2; nine teams also submitted a system description paper. Here, we present the tasks, analyze the results, and discuss the system submissions and the methods they used. Most submissions achieved sizable improvements over several baselines, and the best systems used pre-trained Transformers and ensembles. 
The data, the scorers and the leaderboards for the tasks are available at http:\/\/gitlab.com\/NLP4IF\/nlp4if-2021.","label_nlp4sg":1,"task":["Infodemic and Censorship Detection"],"method":["Analysis"],"goal1":"Peace, Justice and Strong Institutions","goal2":"Good Health and Well-Being","goal3":null,"acknowledgments":"We would like to thank Akter Fatema, Al-Awthan Ahmed, Al-Dobashi Hussein, El Messelmani Jana, Fayoumi Sereen, Mohamed Esraa, Ragab Saleh, and Shurafa Chereen for helping with the Arabic data annotations. This research is part of the Tanbih mega-project, developed at the Qatar Computing Research Institute, HBKU, which aims to limit the impact of \"fake news,\" propaganda, and media bias by making users aware of what they are reading. This material is also based upon work supported by the US National Science Foundation under Grants No. 1704113 and No. 1828199. This publication was also partially made possible by the innovation grant No. 21 -Misinformation and Social Networks Analysis in Qatar from Hamad Bin Khalifa University's (HBKU) Innovation Center. The findings achieved herein are solely the responsibility of the authors.","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"boonkwan-steedman-2011-grammar","url":"https:\/\/aclanthology.org\/I11-1049","title":"Grammar Induction from Text Using Small Syntactic Prototypes","abstract":"We present an efficient technique to incorporate a small number of cross-linguistic parameter settings defining default word orders to otherwise unsupervised grammar induction. A syntactic prototype, represented by the integrated model between Categorial Grammar and dependency structure, generated from the language parameters, is used to prune the search space. We also propose heuristics which prefer less complex syntactic categories to more complex ones in parse decoding. The system reduces errors generated by the state-of-the-art baselines for WSJ10 (1% error reduction of F1 score for the model trained on Sections 2-22 and tested on Section 23), Chinese10 (26% error reduction of F1), German10 (9% error reduction of F1), and Japanese10 (8% error reduction of F1), and is not significantly different from the baseline for Czech10.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Tom Kwiatkowski, Michael Auli, Christos Christodoulopoulos, Alexandra Birch, Mark Granroth-Wilding, and Emily Thomforde (University of Edinburgh), Adam Lopez (Johns Hopkins University), and Michael Collins (Columbia University) for useful comments and discussion related to this work, and the three anonymous reviewers for their useful feedback. This research was funded by the Royal Thai Government Scholarship to Prachya Boonkwan and EU ERC Advanced Fellowship 249520 GRAMPLUS to Mark Steedman.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"masis-anderson-2021-prosper","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.8","title":"ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective","abstract":"Understanding perspectival language is important for applications like dialogue systems and human-robot interaction. We propose a probe task that explores how well language models understand spatial perspective. 
We present a dataset for evaluating perspective inference in English, ProSPer, and use it to explore how humans and Transformer-based language models infer perspective. Although the best bidirectional model performs similarly to humans, they display different strengths: humans outperform neural networks in conversational contexts, while RoBERTa excels at written genres.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bansal-etal-2011-gappy","url":"https:\/\/aclanthology.org\/P11-1131","title":"Gappy Phrasal Alignment By Agreement","abstract":"We propose a principled and efficient phrase-to-phrase alignment model, useful in machine translation as well as other related natural language processing problems. In a hidden semi-Markov model, word-to-phrase and phrase-to-word translations are modeled directly by the system. Agreement between two directional models encourages the selection of parsimonious phrasal alignments, avoiding the overfitting commonly encountered in unsupervised training with multi-word units. Expanding the state space to include \"gappy phrases\" (such as French ne pas) makes the alignment space more symmetric; thus, it allows agreement between discontinuous alignments. The resulting system shows substantial improvements in both alignment quality and translation quality over word-based Hidden Markov Models, while maintaining asymptotically equivalent runtime.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful suggestions. 
This project is funded by Microsoft Research.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maharjan-etal-2017-dt","url":"https:\/\/aclanthology.org\/S17-2014","title":"DT\\_Team at SemEval-2017 Task 1: Semantic Similarity Using Alignments, Sentence-Level Embeddings and Gaussian Mixture Model Output","abstract":"We describe our system (DT Team) submitted at SemEval-2017 Task 1, Semantic Textual Similarity (STS) challenge for English (Track 5). We developed three different models with various features including similarity scores calculated using word and chunk alignments, word\/sentence embeddings, and Gaussian Mixture Model (GMM). The correlation between our system's output and the human judgments was up to 0.8536, which is more than 10% above baseline, and almost as good as the best performing system which was at 0.8547 correlation (the difference is just about 0.1%). Also, our system produced leading results when evaluated with a separate STS benchmark dataset. The word alignment and sentence embeddings based features were found to be very effective.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ma-etal-2021-contrastive","url":"https:\/\/aclanthology.org\/2021.findings-acl.51","title":"Contrastive Fine-tuning Improves Robustness for Neural Rankers","abstract":"The performance of state-of-the-art neural rankers can deteriorate substantially when exposed to noisy inputs or applied to a new domain. In this paper, we present a novel method for fine-tuning neural rankers that can significantly improve their robustness to out-of-domain data and query perturbations. Specifically, a contrastive loss that compares data points in the representation space is combined with the standard ranking loss during fine-tuning. We use relevance labels to denote similar\/dissimilar pairs, which allows the model to learn the underlying matching semantics across different query-document pairs and leads to improved robustness. In experiments with four passage ranking datasets, the proposed contrastive fine-tuning method obtains improvements on robustness to query reformulations, noise perturbations, and zero-shot transfer for both BERT and BART-based rankers. Additionally, our experiments show that contrastive fine-tuning outperforms data augmentation for robustifying neural rankers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhou-etal-2013-collective","url":"https:\/\/aclanthology.org\/D13-1189","title":"Collective Opinion Target Extraction in Chinese Microblogs","abstract":"Microblog messages pose severe challenges for current sentiment analysis techniques due to some inherent characteristics such as the length limit and informal writing style. In this paper, we study the problem of extracting opinion targets of Chinese microblog messages. Such a fine-grained word-level task has not been well investigated in microblogs yet. We propose an unsupervised label propagation algorithm to address the problem. 
The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. Topics in microblogs are identified by hashtags or using clustering algorithms. Experimental results on Chinese microblogs show the effectiveness of our framework and algorithms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by NSFC (61170166), Beijing Nova Program (2008B03) and National High-Tech R&D Program (2012AA011101).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"asheghi-etal-2014-designing","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/470_Paper.pdf","title":"Designing and Evaluating a Reliable Corpus of Web Genres via Crowd-Sourcing","abstract":"Research in Natural Language Processing often relies on a large collection of manually annotated documents. However, currently there is no reliable genre-annotated corpus of web pages to be employed in Automatic Genre Identification (AGI). In AGI, documents are classified based on their genres rather than their topics or subjects. The major shortcoming of available web genre collections is their relatively low inter-coder agreement. Reliability of annotated data is an essential factor for reliability of the research result. In this paper, we present the first web genre corpus which is reliably annotated. We developed precise and consistent annotation guidelines which consist of well-defined and well-recognized categories. For annotating the corpus, we used crowd-sourcing which is a novel approach in genre annotation. We computed the overall as well as the individual categories' chance-corrected inter-annotator agreement. The results show that the corpus has been annotated reliably.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partly funded by the Google Research Award to Serge Sharoff and Katja Markert, as well as by EU FP7 funding, contract No 251534 (HyghTra). Also Noushin Rezapour is funded by an EPSRC Doctoral Training Grant.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rozovskaya-roth-2013-joint","url":"https:\/\/aclanthology.org\/D13-1074","title":"Joint Learning and Inference for Grammatical Error Correction","abstract":"State-of-the-art systems for grammatical error correction are based on a collection of independently-trained models for specific errors. Such models ignore linguistic interactions at the sentence level and thus do poorly on mistakes that involve grammatical dependencies among several words. In this paper, we identify linguistic structures with interacting grammatical properties and propose to address such dependencies via joint inference and joint learning. We show that it is possible to identify interactions well enough to facilitate a joint approach and, consequently, that joint methods correct incoherent predictions that independently-trained classifiers tend to produce. Furthermore, because the joint learning model considers interacting phenomena during training, it is able to identify mistakes that require making multiple changes simultaneously and that standard approaches miss. 
Overall, our model significantly outperforms the Illinois system that placed first in the CoNLL-2013 shared task on grammatical error correction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Peter Chew, Jennifer Cole, Mark Sammons, and the anonymous reviewers for their helpful feedback. The authors thank Josh Gioja for the code that performs phonetic disambiguation of the indefinite article. This material is based on research sponsored by DARPA under agreement number FA8750-13-2-0008. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kuncoro-etal-2017-recurrent","url":"https:\/\/aclanthology.org\/E17-1117","title":"What Do Recurrent Neural Network Grammars Learn About Syntax?","abstract":"Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA\/I2O under Contract No. HR0011-15-C-0114; it was also supported in part by Contract No. W911NF-15-1-0543 with DARPA and the Army Research Office (ARO). Approved for public release, distribution unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"long-etal-2020-hierarchical","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.430","title":"Hierarchical Region Learning for Nested Named Entity Recognition","abstract":"Named Entity Recognition (NER) is deeply explored and widely used in various tasks. Usually, some entity mentions are nested in other entities, which leads to the nested NER problem. Leading region-based models face both the efficiency and effectiveness challenge due to the high subsequence enumeration complexity. 
To tackle these challenges, we propose a hierarchical region learning framework to automatically generate a tree hierarchy of candidate regions with nearly linear complexity and incorporate structure information into the region representation for better classification. Experiments on benchmark datasets ACE-2005, GENIA and JNLPBA demonstrate competitive or better results than state-of-the-art baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We sincerely thank all reviewers and AC for their comments and suggestions. This research work was funded by the National Natural Science Foundation of China under Grant No. 62072447.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhao-etal-2014-ecnu","url":"https:\/\/aclanthology.org\/S14-2042","title":"ECNU: Expression- and Message-level Sentiment Orientation Classification in Twitter Using Multiple Effective Features","abstract":"Microblogging websites (such as Twitter, Facebook) are rich sources of data for opinion mining and sentiment analysis. In this paper, we describe our approaches used for sentiment analysis in twitter (task 9) organized in SemEval 2014. This task tries to determine whether the sentiment orientations conveyed by the whole tweets or pieces of tweets are positive, negative or neutral. To solve this problem, we extracted several simple and basic features considering the following aspects: surface text, syntax, sentiment score and twitter characteristic. Then we exploited these features to build a classifier using SVM algorithm. Despite the simplicity of features, our systems rank above the average.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by grants from National Natural Science Foundation of China (No.60903093) and Shanghai Knowledge Service Platform Project (No. ZF1213).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nica-etal-2004-enriching","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/579.pdf","title":"Enriching EWN with Syntagmatic Information by Means of WSD","abstract":"Word Sense Disambiguation is confronted with the lack of syntagmatic information associated with word senses. In the present work we propose a method for the enrichment of EuroWordNet with syntagmatic information, by means of the WSD process itself. We consider that an ambiguous occurrence drastically reduces its ambiguity when considered together with the words it establishes syntactic relations with in the sentence: the claim of \"quasi one sense per syntactic relation\". On this hypothesis, we obtain sense-tagged syntactic patterns for an ambiguous word intensively using the corpus, with the help of EWN and of associated WSD algorithms. For an occurrence disambiguation, we also consider the whole sentential context where we apply the same WSD algorithms, and combine the sense proposals from the syntactic patterns with the ones from the sentential context. We evaluate the whole WSD method on the nouns in the Spanish Senseval-2 exercise and also the utility of the syntactic patterns for the sense assignment. 
The annotated patterns we obtain in the WSD process are incorporated into EWN, associated with the synset of the assigned sense. As the syntactic patterns repeat themselves in the text, if sense-tagged, they are valuable information for future WSD tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"garimella-etal-2021-intelligent","url":"https:\/\/aclanthology.org\/2021.findings-acl.397","title":"He is very intelligent, she is very beautiful? On Mitigating Social Biases in Language Modelling and Generation","abstract":"Social biases with respect to demographics (e.g., gender, age, race) in datasets are often encoded in the large pre-trained language models trained on them. Prior works have largely focused on mitigating biases in context-free representations, with a recent shift to contextual ones. While this is useful for several word and sentence-level classification tasks, mitigating biases in only the representations may not suffice to use these models for language generation tasks, such as auto-completion, summarization, or dialogue generation. In this paper, we propose an approach to mitigate social biases in BERT, a large pre-trained contextual language model, and show its effectiveness in fill-in-the-blank sentence completion and summarization tasks. In addition to mitigating biases in BERT, which in general acts as an encoder, we propose lexical co-occurrence-based bias penalization in the decoder units in generation frameworks, and show bias mitigation in summarization. Finally, our approach results in better debiasing of BERT-based representations compared to post-training bias mitigation, thus illustrating the efficacy of our approach to not just mitigate biases in representations, but also generate text with reduced biases.","label_nlp4sg":1,"task":["Mitigating Social Biases in Language Modelling and Generation"],"method":["lexical co - occurrence - based bias penalization"],"goal1":"Reduced Inequalities","goal2":"Gender Equality","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bellynck-1998-multimodal","url":"https:\/\/aclanthology.org\/W98-0213","title":"Multimodal Visualization of Geometrical Constructions","abstract":"We present an environment for multimodal visualization of geometrical constructions, including both graphical and textual realizations. The graphic interface is programmed by direct manipulation, and this process is mirrored in the text. The text resembles a program written in a classical programming language, but no computer science knowledge is required. The guiding principle is that of textual and graphical equivalence: the same linguistic resources are used for graphical construction and for text generation. During construction, the names of several tools appear in pop-up menus. As the tools are used, their names are written in the text, and geometrical objects are simultaneously drawn in the figure and written in the text. Text can be produced in a variety of \"dialects\" according to the user's mother tongue. 
Moreover, the visualization system can be used for interfaces which include a facility for programming by demonstration (with macro definitions) and can offer textual support for interaction through other media.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"funk-bontcheva-2010-ontology","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/170_Paper.pdf","title":"Ontology-Based Categorization of Web Services with Machine Learning","abstract":"We present the problem of categorizing web services according to a shallow ontology for presentation on a specialist portal, using their WSDL and associated textual documents found by a crawler. We treat this as a text classification problem and apply first information extraction (IE) techniques (voting using keywords weighted according to their context), then machine learning (ML), and finally a combined approach in which ML has priority over weighted keywords, but the latter can still make up categorizations for services for which ML does not produce enough. We evaluate the techniques (using data manually annotated through the portal, which we also use as the training data for ML) according to standard IE measures for flat categorization as well as the Balanced Distance Metric (more suitable for ontological classification) and compare them with related work in web service categorization. The ML and combined categorization results are good and the system is designed to take users' contributions through the portal's Web 2.0 features as additional training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by the European Union's Seventh Framework Program project Service-Finder (FP7-215876).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lyu-etal-2021-improving","url":"https:\/\/aclanthology.org\/2021.emnlp-main.340","title":"Improving Unsupervised Question Answering via Summarization-Informed Question Generation","abstract":"Question Generation (QG) is the task of generating a plausible question for a given passage-answer pair. Template-based QG uses linguistically-informed heuristics to transform declarative sentences into interrogatives, whereas supervised QG uses existing Question Answering (QA) datasets to train a system to generate a question given a passage and an answer. A disadvantage of the heuristic approach is that the generated questions are heavily tied to their declarative counterparts. A disadvantage of the supervised approach is that they are heavily tied to the domain\/language of the QA dataset used as training data. In order to overcome these shortcomings, we propose an unsupervised QG method which uses questions generated heuristically from summaries as a source of training data for a QG system. We make use of freely available news summary data, transforming declarative summary sentences into appropriate questions using heuristics informed by dependency parsing, named entity recognition and semantic role labeling. The resulting questions are then combined with the original news articles to train an end-to-end neural QG model. 
We extrinsically evaluate our approach using unsupervised QA: our QG model is used to generate synthetic QA pairs for training a QA model. Experimental results show that, trained with only 20k English Wikipedia-based synthetic QA pairs, the QA model substantially outperforms previous unsupervised models on three in-domain datasets (SQuAD1.1, Natural Questions, TriviaQA) and three out-of-domain datasets (NewsQA, BioASQ, DuoRC), demonstrating the transferability of the approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18\/CRT\/6183). We also thank the reviewers for their insightful and helpful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cawsey-1991-using","url":"https:\/\/aclanthology.org\/E91-1021","title":"Using Plausible Inference Rules in Description Planning","abstract":"Current approaches to generating multi-sentence text fail to consider what the user may infer from the different statements in a description. This paper presents a system which contains an explicit model of the inferences that people may make from different statement types, and uses this model, together with assumptions about the user's prior knowledge, to pick the most appropriate sequence of utterances for achieving a given communicative goal.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"flachs-etal-2020-grammatical","url":"https:\/\/aclanthology.org\/2020.emnlp-main.680","title":"Grammatical Error Correction in Low Error Density Domains: A New Benchmark and Analyses","abstract":"Evaluation of grammatical error correction (GEC) systems has primarily focused on essays written by non-native learners of English, which however is only part of the full spectrum of GEC applications. We aim to broaden the target domain of GEC and release CWEB, a new benchmark for GEC consisting of website text generated by English speakers of varying levels of proficiency. Website data is a common and important domain that contains far fewer grammatical errors than learner essays, which we show presents a challenge to state-of-the-art GEC systems. We demonstrate that a factor behind this is the inability of systems to rely on a strong internal language model in low error density domains. We hope this work shall facilitate the development of open-domain GEC models that generalize to different topics and genres.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saunders-etal-2020-neural","url":"https:\/\/aclanthology.org\/2020.gebnlp-1.4","title":"Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It","abstract":"Neural Machine Translation (NMT) has been shown to struggle with grammatical gender that is dependent on the gender of human referents, which can cause gender bias effects. 
Many existing approaches to this problem seek to control gender inflection in the target language by explicitly or implicitly adding a gender feature to the source sentence, usually at the sentence level. In this paper we propose schemes for incorporating explicit word-level gender inflection tags into NMT. We explore the potential of this gender-inflection controlled translation when the gender feature can be determined from a human reference, or when a test sentence can be automatically gender-tagged, assessing on English-to-Spanish and English-to-German translation. We find that simple existing approaches can over-generalize a gender-feature to multiple entities in a sentence, and suggest effective alternatives in the form of tagged coreference adaptation data. We also propose an extension to assess translations of gender-neutral entities from English given a corresponding linguistic convention, such as a non-binary inflection, in the target language.","label_nlp4sg":1,"task":["Neural Machine Translation"],"method":["schemes","word - level gender inflection tags","tagged coreference adaptation data"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"This work was supported by EPSRC grants EP\/M508007\/1 and EP\/N509620\/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 5 funded by EPSRC Tier-2 capital grant EP\/P020259\/1. Work by R. Sallis during a research placement was funded by the Humanities and Social Change International Foundation.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"das-etal-2018-constructing","url":"https:\/\/aclanthology.org\/W18-5042","title":"Constructing a Lexicon of English Discourse Connectives","abstract":"We present a new lexicon of English discourse connectives called DiMLex-Eng, built by merging information from two annotated corpora and an additional list of relation signals from the literature. The format follows the German connective lexicon DiMLex, which provides a crosslinguistically applicable XML schema. DiMLex-Eng contains 149 English connectives, and gives information on syntactic categories, discourse semantics and non-connective uses (if any). We report on the development steps and discuss design decisions encountered in the lexicon expansion phase. The resource is freely available for use in studies of discourse structure and computational applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Our work was financially supported by Deutsche Forschungsgemeinschaft (DFG), as part of (i) project A03 in the Collaborative Research Center 1287 \"Limits of Variability in Language\" and (ii) project \"Anaphoricity in Connectives\".","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sun-etal-2020-semi","url":"https:\/\/aclanthology.org\/2020.ecnlp-1.9","title":"Semi-supervised Category-specific Review Tagging on Indonesian E-Commerce Product Reviews","abstract":"Product reviews are a huge source of natural language data in e-commerce applications. Several millions of customers write reviews regarding a variety of topics. 
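As a rough illustration of the word-level tagging idea in the saunders-etal-2020 abstract above, the sketch below prepends an inline gender tag to every token of an entity span before the sentence is passed to an NMT encoder. The tag inventory (<F>, <M>) and the assumption that gender spans come from a human reference or an automatic coreference tagger are ours for illustration, not the paper's exact scheme.

```python
# Minimal sketch of word-level gender tagging of an NMT source sentence.
# Tags and span format are assumptions, not the paper's published scheme.
def tag_source(tokens, entity_genders):
    """entity_genders: list of ((start, end), gender) token spans,
    e.g. ((0, 2), "F") marks tokens 0..1 as referring to a feminine entity."""
    tags = [None] * len(tokens)
    for (start, end), gender in entity_genders:
        for i in range(start, end):
            tags[i] = gender
    out = []
    for tok, tag in zip(tokens, tags):
        if tag is not None:
            out.append(f"<{tag}>")  # inline tag consumed by the NMT encoder
        out.append(tok)
    return " ".join(out)

src = "the doctor asked the nurse a question".split()
print(tag_source(src, [((0, 2), "F"), ((3, 5), "M")]))
# -> "<F> the <F> doctor asked <M> the <M> nurse a question"
```

Tagging per word rather than per sentence is what lets two entities in the same sentence carry different gender features, which is exactly the over-generalization failure the paper targets.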
We categorize these topics into two groups as either \"category-specific\" topics or as \"generic\" topics that span multiple product categories. While we can use a supervised learning approach to tag review text for generic topics, it is impossible to use supervised approaches to tag category-specific topics due to the sheer number of possible topics for each category. In this paper, we present an approach to tag each review with several product category-specific tags on Indonesian language product reviews using a semi-supervised approach. We show that our proposed method can work at scale on real product reviews at Tokopedia, a major e-commerce platform in Indonesia. Manual evaluation shows that the proposed method can efficiently generate category-specific product tags.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2007-hkust","url":"https:\/\/aclanthology.org\/2007.iwslt-1.12","title":"HKUST statistical machine translation experiments for IWSLT 2007","abstract":"This paper describes the HKUST experiments in the IWSLT 2007 evaluation campaign on spoken language translation. Our primary objective was to compare the open-source phrase-based statistical machine translation toolkit Moses against Pharaoh. We focused on Chinese to English translation, but we also report results on the Arabic to English, Italian to English, and Japanese to English tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kurohashi-etal-2005-example","url":"https:\/\/aclanthology.org\/2005.iwslt-1.27","title":"Example-based Machine Translation Pursuing Fully Structural NLP","abstract":"We are conducting Example-Based Machine Translation research aiming at the improvement both of structural NLP and machine translation. This paper describes the UTokyo system, which challenged the IWSLT05 Japanese-English translation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poon-2010-markov","url":"https:\/\/aclanthology.org\/N10-4002","title":"Markov Logic in Natural Language Processing: Theory, Algorithms, and Applications","abstract":"Hoifung Poon, University of Washington. Natural languages are characterized by rich relational structures and tight integration with world knowledge. As the field of NLP\/CL moves towards more complex and challenging tasks, there has been increasing interest in applying joint inference to leverage such relations and prior knowledge. Recent work in statistical relational learning (a.k.a. structured prediction) has shown that joint inference can not only substantially improve predictive accuracy, but also enable effective learning with little or no labeled information. Markov logic is the unifying framework for statistical relational learning, and has spawned a series of successful NLP applications, ranging from information extraction to unsupervised semantic parsing.
In this tutorial, I will introduce Markov logic to the NLP community and survey existing NLP applications. The target audience of the tutorial is all NLP researchers, students and practitioners. The audience will gain the ability to efficiently develop state-of-the-art solutions to NLP problems using Markov logic and the Alchemy open-source software.\nThe tutorial will be structured as follows:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"locke-1952-mechanical","url":"https:\/\/aclanthology.org\/1952.earlymt-1.15","title":"Mechanical translation of printed and spoken material","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1952,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jana-goyal-2018-network","url":"https:\/\/aclanthology.org\/L18-1006","title":"Network Features Based Co-hyponymy Detection","abstract":"Distinguishing lexical relations has been a long-term pursuit in the natural language processing (NLP) domain. Recently, in order to detect lexical relations like hypernymy, meronymy, co-hyponymy etc., distributional semantic models are being used extensively in some form or the other. Even though a lot of efforts have been made for detecting the hypernymy relation, the problem of co-hyponymy detection has been rarely investigated. In this paper, we are proposing a novel supervised model where various network measures have been utilized to identify co-hyponymy relation with high accuracy performing better or at par with the state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"funakoshi-etal-2006-group","url":"https:\/\/aclanthology.org\/W06-1411","title":"Group-Based Generation of Referring Expressions","abstract":"Past work on generating referring expressions mainly utilized attributes of objects and binary relations between objects in order to distinguish the target object from others. However, such an approach does not work well when there is no distinctive attribute among objects. To overcome this limitation, this paper proposes a novel generation method utilizing perceptual groups of objects and n-ary relations among them. The evaluation using 18 subjects showed that the proposed method could effectively generate proper referring expressions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jacobson-dalianis-2016-applying","url":"https:\/\/aclanthology.org\/W16-2926","title":"Applying deep learning on electronic health records in Swedish to predict healthcare-associated infections","abstract":"Detecting healthcare-associated infections poses a major challenge in healthcare.
Using natural language processing and machine learning applied to electronic patient records is one approach that has been shown to work. However, the results indicate that there was room for improvement, and therefore we have applied deep learning methods. Specifically, we implemented a network of stacked sparse autoencoders and a network of stacked restricted Boltzmann machines. Our best results were obtained using the stacked restricted Boltzmann machines with a precision of 0.79 and a recall of 0.88.","label_nlp4sg":1,"task":["predict healthcare - associated infections"],"method":["stacked sparse auto encoders","Boltzmann machines"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Mia Kvist and Elda Sparrelid both at Karolinska University Hospital. We would also like to thank Claudia Ehrentraut and Hideyuki Tanushi for their ground breaking work to construct the Stockholm EPR Detect-HAI Corpus.","year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eryigit-2014-itu","url":"https:\/\/aclanthology.org\/E14-2001","title":"ITU Turkish NLP Web Service","abstract":"We present a natural language processing (NLP) platform, namely the \"ITU Turkish NLP Web Service\" by the natural language processing group of Istanbul Technical University. The platform (available at tools.nlp.itu.edu.tr) operates as a SaaS (Software as a Service) and provides the researchers and the students state-of-the-art NLP tools in many layers: preprocessing, morphology, syntax and entity recognition. The users may communicate with the platform via three channels: 1. via a user friendly web interface, 2. by file uploads and 3. by using the provided Web APIs within their own codes for constructing higher level applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I want to thank my students without whose it would be impossible to produce the ITU Turkish NLP pipeline: Thomas Joole, Dilara Torunoglu, Umut Sulubacak and Hasan Kaya. This work is part of a research project supported by TUBITAK 1001(Grant number: 112E276) as an ICT cost action (IC1207) project.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"raleigh-2020-keynote","url":"https:\/\/aclanthology.org\/2020.aespen-1.2","title":"Keynote Abstract: Too soon? The limitations of AI for event data","abstract":"Not all conflict datasets offer equal levels of coverage, depth, usability, and content. A review of the inclusion criteria, methodology, and sourcing of leading publicly available conflict datasets demonstrates that there are significant discrepancies in the output produced by ostensibly similar projects. This keynote will question the presumption of substantial overlap between datasets, and identify a number of important gaps left by deficiencies across core criteria for effective conflict data collection and analysis, including: Data Collection and Oversight: A rigorous human coder is the best way to ensure reliable, consistent, and accurate events that are not false positives. Automated event data projects are still being refined and are not yet at the point where they can be used as accurate representations of reality.
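The jacobson-dalianis-2016 setup above (stacked restricted Boltzmann machines over EHR text) can be approximated with off-the-shelf scikit-learn components. A minimal sketch, assuming binary bag-of-words features and toy data; the layer sizes, hyperparameters and example notes are placeholders, not the paper's.

```python
# Sketch of stacked RBM feature learning feeding a logistic classifier for
# healthcare-associated infection (HAI) detection. Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

notes = ["patient febrile after catheter insertion",
         "routine follow-up, no complaints"]
labels = [1, 0]  # toy HAI labels

model = Pipeline([
    ("bow", CountVectorizer(binary=True)),  # binary features suit Bernoulli RBMs
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(notes, labels)
print(model.predict(["fever and positive culture after surgery"]))
```

Each RBM layer is trained as an unsupervised feature transformer; only the final logistic layer uses the labels, which is the appeal of this architecture when labeled clinical data is scarce.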
It is not appropriate to use these event datasets to present trends, maps, or distributions of violence in a state.","label_nlp4sg":1,"task":["Data Collection"],"method":["limitations"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"zhou-tanner-1997-construction","url":"https:\/\/aclanthology.org\/A97-1045","title":"Construction and Visualization of Key Term Hierarchies","abstract":"This paper presents a prototype system for key term manipulation and visualization in a real-world commercial environment. The system consists of two components. A preprocessor generates a set of key terms from a text dataset which represents a specific topic. The generated key terms are organized in a hierarchical structure and fed into a graphic user interface (GUI). The friendly and interactive GUI toolkit allows the user to visualize the key terms in context and explore the content of the original dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bonneau-maynard-etal-2000-predictive","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/303.pdf","title":"Predictive Performance of Dialog Systems","abstract":"This paper relates some of our experiments on the possibility of predictive performance measures of dialog systems. Experimenting with dialog systems is often a very high-cost procedure due to the necessity to carry out user trials. Obviously it is advantageous when evaluation can be carried out automatically. It would be helpful if for each application we were able to measure the system performance by an objective cost function. This performance function can be used for making predictions about a future evolution of the systems without user interaction. Using the PARADISE paradigm, a performance function derived from the relative contribution of various factors is first obtained for one system developed at LIMSI: PARIS-SITI (kiosk for tourist information retrieval in Paris). A second experiment with PARIS-SITI with a new test population confirms that the most important predictors of user satisfaction are understanding accuracy, recognition accuracy and number of user repetitions. Furthermore, similar spoken dialog features appear as important features for the Arise system (train timetable telephone information system). We also explore different ways of measuring user satisfaction. We then discuss the introduction of subjective factors in the predictive coefficients.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2020-inducing","url":"https:\/\/aclanthology.org\/2020.emnlp-main.451","title":"Inducing Target-Specific Latent Structures for Aspect Sentiment Classification","abstract":"Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment.
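The PARADISE-style performance function described in the bonneau-maynard-etal-2000 abstract is essentially a regression from dialog-level factors to user satisfaction. A minimal sketch with scikit-learn; the factor set and all numbers are illustrative only, not the paper's data.

```python
# Sketch of a PARADISE-style performance function: regress user satisfaction
# on per-dialog quality factors. Values below are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: understanding accuracy, recognition accuracy, #user repetitions
X = np.array([[0.90, 0.92, 1],
              [0.70, 0.80, 4],
              [0.95, 0.97, 0],
              [0.60, 0.75, 6]])
satisfaction = np.array([4.5, 3.0, 4.8, 2.2])  # e.g. questionnaire scores

perf = LinearRegression().fit(X, satisfaction)
print(dict(zip(["understanding", "recognition", "repetitions"], perf.coef_)))
# The fitted weights quantify each factor's contribution, so satisfaction can
# be predicted for a modified system without running new user trials.
```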
Recently, graph convolutional networks based on linguistic dependency trees have been studied for this task. However, the dependency parsing accuracy on commercial product comments or tweets might be unsatisfactory. To tackle this problem, we associate linguistic dependency trees with automatically induced aspect-specific graphs. We propose gating mechanisms to dynamically combine information from word dependency graphs and latent graphs which are learned by self-attention networks. Our model can complement supervised syntactic features with latent semantic dependencies. Experimental results on five benchmarks show the effectiveness of our proposed latent models, giving significantly better results than models without using latent graphs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Yue Zhang is the corresponding author. Thanks to anonymous reviewers for their insightful comments and suggestions. This project is supported by the Westlake-BrightDreams Robotics research grant and a research grant from Rxhui Inc.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2012-online","url":"https:\/\/aclanthology.org\/P12-3025","title":"Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information","abstract":"In this paper, we introduce a framework that identifies online plagiarism by exploiting lexical, syntactic and semantic features that include duplication-gram, reordering and alignment of words, POS and phrase tags, and semantic similarity of sentences. We establish an ensemble framework to combine the predictions of each model. Results demonstrate that our system can not only find considerable amount of real-world online plagiarism cases but also outperforms several state-of-the-art algorithms and commercial software.","label_nlp4sg":1,"task":["Online Plagiarized Detection"],"method":["ensemble","lexical , syntactic and semantic features"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"boriola-paetzold-2020-utfpr","url":"https:\/\/aclanthology.org\/2020.semeval-1.297","title":"UTFPR at SemEval 2020 Task 12: Identifying Offensive Tweets with Lightweight Ensembles","abstract":"Offensive language is a common issue on social media platforms nowadays. In an effort to address this issue, the SemEval 2020 event held the OffensEval 2020 shared task where the participants were challenged to develop systems that identify and classify offensive language in tweets.
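The gating mechanism sketched in the chen-etal-2020 abstract above, which dynamically combines dependency-graph and latent-graph word representations, reduces to a learned convex mixture. A PyTorch sketch under assumed shapes; the single-gate formulation is our illustration, not the paper's exact architecture.

```python
# Minimal sketch of gated combination of two graph-derived word representations.
import torch
import torch.nn as nn

class GatedGraphCombiner(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_dep, h_latent):
        # h_dep, h_latent: (batch, seq_len, dim) node states from the
        # dependency graph and the self-attention-induced latent graph.
        g = torch.sigmoid(self.gate(torch.cat([h_dep, h_latent], dim=-1)))
        return g * h_dep + (1 - g) * h_latent  # per-dimension soft selection

combiner = GatedGraphCombiner(dim=8)
h = combiner(torch.randn(2, 5, 8), torch.randn(2, 5, 8))
print(h.shape)  # torch.Size([2, 5, 8])
```

When the parser is unreliable (as on tweets), the learned gate can shift weight toward the latent graph, which matches the motivation given in the abstract.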
In this paper, we present a system that uses an ensemble model stacking a BOW model and a CNN model, which led us to place 29th in the ranking for English sub-task A.","label_nlp4sg":1,"task":["Identifying Offensive Tweets"],"method":["Ensemble","BOW","CNN"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the Universidade Tecnol\u00f3gica Federal do Paran\u00e1 -Campus Toledo, which provided other important resources that were crucial in the development of this contribution.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"moore-rayson-2017-lancaster","url":"https:\/\/aclanthology.org\/S17-2095","title":"Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines","abstract":"This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Nikolaos Tsileponis (University of Manchester) and Mahmoud El-Haj (Lancaster University) for access to headlines in the corpus of financial news articles collected from Factiva. This research was supported at Lancaster University by an EPSRC PhD studentship.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2017-robust","url":"https:\/\/aclanthology.org\/W17-1315","title":"Robust Dictionary Lookup in Multiple Noisy Orthographies","abstract":"We present the MultiScript Phonetic Search algorithm to address the problem of language learners looking up unfamiliar words that they heard. We apply it to Arabic dictionary lookup with noisy queries done using both the Arabic and Roman scripts. Our algorithm is based on a computational phonetic distance metric that can be optionally machine learned. To benchmark our performance, we created the ArabScribe dataset, containing 10,000 noisy transcriptions of random Arabic dictionary words. Our algorithm outperforms Google Translate's \"did you mean\" feature, as well as the Yamli smart Arabic keyboard.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The creation of the ArabScribe data set was supported by a New York University Abu Dhabi Capstone Project Fund.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weller-seppi-2019-humor","url":"https:\/\/aclanthology.org\/D19-1372","title":"Humor Detection: A Transformer Gets the Last Laugh","abstract":"Much previous work has been done in attempting to identify humor in text.
In this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. We present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using these ratings to determine the level of humor, we then employ a Transformer architecture for its advantages in learning from sentence context. We demonstrate the effectiveness of this approach and show results that are comparable to human performance. We further demonstrate our model's increased capabilities on humor identification problems, such as the previously created datasets for short jokes and puns. These experiments show that this method outperforms all previous work done on these tasks, with an F-measure of 93.1% for the Puns dataset and 98.6% on the Short Jokes dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"evans-weir-1998-structure","url":"https:\/\/aclanthology.org\/C98-1059","title":"A structure-sharing parser for lexicalized grammars","abstract":"In wide-coverage lexicalized grammars many of the elementary structures have substructures in common. This means that in conventional parsing algorithms some of the computation associated with different structures is duplicated. In this paper we describe a precompilation technique for such grammars which allows some of this computation to be shared. In our approach the elementary structures of the grammar are transformed into finite state automata which can be merged and minimised using standard algorithms, and then parsed using an automatonbased parser. We present algorithms for constructing automata from elementary structures, merging and minimising them, and string recognition and parse recovery with the resulting grammar.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"eckart-etal-2016-features","url":"https:\/\/aclanthology.org\/L16-1444","title":"Features for Generic Corpus Querying","abstract":"The availability of large corpora for more and more languages enforces generic querying and standard interfaces. This development is especially relevant in the context of integrated research environments like CLARIN or DARIAH. The paper focuses on several applications and implementation details on the basis of a unified corpus format, a unique POS tag set, and prepared data for word similarities. All described data or applications are already or will be in the near future accessible via well-documented RESTful Web services. 
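For the weller-seppi-2019 approach above, the core ingredient is a Transformer encoder with a binary classification head. A hedged sketch with the HuggingFace transformers library showing only the forward pass; the bert-base-uncased checkpoint is our assumption, and the fine-tuning loop on the Reddit ratings is omitted.

```python
# Sketch of scoring a joke with a pretrained Transformer plus a 2-way head.
# The head is randomly initialized here, so outputs are meaningless until
# the model is fine-tuned on labeled humor data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: not-funny / funny

batch = tok(["Why did the chicken cross the road? To get to the other side."],
            return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(-1))
```

The paper's point is that self-attention over the full sentence context (set-up plus punchline) is what lifts performance past earlier feature-based humor detectors.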
The target group is all kinds of interested persons with varying levels of experience in programming or corpus query languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"onyshkevych-1991-lexicon","url":"https:\/\/aclanthology.org\/W91-0221","title":"Lexicon, Ontology and Text Meaning","abstract":"A computationally relevant theory of lexical semantics must take into consideration both the form and the content of three different static knowledge sources-the lexicon, the ontological domain model and a text meaning representation language. Meanings of lexical units are interpreted in terms of their mappings into the ontology and\/or their contributions to the text meaning representation. We briefly describe one such theory. Its precepts have been used in DIONYSUS, a machine translation system prototype.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the contributions of Christine Defrise, Ingrid Meyer, and Lynn Carlson. ","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bilgram-keson-1998-construction","url":"https:\/\/aclanthology.org\/W98-1614","title":"The Construction of a Tagged Danish Corpus","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kwok-grunfeld-1994-learning","url":"https:\/\/aclanthology.org\/H94-1071","title":"Learning from Relevant Documents in Large Scale Routing Retrieval","abstract":"The normal practice of selecting relevant documents for training routing queries is to either use all relevants or the 'best n' of them after a (retrieval) ranking operation with respect to each query. Using all relevants can introduce noise and ambiguities in training because documents can be long with many irrelevant portions. Using only the 'best n' risks leaving out documents that do not resemble a query. Based on a method of segmenting documents into more uniform size subdocuments, a better approach is to use the top ranked subdocument of every relevant. An alternative selection strategy is based on document properties without ranking. We found experimentally that short relevant documents are the quality items for training. Beginning portions of longer relevants are also useful.
Using both types provides a strategy that is effective and efficient.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially supported by a grant from ARPA via the TREC program.","year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hasan-ney-2005-clustered","url":"https:\/\/aclanthology.org\/2005.eamt-1.17","title":"Clustered language models based on regular expressions for SMT","abstract":"In this paper, we present a language model based on clusters obtained by applying regular expressions to the training data and, thus, discriminating several different sentence types as, e.g. interrogatives, imperatives or enumerations. The main motivation lies in the observation that different sentence types also underlie a different syntactic structure, and thus yield a varying distribution of n-grams reflecting their word order. We show that this assumption is valid by applying the models to English-Spanish bilingual corpora and obtaining good perplexity reductions of approximately 25%. In addition, we perform an n-best rescoring experiment and show a relative improvement of 4-5% in word error rate. The models can be easily adapted to other translation tasks and do not need complicated training methods, thus being a valuable alternative for on-demand rescoring of sentence hypotheses such as they occur in the CAT framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partly funded by the European Union under the RTD project TransType2 (IST-2001-32091), the integrated project TC-STAR -Technology and Corpora for Speech to Speech Translation - (IST-2002-FP6-506738) and by the Deutsche Forschungsgemeinschaft (DFG) under the project \"Statistische Text\u00fcbersetzung\" (Ne572\/5).","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vilares-etal-2016-one","url":"https:\/\/aclanthology.org\/P16-2069","title":"One model, two languages: training bilingual parsers with harmonized treebanks","abstract":"We introduce an approach to train lexicalized parsers using bilingual corpora obtained by merging harmonized treebanks of different languages, producing parsers that can analyze sentences in either of the learned languages, or even sentences that mix both. We test the approach on the Universal Dependency Treebanks, training with MaltParser and MaltOptimizer. The results show that these bilingual parsers are more than competitive, as most combinations not only preserve accuracy, but some even achieve significant improvements over the corresponding monolingual parsers. Preliminary experiments also show the approach to be promising on texts with code-switching and when more languages are added.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Ministerio de Econom\u00eda y Competitividad (FFI2014-51978-C2). David Vilares is funded by the Ministerio de Educaci\u00f3n, Cultura y Deporte (FPU13\/01180). Carlos G\u00f3mez-Rodr\u00edguez is funded by an Oportunius program grant (Xunta de Galicia). We thank Marcos Garcia for helping with the codeswitching treebank. 
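The regular-expression sentence clustering behind the hasan-ney-2005 clustered language models above can be sketched as a small rule cascade; each resulting cluster (interrogative, imperative, enumeration, declarative) would then receive its own n-gram model. The patterns below are illustrative guesses, not the paper's expressions.

```python
# Sketch of regex-based sentence-type clustering for clustered LMs.
import re

RULES = [
    ("interrogative", re.compile(
        r"\?\s*$|^(who|what|when|where|why|how|do|does|did|is|are|can)\b", re.I)),
    ("enumeration", re.compile(r"\w+(,\s+\w+)+,?\s+(and|or)\s+\w+")),
    ("imperative", re.compile(
        r"^(please\s+)?(open|close|turn|give|take|show|tell|find)\b", re.I)),
]

def sentence_cluster(sentence: str) -> str:
    for label, pattern in RULES:
        if pattern.search(sentence):
            return label
    return "declarative"  # fallback cluster

for s in ["Where is the station?", "Open the window, please.",
          "We bought apples, pears, and plums."]:
    print(sentence_cluster(s), "->", s)
```

Because interrogatives, imperatives and enumerations each have distinct word order, separating their n-gram statistics is what yields the perplexity reductions the abstract reports.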
We also thank the reviewers for their comments and suggestions.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koulierakis-etal-2020-recognition","url":"https:\/\/aclanthology.org\/2020.signlang-1.20","title":"Recognition of Static Features in Sign Language Using Key-Points","abstract":"In this paper we report on a research effort focusing on recognition of static features of sign formation in single sign videos. Three sequential models have been developed for handshape, palm orientation and location of sign formation respectively, which make use of key-points extracted via OpenPose software. The models have been applied to a Danish and a Greek Sign Language dataset, providing results around 96%. Moreover, during the reported research, a method has been developed for identifying the time-frame of real signing in the video, which makes it possible to ignore transition frames during sign recognition processing.","label_nlp4sg":1,"task":["Recognition of Static Features in Sign Language"],"method":["sequential models"],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schiehlen-2000-granularity","url":"https:\/\/aclanthology.org\/C00-2103","title":"Granularity Effects in Tense Translation","abstract":"One of the daunting problems in machine translation (MT) is the mapping of tense. The paper singles out the problem of translating German present tense into English. This problem seems particularly instructive as its solution requires calculation of aspect as well as determination of the temporal location of events with respect to the time of speech. We present a disambiguation algorithm which makes use of granularity calculations to establish the scopal order of temporal adverbial phrases. The described algorithm has been implemented and is running in the Verbmobil system. The paper is organized as follows. In sections 2 through 4 we present the problem and discuss the linguistic factors involved, always keeping an eye on their exploitation for disambiguation. Sections 5 and 6 are devoted to an abstract definition of temporal granularity and a discussion of granularity effects on scope resolution. In section 7 the actual disambiguation algorithm is presented, while section 8 describes its performance on the Verbmobil test data. A summary closes the paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pan-wang-2021-hyperbolic-hierarchy","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.251","title":"Hyperbolic Hierarchy-Aware Knowledge Graph Embedding for Link Prediction","abstract":"Knowledge graph embedding (KGE) using low-dimensional representations to predict missing information is widely applied in knowledge completion. Existing embedding methods are mostly built on Euclidean space, which makes it difficult to handle hierarchical structures. Hyperbolic embedding methods have shown the promise of high fidelity and concise representation for hierarchical data.
However, the logical patterns in knowledge graphs are not considered well in these methods. To address this problem, we propose a novel KGE model with extended Poincar\u00e9 Ball and polar coordinate system to capture hierarchical structures. We use the tangent space and exponential transformation to initialize and map the corresponding vectors to the Poincar\u00e9 Ball in hyperbolic space. To solve the boundary conditions, the boundary is stretched and zoomed by expanding the modulus length in the Poincar\u00e9 Ball. We optimize our model using polar coordinate and changing operators in the extended Poincar\u00e9 Ball. Experiments achieve new state-of-the-art results on part of link prediction tasks, which demonstrates the effectiveness of our method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Xingchen Zhou for his suggestions on this paper, and the anonymous reviewers for their insightful comments. The work is supported by All-Army Common Information System Equipment Pre-Research Project (No. 31514020501, No. 31514020503).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-etal-2009-joint","url":"https:\/\/aclanthology.org\/P09-1065","title":"Joint Decoding with Multiple Translation Models","abstract":"Current SMT systems usually decode with single translation models and cannot benefit from the strengths of other models in decoding phase. We instead propose joint decoding, a method that combines multiple translation models in one decoder. Our joint decoder draws connections among multiple models by integrating the translation hypergraphs they produce individually. Therefore, one model can share translations and even derivations with other models. Comparable to the state-of-the-art system combination technique, joint decoding achieves an absolute improvement of 1.5 BLEU points over individual decoding.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors were supported by National Natural Science Foundation of China, Contracts 60873167 and 60736014, and 863 State Key Project No. 2006AA010108. Part of this work was done while Yang Liu was visiting the SMT group led by Stephan Vogel at CMU. We thank the anonymous reviewers for their insightful comments. We are also grateful to Yajuan L\u00fc, Liang Huang, Nguyen Bach, Andreas Zollmann, Vamshi Ambati, and Kevin Gimpel for their helpful feedback.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grothe-etal-2008-comparative","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/249_paper.pdf","title":"A Comparative Study on Language Identification Methods","abstract":"In this paper we present two experiments conducted for comparison of different language identification algorithms. Short words-, frequent words- and n-gram-based approaches are considered and combined with the Ad-Hoc Ranking classification method. The language identification process can be subdivided into two main steps: First a document model is generated for the document and a language model for the language; second the language of the document is determined on the basis of the language model and is added to the document as additional information.
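The tangent-space initialization in the pan-wang-2021 abstract above relies on the exponential map at the origin of the Poincare ball. A numpy sketch using the standard Poincare-ball formula exp_0(v) = tanh(sqrt(c)||v||) v / (sqrt(c)||v||); the curvature c is a free parameter here, and the paper's extended-ball and polar-coordinate modifications are not reproduced.

```python
# Sketch: map a Euclidean (tangent) vector into the Poincare ball via the
# exponential map at the origin. Standard formula, not the paper's variant.
import numpy as np

def expmap0(v: np.ndarray, c: float = 1.0) -> np.ndarray:
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

x = expmap0(np.array([3.0, 4.0]))
print(x, np.linalg.norm(x))  # norm < 1: the image lies inside the unit ball
```

Because tanh saturates, large tangent vectors land ever closer to the boundary, which is why hyperbolic embeddings can pack deep hierarchies near the ball's rim.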
In this work we present our evaluation results and discuss the importance of a dynamic value for the out-of-place measure.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"minematsu-etal-2002-english","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/155.pdf","title":"English Speech Database Read by Japanese Learners for CALL System Development","abstract":"With the help of recent advances in speech processing techniques, we can see various kinds of practical speech applications in both laboratories and the real world. One of the major applications in Japan is CALL (Computer Assisted Language Learning) systems. It is well-known that most of the recent speech technologies are based upon statistical methods, which require a large amount of speech data. Although we can find many speech corpora available from distribution sites such as Linguistic Data Consortium, European Language Resources Association, and so on, the number of speech corpora built especially for CALL system development is very small. In this paper, we firstly introduce a Japanese national project of \"Advanced Utilization of Multimedia to Promote Higher Educational Reform,\" under which some research groups are currently developing CALL systems. One of the main objectives of the project is to construct an English speech database read by Japanese students for CALL system development. This paper describes specification of the database and strategies adopted to select speakers and record their sentence\/word utterances in addition to preliminary discussions and investigations done before the database development. Further, by using the new database and WSJ database, corpus-based analysis and comparison between Japanese English and American English is done in view of the entire phonemic system of English. Here, tree diagrams of the two kinds of English are drawn through their HMM sets. Results show many interesting characteristics of Japanese English.","label_nlp4sg":1,"task":["speech applications"],"method":["English speech database"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"perez-miguel-etal-2018-biomedical","url":"https:\/\/aclanthology.org\/L18-1322","title":"Biomedical term normalization of EHRs with UMLS","abstract":"This paper presents a novel prototype for biomedical term normalization of electronic health record excerpts with the Unified Medical Language System (UMLS) Metathesaurus, a large, multilingual compendium of biomedical and health-related terminologies. Despite the prototype being multilingual and cross-lingual by design, we first focus on processing clinical text in Spanish because there is no existing tool for this language and for this specific purpose. The tool is based on Apache Lucene TM to index the Metathesaurus and generate mapping candidates from input text. It uses the IXA pipeline for basic language processing and resolves lexical ambiguities with the UKB toolkit. It has been evaluated by measuring its agreement with MetaMap-a mature software to discover UMLS concepts in English texts-in two English-Spanish parallel corpora. 
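The Ad-Hoc Ranking comparison in the grothe-etal-2008 abstract above is the classic character n-gram profile method with an out-of-place measure. A minimal sketch; the profile size, n-gram order, fixed penalty for unseen n-grams, and toy training strings are our assumptions.

```python
# Sketch of n-gram language identification with the out-of-place measure:
# compare each n-gram's rank in the document profile against its rank in
# each language profile, and pick the language with the smallest total.
from collections import Counter

def profile(text: str, n: int = 3, size: int = 300):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(size)]

def out_of_place(doc_profile, lang_profile):
    rank = {g: r for r, g in enumerate(lang_profile)}
    max_penalty = len(lang_profile)  # penalty for n-grams missing from the model
    return sum(abs(r - rank[g]) if g in rank else max_penalty
               for r, g in enumerate(doc_profile))

langs = {"en": profile("the quick brown fox jumps over the lazy dog and then some"),
         "de": profile("der schnelle braune fuchs springt ueber den faulen hund")}
doc = profile("the dog jumps over the fox")
print(min(langs, key=lambda L: out_of_place(doc, langs[L])))  # expected: 'en'
```

The "dynamic value" discussed in the abstract concerns exactly the max_penalty constant above: a fixed penalty over- or under-weights unseen n-grams as profile sizes vary.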
In addition, we present a web-based interface for the tool.","label_nlp4sg":1,"task":["Biomedical term normalization"],"method":["multilingual compendium","web - based interface"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work has been funded by the Department of Economic Development and Infrastructure of the Basque Government under the project BERBAOLA (KK-2017\/00043), and by the Spanish Ministry of Economy and Competitiveness (MINECO\/FEDER, UE) under the projects CROSSTEXT (TIN2015-72646-EXP) and TUNER (TIN2015-65308-C5-1-R).","year":2018,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"grover-etal-2012-aspects","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/226_Paper.pdf","title":"Aspects of a Legal Framework for Language Resource Management","abstract":"The management of language resources requires several legal aspects to be taken into consideration. In this paper we discuss a number of these aspects which lead towards the formation of a legal framework for a language resources management agency. The legal framework entails examination of the agency's stakeholders and the relationships that exist amongst them, the privacy and intellectual property rights that exist around the language resources offered by the agency, and the external (e.g. laws, acts, policies) and internal legal instruments (e.g. end user licence agreements) required for the agency's operation.","label_nlp4sg":1,"task":["Language Resource Management"],"method":["Legal Framework"],"goal1":"Industry, Innovation and Infrastructure","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"The authors would like to acknowledge the indirect inputs of various members of the DAC's HLT Expert Panel, as well as the financial contribution of DAC towards this investigation.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"cheng-etal-2017-generative","url":"https:\/\/aclanthology.org\/P17-2019","title":"A Generative Parser with a Discriminative Recognition Algorithm","abstract":"Generative models defining joint distributions over parse trees and sentences are useful for parsing and language modeling, but impose restrictions on the scope of features and are often outperformed by discriminative models. We propose a framework for parsing and language modeling which marries a generative model with a discriminative recognition model in an encoder-decoder setting. We provide interpretations of the framework based on expectation maximization and variational inference, and show that it enables parsing and language modeling within a single implementation. On the English Penn Treebank, our framework obtains competitive performance on constituency parsing while matching the state-of-the-art single-model language modeling score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments We thank three anonymous reviewers and members of the ILCC for valuable feedback, and Muhua Zhu and James Cross for help with data preparation.
The support of the European Research Council under award number 681760 \"Translating Multiple Modalities into Text\" is gratefully acknowledged.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"alfter-volodina-2018-towards","url":"https:\/\/aclanthology.org\/W18-0508","title":"Towards Single Word Lexical Complexity Prediction","abstract":"In this paper we present work-in-progress where we investigate the usefulness of previously created word lists to the task of single-word lexical complexity analysis and prediction of the complexity level for learners of Swedish as a second language. The word lists used map each word to a single CEFR level, and the task consists of predicting CEFR levels for unseen words. In contrast to previous work on word-level lexical complexity, we experiment with topics as additional features and show that linking words to topics significantly increases accuracy of classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has in part been funded by an infrastructure grant from the Swedish Research Council to Swedish National Language Bank. We would also like to thank the anonymous reviewers for their constructive feedback.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cohen-etal-2012-domain","url":"https:\/\/aclanthology.org\/W12-3308","title":"Domain Adaptation of a Dependency Parser with a Class-Class Selectional Preference Model","abstract":"When porting parsers to a new domain, many of the errors are related to wrong attachment of out-of-vocabulary words. Since there is no available annotated data to learn the attachment preferences of the target domain words, we attack this problem using a model of selectional preferences based on domain-specific word classes. Our method uses Latent Dirichlet Allocations (LDA) to learn a domain-specific Selectional Preference model in the target domain using un-annotated data. The model provides features that model the affinities among pairs of words in the domain. To incorporate these new features in the parsing model, we adopt the co-training approach and retrain the parser with the selectional preferences features. We apply this method for adapting Easy First, a fast nondirectional parser trained on WSJ, to the biomedical domain (Genia Treebank). The Selectional Preference features reduce error by 4.5% over the co-training baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"coppersmith-kelly-2014-dynamic","url":"https:\/\/aclanthology.org\/W14-3103","title":"Dynamic Wordclouds and Vennclouds for Exploratory Data Analysis","abstract":"The wordcloud is a ubiquitous visualization of human language, though it falls short when used for exploratory data analysis. To address some of these shortcomings, we give the viewer explicit control over the creation of the wordcloud, allowing them to interact with it in real time: a dynamic wordcloud. This allows iterative adaptation of the visualization to the data and inference task at hand.
We next present a principled approach to visualization which highlights the similarities and differences between two sets of documents-a Venncloud. We make all the visualization code (primarily JavaScript) freely available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Carey Priebe for insightful discussions on exploratory data analysis,","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"purver-2002-processing","url":"https:\/\/aclanthology.org\/W02-0222","title":"Processing Unknown Words in a Dialogue System","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"qian-etal-2019-graphie","url":"https:\/\/aclanthology.org\/N19-1082","title":"GraphIE: A Graph-Based Framework for Information Extraction","abstract":"Most modern Information Extraction (IE) systems are implemented as sequential taggers and only model local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing a broad set of dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions, generating a richer representation that can be exploited to improve word-level predictions. Evaluation on three different tasks-namely textual, social media and visual information extraction-shows that GraphIE consistently outperforms the state-of-the-art sequence tagging model by a significant margin.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the MIT NLP group and the reviewers for their helpful comments. This work is supported by MIT-IBM Watson AI Lab. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"deng-gao-2007-guiding","url":"https:\/\/aclanthology.org\/P07-1001","title":"Guiding Statistical Word Alignment Models With Prior Knowledge","abstract":"We present a general framework to incorporate prior knowledge such as heuristics or linguistic features in statistical generative word alignment models. Prior knowledge plays a role of probabilistic soft constraints between bilingual word pairs that shall be used to guide word alignment model training.
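The graph convolution at the core of GraphIE (qian-etal-2019 above) propagates node states along a normalized adjacency matrix over textual units. A single-layer numpy sketch under assumed shapes; GraphIE itself combines this with recurrent encoders and a word-level tagger.

```python
# Sketch of one graph-convolution step over textual-unit nodes.
import numpy as np

def graph_conv(H, A, W):
    """H: (n_nodes, d_in) node states; A: (n_nodes, n_nodes) adjacency;
    W: (d_in, d_out) weights. Returns ReLU(D^-1 (A + I) H W)."""
    A = A + np.eye(A.shape[0])            # self-loops keep each node's own signal
    A = A / A.sum(axis=1, keepdims=True)  # row-normalize so neighbors are averaged
    return np.maximum(A @ H @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))               # 4 textual units, 8-dim encodings
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # a chain of non-local dependencies
print(graph_conv(H, A, rng.normal(size=(8, 8))).shape)  # (4, 8)
```

After propagation, each node's representation mixes in information from connected but non-adjacent text, which is exactly the non-local context a sequential tagger misses.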
We investigate knowledge that can be derived automatically from entropy principle and bilingual latent semantic analysis and show how they can be applied to improve translation performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgement We thank Mohamed Afify for discussions and the anonymous reviewers for suggestions.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mayfield-etal-2017-language","url":"https:\/\/aclanthology.org\/W17-1414","title":"Language-Independent Named Entity Analysis Using Parallel Projection and Rule-Based Disambiguation","abstract":"The 2017 shared task at the Balto-Slavic NLP workshop requires identifying coarse-grained named entities in seven languages, identifying each entity's base form, and clustering name mentions across the multilingual set of documents. The fact that no training data is provided to systems for building supervised classifiers further adds to the complexity. To complete the task we first use publicly available parallel texts to project named entity recognition capability from English to each evaluation language. We ignore entirely the subtask of identifying non-inflected forms of names. Finally, we create cross-document entity identifiers by clustering named mentions using a procedure-based approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-16-C-0102.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"durco-windhouwer-2014-cmd","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/156_Paper.pdf","title":"The CMD Cloud","abstract":"The CLARIN Component Metadata Infrastructure (CMDI) established means for flexible resource descriptions for the domain of language resources with sound provisions for semantic interoperability weaved deeply into the meta model and the infrastructure. Based on this solid grounding, the infrastructure accommodates a growing collection of metadata records. In this paper, we give a short overview of the current status in the CMD data domain on the schema and instance level and harness the installed mechanisms for semantic interoperability to explore the similarity relations between individual profiles\/schemas. We propose a method to use the semantic links shared among the profiles to generate\/compile a similarity graph. This information is further rendered in an interactive graph viewer: the SMC Browser. The resulting interactive graph offers an intuitive view on the complex interrelations of the discussed dataset revealing clusters of more similar profiles. 
This information is useful both for metadata modellers and for metadata curation tasks, as well as for a general audience seeking a 'big picture' of the complex CMD data domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pal-etal-2010-ju","url":"https:\/\/aclanthology.org\/S10-1045","title":"JU: A Supervised Approach to Identify Semantic Relations from Paired Nominals","abstract":"This article presents the experiments carried out at Jadavpur University as part of the participation in Multi-Way Classification of Semantic Relations between Pairs of Nominals in the SemEval 2010 exercise. Separate rules for each type of the relations are identified in the baseline model based on the verbs and prepositions present in the segment between each pair of nominals. Inclusion of WordNet features associated with the paired nominals plays an important role in distinguishing the relations from each other. The Conditional Random Field (CRF) based machine-learning framework is adopted for classifying the pair of nominals. Application of dependency relations, Named Entities (NE) and various types of WordNet features along with several combinations of these features helps to improve the performance of the system. Error analysis suggests that the performance can be improved by applying suitable strategies to differentiate each paired nominal in an already identified relation. Evaluation results give an overall macro-averaged F1 score of 52.16%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"richardson-etal-1998-mindnet","url":"https:\/\/aclanthology.org\/C98-2175","title":"MindNet: acquiring and structuring semantic information from text","abstract":"As a lexical knowledge base constructed automatically from the definitions and example sentences in two machine-readable dictionaries (MRDs), MindNet embodies several features that distinguish it from prior work with MRDs. It is, however, more than this static resource alone. MindNet represents a general methodology for acquiring, structuring, accessing, and exploiting semantic information from natural language text. This paper provides an overview of the distinguishing characteristics of MindNet, the steps involved in its creation, and its extension beyond dictionary text.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liu-2021-crosslinguistic","url":"https:\/\/aclanthology.org\/2021.scil-1.24","title":"The Crosslinguistic Relationship between Ordering Flexibility and Dependency Length Minimization: A Data-Driven Approach","abstract":"This paper asks whether syntactic constructions with more flexible constituent orderings have a weaker tendency for dependency length minimization (DLM).
For test cases, I use verb phrases in which the head verb has one direct object noun phrase (NP) dependent and exactly one adpositional phrase (PP) dependent adjacent to each other on the same side (e.g. Kobe praised [NP his oldest daughter] [PP from the stands]). Data from multilingual corpora of 36 languages show that when combining all these languages together, there is no consistent relationship between flexibility and DLM. When looking at specific ordering domains, on average there appears to be a weaker preference for shorter dependencies in constructions with more flexibility mostly in preverbal contexts, while no correlation exists between the two in postverbal domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wagner-foster-2021-revisiting","url":"https:\/\/aclanthology.org\/2021.emnlp-main.745","title":"Revisiting Tri-training of Dependency Parsers","abstract":"We compare two orthogonal semi-supervised learning techniques, namely tri-training and pretrained word embeddings, in the task of dependency parsing. We explore language-specific FastText and ELMo embeddings and multilingual BERT embeddings. We focus on a low resource scenario as semi-supervised learning can be expected to have the most impact here. Based on treebank size and available ELMo models, we select Hungarian, Uyghur (a zero-shot language for mBERT) and Vietnamese. Furthermore, we include English in a simulated low-resource setting. We find that pretrained word embeddings make more effective use of unlabelled data than tri-training but that the two approaches can be successfully combined.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by Science Foundation Ireland (SFI) through the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund, and through the SFI Frontiers for the Future programme (19\/FFP\/6942).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sai-etal-2021-perturbation","url":"https:\/\/aclanthology.org\/2021.emnlp-main.575","title":"Perturbation CheckLists for Evaluating NLG Evaluation Metrics","abstract":"Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment of multiple desirable criteria, e.g., fluency, coherency, coverage, relevance, adequacy, overall quality, etc. Across existing datasets for 6 NLG tasks, we observe that the human evaluation scores on these multiple criteria are often not correlated. For example, there is a very low correlation between human scores on fluency and data coverage for the task of structured data to text generation. This suggests that the current recipe of proposing new automatic evaluation metrics for NLG by showing that they correlate well with scores assigned by humans for a single criterion (overall quality) alone is inadequate. 
Indeed, our extensive study involving 25 automatic evaluation metrics across 6 different tasks and 18 different evaluation criteria shows that there is no single metric which correlates well with human scores on all desirable criteria, for most NLG tasks. Given this situation, we propose CheckLists for better design and evaluation of automatic metrics. We design templates which target a specific criterion (e.g., coverage) and perturb the output such that the quality gets affected only along this specific criterion (e.g., the coverage drops). We show that existing evaluation metrics are not robust against even such simple perturbations and disagree with scores assigned by humans to the perturbed output. The proposed templates thus allow for a fine-grained assessment of automatic evaluation metrics exposing their limitations and will facilitate better design, analysis and evaluation of such metrics. The criteria considered per task are as follows. Machine Translation: Adequacy (the generated translation should adequately represent all the information present in the reference). Question Generation: Relevance (is the question related to the source material it is based upon?); Answerability (is the generated question answerable given the context?). Abstractive Summarization: Informativeness (the summary should convey the key points of the text); Non-redundancy (the summary should not repeat any points, and ideally have maximal information coverage within the limited text length); Referential clarity (any intra-sentence or cross-sentence references in the summary should be unambiguous and within the scope of the summary); Focus (the summary needs to have a focus and all the sentences need to contain information related to this focal point); Structure and Coherence (the summary should be a well-organized and coherent body of information). Dialogue Generation: Making sense (does the bot say things that don't make sense?); Engagingness (is the dialogue agent enjoyable to talk to?); Interestingness (did you find the bot interesting to talk to?); Inquisitiveness (does the bot ask a good amount of questions?); Listening (does the bot pay attention to what you say?); Avoiding Repetition (does the bot repeat itself, either within or across utterances?); Humanness (is the conversation with a person or a bot?). Image Captioning: Relevance (the caption should be specific and related to the image); Thoroughness (the caption should adequately describe the image). Data to Text Generation: Data Coverage (does the text include descriptions of all predicates presented in the data?); Relevance (does the text describe only such predicates which are found in the data?); Correctness (when describing predicates which are found in the data, does the text mention the correct objects and adequately introduce the subject for this specific predicate?); Text Structure (is the text grammatical, well-structured, written in acceptable English?). All above tasks: Fluency (how fluent is the generated text?).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the annotators for helping us evaluate and annotate the perturbations. We thank the anonymous EMNLP-21 reviewers whose comments and feedback helped enhance the paper. We thank the Google India Ph.D. Fellowship Program and the Prime Minister's Fellowship Scheme for Doctoral Research for supporting Ananya Sai and the Samsung IITM-Pravartak Undergraduate Fellowship for supporting Dev Yashpal Sheth. 
Finally, we thank the Robert Bosch Centre for Data Science and AI for supporting this work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gorrell-etal-2016-identifying","url":"https:\/\/aclanthology.org\/W16-2927","title":"Identifying First Episodes of Psychosis in Psychiatric Patient Records using Machine Learning","abstract":"Natural language processing is being pressed into use to facilitate the selection of cases for medical research in electronic health record databases, though study inclusion criteria may be complex, and the linguistic cues indicating eligibility may be subtle. Finding cases of first episode psychosis raised a number of problems for automated approaches, providing an opportunity to explore how machine learning technologies might be used to overcome them. A system was delivered that achieved an AUC of 0.85, enabling 95% of relevant cases to be identified whilst halving the work required in manually reviewing cases. The techniques that made this possible are presented.","label_nlp4sg":1,"task":["Identifying First Episodes of Psychosis"],"method":["Machine Learning"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"del-tredici-fernandez-2018-road","url":"https:\/\/aclanthology.org\/C18-1135","title":"The Road to Success: Assessing the Fate of Linguistic Innovations in Online Communities","abstract":"We investigate the birth and diffusion of lexical innovations in a large dataset of online social communities. We build on sociolinguistic theories and focus on the relation between the spread of a novel term and the social role of the individuals who use it, uncovering characteristics of innovators and adopters. Finally, we perform a prediction task that allows us to anticipate whether an innovation will successfully spread within a community.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has received funding from the Netherlands Organisation for Scientific Research (NWO) under VIDI grant nr. 276-89-008, Asymmetry in Conversation. We thank the anonymous reviewers for their comments as well as the area chairs and PC chairs of COLING 2018.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"elkaref-bohnet-2019-recursive","url":"https:\/\/aclanthology.org\/W19-8012","title":"Recursive LSTM Tree Representation for Arc-Standard Transition-Based Dependency Parsing","abstract":"We propose a method to represent dependency trees as dense vectors through the recursive application of Long Short-Term Memory networks to build Recursive LSTM Trees (RLTs). We show that the dense vectors produced by Recursive LSTM Trees replace the need for structural features by using them as feature vectors for a greedy Arc-Standard transition-based dependency parser. We also show that RLTs have the ability to incorporate useful information from the bi-LSTM contextualized representation used by Cross and Huang (2016) and Kiperwasser and Goldberg (2016b). 
The resulting dense vectors are able to express both structural information relating to the dependency tree and sequential information relating to the position in the sentence. The resulting parser only requires the vector representations of the top two items on the parser stack, which is, to the best of our knowledge, the smallest feature set ever published for Arc-Standard parsers, while still managing to achieve competitive results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was part funded by the STFC Hartree Centres Innovation Return on Research programme, funded by the Department for Business, Energy and Industrial Strategy.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"farag-etal-2020-analyzing","url":"https:\/\/aclanthology.org\/2020.codi-1.11","title":"Analyzing Neural Discourse Coherence Models","abstract":"In this work, we systematically investigate how well current models of coherence can capture aspects of text implicated in discourse organisation. We devise two datasets of various linguistic alterations that undermine coherence and test model sensitivity to changes in syntax and semantics. We furthermore probe discourse embedding space and examine the knowledge that is encoded in representations of coherence. We hope this study will provide further insight into how to frame the task and improve models of coherence assessment. Finally, we make our datasets publicly available as a resource for researchers to use to test discourse coherence models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tang-etal-2021-high","url":"https:\/\/aclanthology.org\/2021.findings-acl.163","title":"High-Quality Dialogue Diversification by Intermittent Short Extension Ensembles","abstract":"Many task-oriented dialogue systems use deep reinforcement learning (DRL) to learn policies that respond to the user appropriately and complete the tasks successfully. Training DRL agents with diverse dialogue trajectories prepares them well for rare user requests and unseen situations. One effective diversification method is to let the agent interact with a diverse set of learned user models. However, trajectories created by these artificial user models may contain generation errors, which can quickly propagate into the agent's policy. It is thus important to control the quality of the diversification and resist the noise. In this paper, we propose a novel dialogue diversification method for task-oriented dialogue systems trained in simulators. Our method, Intermittent Short Extension Ensemble (I-SEE), constrains the intensity to interact with an ensemble of diverse user models and effectively controls the quality of the diversification. Evaluations on the Multiwoz dataset show that I-SEE successfully boosts the performance of several state-of-the-art DRL dialogue agents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the U.S. National Science Foundation Grant no. IIS-145374. 
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect those of the sponsor.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yusoff-1990-generation","url":"https:\/\/aclanthology.org\/C90-2073","title":"Generation of Synthesis Programs in Robra (Ariane) From String-Tree Correspondence Grammars (Or a Strategy for Synthesis in Machine Translation)","abstract":"Specialised Languages for Linguistic Programming, or SLLPs (like ROBRA, Q-systems, Augmented Transition Networks, etc.), in Machine Translation (MT) systems may be considerably efficient in terms of processing power, but their procedural nature makes it quite difficult for linguists to describe natural languages in a declarative and natural way. Furthermore, the effort can be quite wasteful in the sense that different grammars will have to be written for analysis and for generation, as well as for different MT systems. On the other hand, purely linguistic formalisms (like those for Government and Binding, Lexical Functional Grammars, General Phrase Structure Grammars, etc.) may prove to be adequate for natural language description, but it is not quite clear how they can be adapted for the purposes of MT in a natural way. Besides, MT-specific problems, like appositions, ambiguities, etc., have yet to find their place in linguistics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sugiyama-etal-2013-open","url":"https:\/\/aclanthology.org\/W13-4051","title":"Open-domain Utterance Generation for Conversational Dialogue Systems using Web-scale Dependency Structures","abstract":"Even though open-domain conversational dialogue systems are required in many fields, their development is complicated because of the flexibility and variety of user utterances. To address this flexibility, previous research on conversational dialogue systems has selected system utterances from web articles based on surface cohesion and shallow semantic coherence; however, the generated utterances sometimes contain irrelevant sentences with respect to the input user utterance. We propose a template-based approach that fills templates with the most salient words in a user utterance and with related words that are extracted using web-scale dependency structures gathered from Twitter. Our open-domain conversational dialogue system outperforms retrieval-based conventional systems in chat experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"denecke-tsukada-2005-instance","url":"https:\/\/aclanthology.org\/I05-1043","title":"Instance-Based Generation for Interactive Restricted Domain Question Answering Systems","abstract":"One important component of interactive systems is the generation component. While template-based generation is appropriate in many cases (for example, task-oriented spoken dialogue systems), interactive question answering systems require a more sophisticated approach. 
In this paper, we propose and compare two example-based methods for the generation of information-seeking questions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge the help of Takuya Suzuki with the implementation. Jun Suzuki provided the implementation of the HDAG kernel. We would like to thank Hideki Isozaki and our colleagues at NTT CS labs for discussion and encouragement.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nongmeikapam-etal-2012-manipuri","url":"https:\/\/aclanthology.org\/W12-5008","title":"Manipuri Morpheme Identification","abstract":"The morphemes of a Manipuri word are the real bottleneck for any Manipuri Natural Language Processing (NLP) work. It is one of the Indian Scheduled Languages, with little advancement so far in terms of NLP applications. This is because the language is highly agglutinative. Segmenting a word and identifying its morphemes become necessary before proceeding with any Manipuri NLP application. A highly inflected word may sometimes consist of ten or more affixes. These affixes are the morphemes which change the semantic and grammatical structure. So the inflexion in a word plays an important role. Words are segmented into syllables, which are examined to extract morphemes. This work is implemented on Manipuri words written in the Meitei Mayek (script). This is because the syllable formations are distinct compared to Manipuri written in the Bengali script. The combination of 2-gram or bi-gram and the Standard Deviation technique is used for the identification of the morphemes. This system gives an output with a recall of 59.80%, a precision of 83.02% and an f-score of 69.52%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"These could be a boon for this highly agglutinative language since the complexity of machine translation could be solved up to a great extent. Other statistical methods can also be implemented to improve the output.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pao-etal-2005-emotion","url":"https:\/\/aclanthology.org\/O05-1013","title":"Emotion Recognition and Evaluation of Mandarin Speech Using Weighted D-KNN Classification","abstract":"In this paper, we propose a weighted discrete K-nearest neighbor (weighted D-KNN) classification algorithm for detecting and evaluating emotion from Mandarin speech. In the emotion recognition experiments, the Mandarin emotional speech database used contains five basic emotions, including anger, happiness, sadness, boredom and neutral, and the extracted acoustic features are Mel-Frequency Cepstral Coefficients (MFCC) and Linear Prediction Cepstral Coefficients (LPCC). The results reveal that the highest recognition rate is 79.55%, obtained with weighted D-KNN optimized based on the Fibonacci series. Besides, we design an emotion radar chart which can present the intensity of each emotion in our emotion evaluation system. 
Based on our emotion evaluation system, we implement a computer-assisted speech training system for training hearing-impaired people to speak more naturally.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lawrie-etal-2020-building","url":"https:\/\/aclanthology.org\/2020.lrec-1.570","title":"Building OCR\/NER Test Collections","abstract":"Named entity recognition (NER) identifies spans of text that contain names. Many researchers have reported the results of NER on text created through optical character recognition (OCR) over the past two decades. Unfortunately, the test collections that support this research are annotated with named entities after optical character recognition (OCR) has been run. This means that the collection must be re-annotated if the OCR output changes. Instead, by tying annotations to character locations on the page, a collection can be built that supports OCR and NER research without requiring re-annotation when either improves. This means that named entities are annotated on the transcribed text. The transcribed text is all that is needed to evaluate the performance of OCR. For NER evaluation, the tagged OCR output is aligned to the transcription, and modified versions of each are created and scored. This paper presents a methodology for building such a test collection and releases a collection of Chinese OCR-NER data constructed using the methodology. The paper provides performance baselines for current OCR and NER systems applied to this new collection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"aramaki-etal-2013-word","url":"https:\/\/aclanthology.org\/I13-1110","title":"Word in a Dictionary is used by Numerous Users","abstract":"Dictionary editing requires enormous time to discuss whether a word should be listed in a dictionary or not. So as to define a dictionary word, this study employs the number of word users as a novel metric for selecting a dictionary word. In order to obtain word users, we used about 0.25 billion tweets from approximately 100,000 people published over five months. This study compared the classification performance of various measures. The results of the experiments revealed that a word in a dictionary is used by numerous users.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"linebarger-etal-1993-portable","url":"https:\/\/aclanthology.org\/H93-1006","title":"A Portable Approach to Last Resort Parsing and Interpretation","abstract":"This paper describes an approach to robust processing which is domain-independent in its design, yet which can easily take advantage of domain-specific information. Robust processing is well-integrated into standard processing in this approach, requiring essentially only a single new BNF rule in the grammar. 
We describe the results of implementing this approach in two different domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-cho-2006-question","url":"https:\/\/aclanthology.org\/W06-0703","title":"Question Pre-Processing in a QA System on Internet Discussion Groups","abstract":"This paper proposes methods to pre-process questions in the postings before a QA system can find answers in a discussion group on the Internet. Pre-processing includes garbage text removal and question segmentation. Garbage keywords are collected and different length thresholds are assigned to them for garbage text identification. Interrogative forms and question types are used to segment questions. The best performance on the test set achieves 92.57% accuracy in garbage text removal and 85.87% accuracy in question segmentation, respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johnson-etal-2020-spice","url":"https:\/\/aclanthology.org\/2020.lrec-1.503","title":"SpiCE: A New Open-Access Corpus of Conversational Bilingual Speech in Cantonese and English","abstract":"This paper describes the design, collection, orthographic transcription, and phonetic annotation of SpiCE, a new corpus of conversational Cantonese-English bilingual speech recorded in Vancouver, Canada. The corpus includes high-quality recordings of 34 early bilinguals in both English and Cantonese; to date, 27 have been recorded for a total of 19 hours of participant speech. Participants completed a sentence reading task, storyboard narration, and conversational interview in each language. Transcription and annotation for the corpus are currently underway. Transcripts produced with Google Cloud Speech-to-Text are available for all participants, and will be included in the initial SpiCE corpus release. Hand-corrected orthographic transcripts and force-aligned phonetic transcripts will be released periodically and, upon completion of all recordings, will comprise the second release of the corpus. As an open-access language resource, SpiCE will promote bilingualism research for a typologically distinct pair of languages, of which Cantonese remains understudied despite there being millions of speakers around the world. The SpiCE corpus is especially well-suited for phonetic research on conversational speech, and enables researchers to study cross-language within-speaker phenomena for a diverse group of early Cantonese-English bilinguals. These are areas with few existing high-quality resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The development of SpiCE was supported by the University of British Columbia Public Scholars Initiative and by SSHRC grant F16-04616 awarded to the second author. Rachel Soo assisted with the sentence list and provided feedback on the interview format. Various undergraduate members of the Speech-in-Context lab contributed by hand-correcting the orthographic transcriptions, including Ariana Hernandez, Christina Sen, Kristy Chan, Michelle To, Nat\u00e1lia Oliveira, Katherine Lee, and Rachel Wong. 
The collection and dissemination of SpiCE were approved by the University of British Columbia Behavioural Research Ethics Board.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pustejovsky-krishnaswamy-2018-every","url":"https:\/\/aclanthology.org\/W18-4301","title":"Every Object Tells a Story","abstract":"Most work within the computational event modeling community has tended to focus on the interpretation and ordering of events that are associated with verbs and event nominals in linguistic expressions. What is often overlooked in the construction of a global interpretation of a narrative is the role contributed by the objects participating in these structures, and the latent events and activities conventionally associated with them. Recently, the analysis of visual images has also enriched the scope of how events can be identified, by anchoring both linguistic expressions and ontological labels to segments, subregions, and properties of images. By semantically grounding event descriptions in their visualizations, the importance of object-based attributes becomes more apparent. In this position paper, we look at the narrative structure of objects: that is, how objects reference events through their intrinsic attributes, such as affordances, purposes, and functions. We argue that, not only do objects encode conventionalized events, but that when they are composed within specific habitats, the ensemble can be viewed as modeling coherent event sequences, thereby enriching the global interpretation of the evolving narrative being constructed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the reviewers for their helpful comments. We would also like to thank Tuan Do, Kyeongmin Rim, Marc Verhagen, and David McDonald for discussion on the topic. This work is supported by a contract with the US Defense Advanced Research Projects Agency (DARPA), Contract CwC-W911NF-15-C-0238. Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jindal-2016-personalized","url":"https:\/\/aclanthology.org\/P16-3022","title":"A Personalized Markov Clustering and Deep Learning Approach for Arabic Text Categorization","abstract":"Text categorization has become a key research field in the NLP community. However, most works in this area are focused on Western languages, ignoring Semitic languages like Arabic. These languages are of immense political and social importance, necessitating robust categorization techniques. In this paper, we present a novel three-stage technique to efficiently classify Arabic documents into different categories based on the words they contain. We leverage the significance of root-words in Arabic and incorporate a combination of Markov clustering and Deep Belief Networks to classify Arabic words into separate groups (clusters). 
Our approach is tested on two public datasets, giving an F-Measure of 91.02%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge the support and guidance of the Quality of Life Technology (QoLT) laboratory in the University of Texas at Dallas. We are thankful to Dr. Maziyar Baran Pouyan and Dr. Mehrdad Nourani for conceiving the original integration techniques for bioinformatics data using Gaussian Estimation and Fuzzy-c-means. We would further like to thank the University of Texas at Dallas and the ACL Don and Betty Walker Scholarship program. We are especially thankful to Dr. Christoph Teichmann for his insightful comments as the mentor through the ACL Student Mentorship Program.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"takmaz-2022-team","url":"https:\/\/aclanthology.org\/2022.cmcl-1.16","title":"Team DMG at CMCL 2022 Shared Task: Transformer Adapters for the Multi- and Cross-Lingual Prediction of Human Reading Behavior","abstract":"In this paper, we present the details of our approaches that attained the second place in the shared task of the ACL 2022 Cognitive Modeling and Computational Linguistics Workshop. The shared task is focused on multi- and cross-lingual prediction of eye movement features in human reading behavior, which could provide valuable information regarding language processing. To this end, we train 'adapters' inserted into the layers of frozen transformer-based pretrained language models. We find that multilingual models equipped with adapters perform well in predicting eye-tracking features. Our results suggest that utilizing language- and task-specific adapters is beneficial, and translating test sets into similar languages that exist in the training set could help with zero-shot transferability in the prediction of human reading behavior.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Raquel Fern\u00e1ndez, Sandro Pezzelle and Ahmet \u00dcst\u00fcn for their valuable feedback regarding the project and the paper. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 819455).","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hovy-2011-invited","url":"https:\/\/aclanthology.org\/W11-3701","title":"Invited Keynote: What are Subjectivity, Sentiment, and Affect?","abstract":"Pragmatics (the aspects of text that signal interpersonal and situational information, complementing semantics) has been almost totally ignored in Natural Language Processing. But in the past five to eight years there has been a surge of research on the general topic of 'opinion', also called 'sentiment'. Generally, research focuses on determining the author's opinion\/sentiment about some topic within a given fragment of text. Since opinions may differ, it is granted that the author's opinion is 'subjective', and the effectiveness of an opinion-determination system is measured by comparing against a gold-standard set of human annotations.\nBut what does 'subjectivity' actually mean? What are 'opinion' and 'sentiment'? Lately, researchers are also starting to talk about 'affect', and even 'emotion'. 
What are these notions, and how do they differ from one another?","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-ess-dykema-etal-2014-novel","url":"https:\/\/aclanthology.org\/2014.amta-users.15","title":"A novel use of MT in the development of a text level analytic for language learning","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"russo-etal-2020-control","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.33","title":"Control, Generate, Augment: A Scalable Framework for Multi-Attribute Text Generation","abstract":"We introduce CGA, a conditional VAE architecture, to control, generate, and augment text. CGA is able to generate natural English sentences controlling multiple semantic and syntactic attributes by combining adversarial learning with a context-aware loss and a cyclical word dropout routine. We demonstrate the value of the individual model components in an ablation study. The scalability of our approach is ensured through a single discriminator, independently of the number of attributes. We show high quality, diversity and attribute control in the generated sentences through a series of automatic and human assessments. As the main application of our work, we test the potential of this new NLG model in a data augmentation scenario. In a downstream NLP task, the sentences generated by our CGA model show significant improvements over a strong baseline, and a classification performance often comparable to adding the same amount of additional real data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"CZ and the DS3Lab gratefully acknowledge the support from the Swiss National Science Foundation (Project Number 200021 184628), Swiss Data Science Center, Alibaba, Cisco, eBay, Google Focused Research Awards, Oracle Labs, Swisscom, Zurich Insurance, Chinese Scholarship Council, and the Department of Computer Science at ETH Zurich.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sawhney-etal-2020-time","url":"https:\/\/aclanthology.org\/2020.emnlp-main.619","title":"A Time-Aware Transformer Based Model for Suicide Ideation Detection on Social Media","abstract":"Social media's ubiquity fosters a space for users to exhibit suicidal thoughts outside of traditional clinical settings. Understanding the build-up of such ideation is critical for the identification of at-risk users and suicide prevention. Suicide ideation is often linked to a history of mental depression. The emotional spectrum of a user's historical activity on social media can be indicative of their mental state over time. In this work, we focus on identifying suicidal intent in English tweets by augmenting linguistic models with historical context. We propose STATENet, a time-aware transformer-based model for preliminary screening of suicidal risk on social media. 
STATENet outperforms competitive methods, demonstrating the utility of emotional and temporal contextual cues for suicide risk assessment. We discuss the empirical, qualitative, practical, and ethical aspects of STATENet for suicide ideation detection.","label_nlp4sg":1,"task":["Suicide Ideation Detection"],"method":["Transformer"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank Alex Polozov, Kawin Ethayarajh, Sebastian Gehrmann, Siva Reddy, and the anonymous reviewers for their extremely helpful feedback and comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lu-etal-2021-parameter-efficient","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.325","title":"Parameter-Efficient Domain Knowledge Integration from Multiple Sources for Biomedical Pre-trained Language Models","abstract":"Domain-specific pre-trained language models (PLMs) have achieved great success over various downstream tasks in different domains. However, existing domain-specific PLMs mostly rely on self-supervised learning over large amounts of domain text, without explicitly integrating domain-specific knowledge, which can be essential in many domains. Moreover, in knowledge-sensitive areas such as the biomedical domain, knowledge is stored in multiple sources and formats, and existing biomedical PLMs either neglect them or utilize them in a limited manner. In this work, we introduce an architecture to integrate domain knowledge from diverse sources into PLMs in a parameter-efficient way. More specifically, we propose to encode domain knowledge via adapters, which are small bottleneck feed-forward networks inserted between intermediate transformer layers in PLMs. These knowledge adapters are pre-trained for individual domain knowledge sources and integrated via an attention-based knowledge controller to enrich PLMs. Taking the biomedical domain as a case study, we explore three knowledge-specific adapters for PLMs based on the UMLS Metathesaurus graph, the Wikipedia articles for diseases, and the semantic grouping information for biomedical concepts. 
Extensive experiments on different biomedical NLP tasks and datasets demonstrate the benefits of the proposed architecture and the knowledge-specific adapters across multiple PLMs.","label_nlp4sg":1,"task":["Domain Knowledge Integration"],"method":["adapters","language models"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112","year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"papp-1982-empirical","url":"https:\/\/aclanthology.org\/C82-1048","title":"Empirical Data and Automatic Analysis","abstract":"The purpose of the present paper is to show the usefulness of (1) the computer processing of the manifold data of lexicographic works; and (2) the normal and reverse alphabetized concordances compiled on the basis of different texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sasha-2008-lexical","url":"https:\/\/aclanthology.org\/W08-1909","title":"Lexical-Functional Correspondences and Their Use in the System of Machine Translation ETAP-3","abstract":"ETAP-3 is a system of machine translation consisting of various types of rules and dictionaries. Those dictionaries, being created especially for an NLP system, provide for every lexeme not only data about its characteristics as a separate item, but also different types of information about its syntactic and semantic links to other lexemes. The paper shows how the information about certain types of semantic links between lexemes represented in the dictionaries can be used in a machine translation system. The paper deals with correspondences between lexical-functional constructions of different types in the Russian and the English languages. A lexical-functional construction is a word-combination consisting of an argument of a lexical function and a value of this lexical function for this argument. The paper describes the cases when a lexical-functional construction in one of these languages corresponds to a lexical-functional construction in the other language, but the lexical functions represented by these two constructions are different. The paper lists different types of correspondences and gives the reasons for their existence. It also shows how the information about these correspondences can be used to improve the work of the linguistic component of the machine translation system ETAP-3.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gavalda-waibel-1998-growing","url":"https:\/\/aclanthology.org\/C98-1072","title":"Growing Semantic Grammars","abstract":"A critical path in the development of natural language understanding (NLU) modules lies in the difficulty of defining a mapping from words to semantics: usually it takes in the order of years of highly-skilled labor to develop a semantic mapping, e.g., in the form of a semantic grammar, that is comprehensive enough for a given domain. 
Yet, due to the very nature of human language, such mappings invariably fail to achieve full coverage on unseen data. Acknowledging the impossibility of stating a priori all the surface forms by which a concept can be expressed, we present GSG: an empathic computer system for the rapid deployment of NLU front-ends and their dynamic customization by non-expert end-users. Given a new domain for which an NLU front-end is to be developed, two stages are involved. In the authoring stage, GSG aids the developer in the construction of a simple domain model and a kernel analysis grammar. Then, in the run-time stage, GSG provides the end-user with an interactive environment in which the kernel grammar is dynamically extended. Three learning methods are employed in the acquisition of semantic mappings from unseen data: (i) parser predictions, (ii) hidden understanding model, and (iii) end-user paraphrases. A baseline version of GSG has been implemented and preliminary experiments show promising results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper was funded in part by a grant from ATR Interpreting Telecommunications Research Laboratories of Japan.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wiebe-1990-identifying","url":"https:\/\/aclanthology.org\/C90-2069","title":"Identifying Subjective Characters in Narrative","abstract":"Part of understanding fictional narrative text is determining for each sentence whether it takes some character's point of view and, if it does, identifying the character whose point of view is taken. This paper presents part of an algorithm for performing the latter. When faced with a sentence that takes a character's point of view, the reader has to decide whether that character is a previously mentioned character or one mentioned in the sentence. We give particular consideration to sentences about private states, such as seeing and wanting, for which both possibilities exist. Our algorithm is based on regularities in the ways that texts initiate, continue, and resume a character's point of view, found during extensive examinations of published novels and short stories.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgments. 
I wish to thank the members of the SUNY Buffalo Graduate Group in Cognitive Science and the SNePS Research Group for many discussions and ideas, and William Rapaport, Graeme Hirst, and Diane Horton for helpful comments on earlier drafts of this paper.","year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wilner-etal-2021-narrative","url":"https:\/\/aclanthology.org\/2021.emnlp-main.105","title":"Narrative Embedding: Re-Contextualization Through Attention","abstract":"Narrative analysis is becoming increasingly important for a number of linguistic tasks including summarization, knowledge extraction, and question answering. We present a novel approach for narrative event representation using attention to re-contextualize events across the whole story. Compared to previous analyses, we find an unexpected attachment of event semantics to predicate tokens within a popular transformer model. We test the utility of our approach on narrative completion prediction, achieving state of the art performance on Multiple Choice Narrative Cloze and scoring competitively on the Story Cloze Task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research and document have been helped immeasurably by input from Ben Franco, Justin Houghton, Jeremy Houghton, Stephanie Horbaczewski, and the rest of the Vody Team. The authors would like to pay special thanks to Josh Houghton, whose help in the ideation and production of this research and document was deeply appreciated. We would also like to extend our deepest thanks to all the reviewers whose insightful comments helped to substantially improve this paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saint-dizier-2014-processing","url":"https:\/\/aclanthology.org\/C14-2006","title":"Processing Discourse in Dislog on the TextCoop Platform","abstract":"This demo presents the TextCoop platform and the Dislog language, based on logic programming, which have primarily been designed for discourse processing. The linguistic architecture and the basics of discourse analysis in TextCoop are introduced. Application demos include: argument mining in opinion texts, dialog analysis, and procedural and requirement texts analysis. Via prototypes in the industry, this framework has now reached the TRL5 level.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cakici-2012-morpheme","url":"https:\/\/aclanthology.org\/W12-3620","title":"Morpheme Segmentation in the METU-Sabanc\\i Turkish Treebank","abstract":"Morphological segmentation data for the METU-Sabanc\u0131 Turkish Treebank is provided in this paper. The generalized lexical forms of the morphemes which the treebank previously lacked are added to the treebank. 
This data may be used to train POS-taggers that use stemmer outputs to map these lexical forms to morphological tags.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"waszczuk-etal-2017-multiword","url":"https:\/\/aclanthology.org\/W17-6209","title":"Multiword Expression-Aware A* TAG Parsing Revisited","abstract":"A* algorithms enable efficient parsing within the context of large grammars and\/or complex syntactic formalisms. Besides, it has been shown that promoting multiword expressions (MWEs) is a beneficial strategy in dealing with syntactic ambiguity. The state-of-the-art A* heuristic for promoting MWEs in tree-adjoining grammar (TAG) parsing has certain drawbacks: it is not monotonic and it composes poorly with grammar compression techniques. In this work, we propose an enhanced version of this heuristic, which copes with these shortcomings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the French Ministry of Higher Education and Research via a doctoral grant, by the French Centre-Val de Loire Region Council via the APR-AI 2015-1850 ODIL project, by the French National Research Agency (ANR) via the PARSEME-FR project (ANR-14-CERA-0001), and by the European Framework Programme Horizon 2020 via the PARSEME European COST Action (IC1207). We are grateful to the anonymous reviewers for their insightful comments to the first version of this paper.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ouchi-etal-2014-improving","url":"https:\/\/aclanthology.org\/E14-4030","title":"Improving Dependency Parsers with Supertags","abstract":"Transition-based dependency parsing systems can utilize rich feature representations. However, in practice, features are generally limited to combinations of lexical tokens and part-of-speech tags. In this paper, we investigate richer features based on supertags, which represent lexical templates extracted from a dependency structure annotated corpus. First, we develop two types of supertags that encode information about head position and dependency relations in different levels of granularity. Then, we propose a transition-based dependency parser that incorporates the predictions from a CRF-based supertagger as new features. On the standard English Penn Treebank corpus, we show that our supertag features achieve parsing improvements of 1.3% in unlabeled attachment, 2.07% in root attachment, and 3.94% in complete tree accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"basile-etal-2012-ugroningen","url":"https:\/\/aclanthology.org\/S12-1040","title":"UGroningen: Negation detection with Discourse Representation Structures","abstract":"We use the NLP toolchain that is used to construct the Groningen Meaning Bank to address the task of detecting negation cue and scope, as defined in the shared task \"Resolving the Scope and Focus of Negation\". 
This toolchain applies the C&C tools for parsing, using the formalism of Combinatory Categorial Grammar, and applies Boxer to produce semantic representations in the form of Discourse Representation Structures (DRSs). For negation cue detection, the DRSs are converted to flat, non-recursive structures, called Discourse Representation Graphs (DRGs). DRGs simplify cue detection by means of edge labels representing relations. Scope detection is done by gathering the tokens that occur within the scope of a negated DRS. The result is a system that is fairly reliable for cue detection and scope detection. Furthermore, it provides a fairly robust algorithm for detecting the negated event or property within the scope.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-etal-2003-language","url":"https:\/\/aclanthology.org\/P03-1051","title":"Language Model Based Arabic Word Segmentation","abstract":"We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm to build the Arabic word segmenter from a large unsegmented Arabic corpus. The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. The language model is initially estimated from a small manually segmented corpus of about 110,000 words. To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens. We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest. Arabic is presented in both native and Buckwalter transliterated Arabic whenever possible. All native Arabic is to be read from right-to-left, and transliterated Arabic is to be read from left-to-right.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position of policy of the Government and no official endorsement should be inferred. We would like to thank Martin Franz for discussions on language model building, and his help with the use of ViaVoice language model toolkit.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"claveau-kijak-2016-direct","url":"https:\/\/aclanthology.org\/C16-1173","title":"Direct vs. 
indirect evaluation of distributional thesauri","abstract":"With the success of word embedding methods in various Natural Language Processing tasks, all the fields of distributional semantics have experienced a renewed interest. Besides the famous word2vec, recent studies have presented efficient techniques to build distributional thesauri; in particular, Claveau et al. (2014) have already shown that Information Retrieval (IR) tools and concepts can be successfully used to build a thesaurus. In this paper, we address the problem of the evaluation of such thesauri or embedding models. Several evaluation scenarii are considered: direct evaluation through reference lexicons and specially crafted datasets, and indirect evaluation through third-party tasks, namely lexical substitution and Information Retrieval. For this latter task, we adopt the query expansion framework proposed by Claveau and Kijak (2016). Through several experiments, we first show that the recent techniques for building distributional thesauri outperform the word2vec approach, whatever the evaluation scenario. We also highlight the differences between the evaluation scenarii, which may lead to very different conclusions when comparing distributional models. Last, we study the effect of some parameters of the distributional models on these various evaluation scenarii.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partly funded by French government supports granted to the FUI project NexGenTV and to the CominLabs excellence laboratory and managed by the National Research Agency in the \"Investing for the Future\" program under reference ANR-10-LABX-07-01.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"collier-etal-1998-machine-translation","url":"https:\/\/aclanthology.org\/P98-1041","title":"Machine Translation vs. Dictionary Term Translation - a Comparison for English-Japanese News Article Alignment","abstract":"Bilingual news article alignment methods based on multilingual information retrieval have been shown to be successful for the automatic production of so-called noisy-parallel corpora. In this paper we compare the use of machine translation (MT) to the commonly used dictionary term lookup (DTL) method for Reuter news article alignment in English and Japanese. The results show the trade-off between improved lexical disambiguation provided by machine translation and extended synonym choice provided by dictionary term lookup and indicate that MT is superior to DTL only at medium and low recall levels. At high recall levels DTL has superior precision.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the kind permission of Reuters for the use of their newswire articles in our research. We especially thank Miwako Shimazu for evaluating the judgement set used in our simulations.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ha-etal-2003-extension","url":"https:\/\/aclanthology.org\/O03-4004","title":"Extension of Zipf's Law to Word and Character N-grams for English and Chinese","abstract":"It is shown that for a large corpus, Zipf's law for both words in English and characters in Chinese does not hold for all ranks. 
The frequency falls below the frequency predicted by Zipf's law for English words for rank greater than about 5,000 and for Chinese characters for rank greater than about 1,000. However, when single words or characters are combined together with n-gram words or characters in one list and put in order of frequency, the frequency of tokens in the combined list follows Zipf's law approximately with the slope close to -1 on a log-log plot for all n-grams, down to the lowest frequencies in both languages. This behaviour is also found for English 2-byte and 3-byte word fragments. It only happens when all n-grams are used, including semantically incomplete n-grams. Previous theories do not predict this behaviour, possibly because conditional probabilities of tokens have not been properly represented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to express their appreciation to reviewers of this paper whose comments and suggestions made a great improvement to the paper and to Dr Xiaoyu Qiao for her contribution of testing and standardising the Chinese morphology.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cochran-2009-darwinised","url":"https:\/\/aclanthology.org\/W09-0906","title":"Darwinised Data-Oriented Parsing - Statistical NLP with Added Sex and Death","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"laubli-etal-2019-post","url":"https:\/\/aclanthology.org\/W19-6626","title":"Post-editing Productivity with Neural Machine Translation: An Empirical Assessment of Speed and Quality in the Banking and Finance Domain","abstract":"Neural machine translation (NMT) has set new quality standards in automatic translation, yet its effect on post-editing productivity is still pending thorough investigation. We empirically test how the inclusion of NMT, in addition to domain-specific translation memories and termbases, impacts speed and quality in professional translation of financial texts. We find that even with language pairs that have received little attention in research settings and small amounts of in-domain data for system adaptation, NMT post-editing allows for substantial time savings and leads to equal or slightly better quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brants-1995-tagset","url":"https:\/\/aclanthology.org\/P95-1039","title":"Tagset Reduction Without Information Loss","abstract":"A technique for reducing a tagset used for n-gram part-of-speech disambiguation is introduced and evaluated in an experiment. The technique ensures that all information that is provided by the original tagset can be restored from the reduced one. This is crucial, since we are interested in the linguistically motivated tags for part-of-speech disambiguation. The reduced tagset needs fewer parameters for its statistical model and allows more accurate parameter estimation.
Additionally, there is a slight but not significant improvement of tagging accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-chiang-2012-exploration","url":"https:\/\/aclanthology.org\/P12-2062","title":"An Exploration of Forest-to-String Translation: Does Translation Help or Hurt Parsing?","abstract":"Syntax-based translation models that operate on the output of a source-language parser have been shown to perform better if allowed to choose from a set of possible parses. In this paper, we investigate whether this is because it allows the translation stage to overcome parser errors or to override the syntactic structure itself. We find that it is primarily the latter, but that under the right conditions, the translation stage does correct parser errors, improving parsing accuracy on the Chinese Treebank.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous reviewers for their helpful comments. This research was supported in part by DARPA under contract DOI-NBC D11AP00244.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2021-miss","url":"https:\/\/aclanthology.org\/2021.emnlp-demo.1","title":"MiSS: An Assistant for Multi-Style Simultaneous Translation","abstract":"In this paper, we present MISS, an assistant for multi-style simultaneous translation. Our proposed translation system has five key features: highly accurate translation, simultaneous translation, translation for multiple text styles, back-translation for translation quality evaluation, and grammatical error correction. With this system, we aim to provide a complete translation experience for machine translation users. Our design goals are high translation accuracy, real-time translation, flexibility, and measurable translation quality. Compared with the free commercial translation systems commonly used, our translation assistance system regards the machine translation application as a more complete and fully-featured tool for users. By incorporating additional features and giving the user better control over their experience, we improve translation efficiency and performance. Additionally, our assistant system combines machine translation, grammatical error correction, and interactive edits, and uses a crowdsourcing mode to collect more data for further training to improve both the machine translation and grammatical error correction models. A short video demonstrating our system is available at https:\/\/www.youtube.com\/watch?v=ZGCo7KtRKd8.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mille-etal-2020-case","url":"https:\/\/aclanthology.org\/2020.webnlg-1.1","title":"A Case Study of NLG from Multimedia Data Sources: Generating Architectural Landmark Descriptions","abstract":"In this paper, we present a pipeline system that generates architectural landmark descriptions using textual, visual and structured data.
The pipeline comprises five main components: (i) a textual analysis component, which extracts information from Wikipedia pages; (ii) a visual analysis component, which extracts information from copyright-free images; (iii) a retrieval component, which gathers relevant property, subject, object triples from DBpedia; (iv) a fusion component, which stores the contents from the different modalities in a Knowledge Base (KB) and resolves the conflicts that stem from using different sources of information; (v) an NLG component, which verbalises the resulting contents of the KB. We show that thanks to the addition of other modalities, we can make the verbalisation of DBpedia triples more relevant and\/or inspirational.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the European Commission in the context of its H2020 Program under the grant numbers 870930-RIA, 779962-RIA, 825079-RIA, 786731-RIA at Universitat Pompeu Fabra and Information Technologies Institute - CERTH.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blloshmi-etal-2020-xl","url":"https:\/\/aclanthology.org\/2020.emnlp-main.195","title":"XL-AMR: Enabling Cross-Lingual AMR Parsing with Transfer Learning Techniques","abstract":"Abstract Meaning Representation (AMR) is a popular formalism of natural language that represents the meaning of a sentence as a semantic graph. It is agnostic about how to derive meanings from strings and for this reason it lends itself well to the encoding of semantics across languages. However, cross-lingual AMR parsing is a hard task, because training data are scarce in languages other than English and the existing English AMR parsers are not directly suited to being used in a cross-lingual setting. In this work we tackle these two problems so as to enable cross-lingual AMR parsing: we explore different transfer learning techniques for producing automatic AMR annotations across languages and develop a cross-lingual AMR parser, XL-AMR. This can be trained on the produced data and does not rely on AMR aligners or source-copy mechanisms as is commonly the case in English AMR parsing. The results of XL-AMR significantly surpass those previously reported in Chinese, German, Italian and Spanish. Finally we provide a qualitative analysis which sheds light on the suitability of AMR across languages. We release XL-AMR at github.com\/SapienzaNLP\/xlamr.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the ELEXIS project No.
731015 under the European Union's Horizon 2020 research and innovation programme. This work was partially supported by the MIUR under the grant \"Dipartimenti di eccellenza 2018-2022\" of the Department of Computer Science of Sapienza University. The authors would like to thank Luigi Procopio for the valuable discussions during this work.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"runge-hovy-2020-exploring","url":"https:\/\/aclanthology.org\/2020.blackboxnlp-1.20","title":"Exploring Neural Entity Representations for Semantic Information","abstract":"Neural methods for embedding entities are typically extrinsically evaluated on downstream tasks and, more recently, intrinsically using probing tasks. Downstream task-based comparisons are often difficult to interpret due to differences in task structure, while probing task evaluations often look at only a few attributes and models. We address both of these issues by evaluating a diverse set of eight neural entity embedding methods on a set of simple probing tasks, demonstrating which methods are able to remember words used to describe entities, learn type, relationship and factual information, and identify how frequently an entity is mentioned. We also compare these methods in a unified framework on two entity linking tasks and discuss how they generalize to different model architectures and datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mylonakis-simaan-2010-learning","url":"https:\/\/aclanthology.org\/W10-2915","title":"Learning Probabilistic Synchronous CFGs for Phrase-Based Translation","abstract":"Probabilistic phrase-based synchronous grammars are now considered promising devices for statistical machine translation because they can express reordering phenomena between pairs of languages. Learning these hierarchical, probabilistic devices from parallel corpora constitutes a major challenge, because of multiple latent model variables as well as the risk of data overfitting. This paper presents an effective method for learning a family of particular interest to MT, binary Synchronous Context-Free Grammars with inverted\/monotone orientation (a.k.a. Binary ITG). A second contribution concerns devising a lexicalized phrase reordering mechanism that has complementary strengths to Chiang's model. The latter conditions reordering decisions on the surrounding lexical context of phrases, whereas our mechanism works with the lexical content of phrase pairs (akin to standard phrase-based systems).
Surprisingly, our experiments on French-English data show that our learning method applied to far simpler models exhibits performance indistinguishable from the Hiero system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chaudhry-etal-2013-divergences","url":"https:\/\/aclanthology.org\/W13-3705","title":"Divergences in English-Hindi Parallel Dependency Treebanks","abstract":"We present, here, our analysis of systematic divergences in parallel English-Hindi dependency treebanks based on the Computational Paninian Grammar (CPG) framework. Study of structural divergences in parallel treebanks not only helps in developing larger treebanks automatically, but can also be useful for many NLP applications such as data-driven machine translation (MT) systems. Given that the two treebanks are based on the same grammatical model, a study of divergences in them could be of advantage to such tasks, along with making it more interesting to study how and where they diverge. We consider two parallel trees divergent based on differences in constructions, relations marked, frequency of annotation labels and tree depth. Some interesting instances of structural divergences in the treebanks have been discussed in the course of this paper. We also present our task of alignment of the two treebanks, wherein we talk about our extraction of divergent structures in the trees, and discuss the results of this exercise.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the provision of the useful resource by way of the Hindi Treebank developed under HUTB, of which the Hindi treebank used for our research purpose is a part, and the work for which is supported by the NSF grant (Award Number: CNS 0751202; CFDA Number: 47.070). Also, any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"antognini-faltings-2021-rationalization","url":"https:\/\/aclanthology.org\/2021.findings-acl.68","title":"Rationalization through Concepts","abstract":"Automated predictions require explanations to be interpretable by humans. One type of explanation is a rationale, i.e., a selection of input features such as relevant text snippets from which the model computes the outcome. However, a single overall selection does not provide a complete explanation, e.g., weighing several aspects for decisions. To this end, we present a novel self-interpretable model called ConRAT. Inspired by how human explanations for high-level decisions are often based on key concepts, ConRAT extracts a set of text snippets as concepts and infers which ones are described in the document. Then, it explains the outcome with a linear aggregation of concepts. Two regularizers drive ConRAT to build interpretable concepts. In addition, we propose two techniques to boost the rationale and predictive performance further. 
Experiments on both single- and multi-aspect sentiment classification tasks show that ConRAT is the first to generate concepts that align with human rationalization while using only the overall label. Further, it outperforms state-of-the-art methods trained on each aspect label independently.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cappelli-lenci-2020-pisa","url":"https:\/\/aclanthology.org\/2020.starsem-1.14","title":"PISA: A measure of Preference In Selection of Arguments to model verb argument recoverability","abstract":"Our paper offers a computational model of the semantic recoverability of verb arguments, tested in particular on direct objects and Instruments. Our fully distributional model is intended to improve on older taxonomy-based models, which require a lexicon in addition to the training corpus. We computed the selectional preferences of 99 transitive verbs and 173 Instrument verbs as the mean value of the pairwise cosine similarity between their arguments (a weighted mean between all the arguments, or an unweighted mean with the topmost k arguments). Results show that our model can predict the recoverability of objects and Instruments, providing a similar result to that of taxonomy-based models but at a much cheaper computational cost.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Ludovica Pannitto for helping us with the computational implementation of our model, Najoung Kim for her contribution to the shaping of the ideas hereby presented, and the anonymous reviewers for their comments and suggestions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2019-topic","url":"https:\/\/aclanthology.org\/N19-1015","title":"Topic-Guided Variational Auto-Encoder for Text Generation","abstract":"We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module. Each mixture component corresponds to a latent topic, which provides guidance to generate sentences under the topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference.
Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, which can generate semantically-meaningful sentences with various topics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"malik-etal-2010-transliterating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/194_Paper.pdf","title":"Transliterating Urdu for a Broad-Coverage Urdu\/Hindi LFG Grammar","abstract":"In this paper, we present a system for transliterating the Arabic-based script of Urdu to a Roman transliteration scheme. The system is integrated into a larger system consisting of a morphology module, implemented via finite state technologies, and a computational LFG grammar of Urdu that was developed with the grammar development platform XLE (Crouch et al. 2008). Our long-term goal is to handle Hindi alongside Urdu; the two languages are very similar with respect to syntax and lexicon and hence, one grammar can be used to cover both languages. However, they are not similar concerning the script: Hindi is written in Devanagari, while Urdu uses an Arabic-based script. By abstracting away to a common Roman transliteration scheme in the respective transliterators, our system can be enabled to handle both languages in parallel. In this paper, we discuss the pipeline architecture of the Urdu-Roman transliterator, mention several linguistic and orthographic issues and present the integration of the transliterator into the LFG parsing system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shaw-gafos-2010-quantitative","url":"https:\/\/aclanthology.org\/W10-2207","title":"Quantitative Evaluation of Competing Syllable Parses","abstract":"This paper develops computational tools for evaluating competing syllabic parses of a phonological string on the basis of temporal patterns in speech production data. This is done by constructing models linking syllable parses to patterns of coordination between articulatory events. Data simulated from different syllabic parses are evaluated against experimental data from American English and Moroccan Arabic, two languages claimed to parse similar strings of segments into different syllabic structures. Results implicate a tautosyllabic parse of initial consonant clusters in English and a heterosyllabic parse of initial clusters in Arabic, in accordance with theoretical work on the syllable structure of these languages. It is further demonstrated that the model can correctly diagnose syllable structure even when previously proposed phonetic heuristics for such structure do not clearly point to the correct diagnosis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge support from NSF grant 0922437. This paper was improved by the comments and suggestions of three anonymous reviewers.
Remaining errors are solely the responsibility of the authors.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"orasan-etal-2004-comparison","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/362.pdf","title":"A Comparison of Summarisation Methods Based on Term Specificity Estimation","abstract":"In automatic summarisation, knowledge poor methods do not necessarily perform worse than those which employ several knowledge sources to produce a summary. This paper presents a comprehensive comparison of several summarisation methods based on term specificity estimation in order to find out which one performs best. Parameters such as quality of the summary produced and the resources required to produce accurate results are considered in order to find out which of these methods is more appropriate for a real world application. Intrinsic and extrinsic evaluation indicates that TF*RIDF, a variant of the commonly used TF*IDF, is the best performing method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nagata-1994-stochastic","url":"https:\/\/aclanthology.org\/C94-1032","title":"A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm","abstract":"We present a novel method for segmenting the input sentence into words and assigning parts of speech to the words. It consists of a statistical language model and an efficient two-pass N-best search algorithm. The algorithm does not require delimiters between words. Thus it is suitable for written Japanese. The proposed Japanese morphological analyzer achieved 95.1% recall and 94.6% precision for open text when it was trained and tested on the ATR Corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2018-comparing","url":"https:\/\/aclanthology.org\/K18-1029","title":"Comparing Models of Associative Meaning: An Empirical Investigation of Reference in Simple Language Games","abstract":"Simple reference games (Wittgenstein, 1953) are of central theoretical and empirical importance in the study of situated language use. Although language provides rich, compositional truth-conditional semantics to facilitate reference, speakers and listeners may sometimes lack the overall lexical and cognitive resources to guarantee successful reference through these means alone. However, language also has rich associational structures that can serve as a further resource for achieving successful reference. Here we investigate this use of associational information in a setting where only associational information is available: a simplified version of the popular game Codenames. Using optimal experiment design techniques, we compare a range of models varying in the type of associative information deployed and in level of pragmatic sophistication against human behavior.
In this setting we find that listeners' behavior reflects direct bigram collocational associations more strongly than word-embedding or semantic knowledge graph-based associations and that there is little evidence for pragmatically sophisticated behavior by either speakers or listeners of the type that might be predicted by recursive-reasoning models such as the Rational Speech Acts theory. These results shed light on the nature of the lexical resources that speakers and listeners can bring to bear in achieving reference through associative meaning alone.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by NSF grants BCS-1456081 and BCS-1551866 to RPL. We'd like to thank Iyad Rahwan and the Scalable Cooperation group for their valuable input and support.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ahrenberg-2010-alignment","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/193_Paper.pdf","title":"Alignment-based Profiling of Europarl Data in an English-Swedish Parallel Corpus","abstract":"This paper profiles the Europarl part of an English-Swedish parallel corpus and compares it with three other subcorpora of the same parallel corpus. We first describe our method for comparison which is based on alignments, both at the token level and the structural level. Although two of the other subcorpora contain fiction, it is found that the Europarl part is the one having the highest proportion of many types of restructurings, including additions, deletions and long distance reorderings. We explain this by the fact that the majority of Europarl segments are parallel translations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nayak-etal-2017-v","url":"https:\/\/aclanthology.org\/P17-3005","title":"V for Vocab: An Intelligent Flashcard Application","abstract":"Students choose to use flashcard applications available on the Internet to help memorize word-meaning pairs. This is helpful for tests such as GRE, TOEFL or IELTS, which emphasize verbal skills. However, the monotonous nature of flashcard applications can be diminished with the help of Cognitive Science through Testing Effect. Experimental evidences have shown that memory tests are an important tool for long term retention (Roediger and Karpicke, 2006). Based on these evidences, we developed a novel flashcard application called \"V for Vocab\" that implements short answer based tests for learning new words. Furthermore, we aid this by implementing our short answer grading algorithm which automatically scores the user's answer. The algorithm makes use of an alternate thesaurus instead of traditional Wordnet and delivers state-of-the-art performance on popular word similarity datasets. We also look to lay the foundation for analysis based on implicit data collected from our application.","label_nlp4sg":1,"task":["learning new words"],"method":["Flashcard Application","grading algorithm"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments. We also thank Dr.
Vijaya Kumar B P, Professor at M S Ramaiah Institute of Technology, Bangalore for his valuable suggestions. This research was supported by Department of Electronic Systems Engineering (formerly CEDT), Indian Institute of Science.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mcmahon-smith-1996-improving","url":"https:\/\/aclanthology.org\/J96-2003","title":"Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies","abstract":"An automatic word-classification system has been designed that uses word unigram and bigram frequency statistics to implement a binary top-down form of word clustering and employs an average class mutual information metric. Words are represented as structural tags: n-bit numbers, the most significant bit-patterns of which incorporate class information. The classification system has revealed some of the lexical structure of English, as well as some phonemic and semantic structure. The system has been compared, directly and indirectly, with other recent word-classification systems. We see our classification as a means towards the end of constructing multilevel class-based interpolated language models. We have built some of these models and carried out experiments that show a 7% drop in test set perplexity compared to a standard interpolated trigram language model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Both authors thank the Oxford Text Archive and British Telecom for use of their corpora. The first author wishes to thank British Telecom and the Department of Education for Northern Ireland for their support.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2009-variational","url":"https:\/\/aclanthology.org\/P09-1067","title":"Variational Decoding for Statistical Machine Translation","abstract":"Statistical models in machine translation exhibit spurious ambiguity. That is, the probability of an output string is split among many distinct derivations (e.g., trees or segmentations). In principle, the goodness of a string is measured by the total probability of its many derivations. However, finding the best string (e.g., during decoding) is then computationally intractable. Therefore, most systems use a simple Viterbi approximation that measures the goodness of a string using only its most probable derivation. Instead, we develop a variational approximation, which considers all the derivations but still allows tractable decoding. Our particular variational distributions are parameterized as n-gram models. We also analytically show that interpolating these n-gram models for different n is similar to minimum-risk decoding for BLEU (Tromble et al., 2008).
Experiments show that our approach improves the state of the art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ramesh-anand-2020-outcomes","url":"https:\/\/aclanthology.org\/2020.winlp-1.39","title":"Outcomes of coming out: Analyzing stories of LGBTQ+","abstract":null,"label_nlp4sg":1,"task":[],"method":[],"goal1":"Gender Equality","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"almarwani-diab-2017-arabic","url":"https:\/\/aclanthology.org\/W17-1322","title":"Arabic Textual Entailment with Word Embeddings","abstract":"Determining the textual entailment between texts is important in many NLP tasks, such as summarization, question answering, and information extraction and retrieval. Various methods have been suggested based on external knowledge sources; however, such resources are not always available in all languages and their acquisition is typically laborious and very costly. Distributional word representations such as word embeddings learned over large corpora have been shown to capture syntactic and semantic word relationships. Such models have contributed to improving the performance of several NLP tasks. In this paper, we address the problem of textual entailment in Arabic. We employ both traditional features and distributional representations. Crucially, we do not depend on any external resources in the process. Our suggested approach yields state of the art performance on a standard data set, ArbTE, achieving an accuracy of 76.2% compared to current state of the art of 69.3%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hsieh-etal-2013-predicting","url":"https:\/\/aclanthology.org\/W13-4201","title":"Predicting TV Audience Rating with Social Media","abstract":"In Taiwan, there are different types of TV programs, and each program usually has its broadcast length and frequency. We accumulate the broadcasted TV programs' word-of-mouth on Facebook and apply the Backpropagation Network to predict the latest program audience rating. TV audience rating is an important indicator regarding the popularity of programs and it is also a factor to influence the revenue of broadcast stations via advertisements. Currently, the present media environments are drastically changing our media consumption patterns. We can watch TV programs on YouTube regardless of location and timing. In this paper, we develop a model for predicting TV audience rating.
We also present the audience rating trend analysis on a demo system, which is used to describe the relation between the predicted audience rating and the Nielsen TV rating.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This study is conducted under the \"Social Intelligence Analysis Service Platform\" project of the Institute for Information Industry which is subsidized by the Ministry of Economy Affairs of the Republic of China.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2020-graph","url":"https:\/\/aclanthology.org\/2020.coling-main.5","title":"A Graph Representation of Semi-structured Data for Web Question Answering","abstract":"The abundant semi-structured data on the Web, such as HTML-based tables and lists, provide commercial search engines a rich information source for question answering (QA). Different from plain text passages in Web documents, Web tables and lists have inherent structures, which carry semantic correlations among various elements in tables and lists. Many existing studies treat tables and lists as flat documents with pieces of text and do not make good use of semantic information hidden in structures. In this paper, we propose a novel graph representation of Web tables and lists based on a systematic categorization of the components in semi-structured data as well as their relations. We also develop pre-training and reasoning techniques on the graph model for the QA task. Extensive experiments on several real datasets collected from a commercial engine verify the effectiveness of our approach. Our method improves F1 score by 3.90 points over the state-of-the-art baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2013-examination","url":"https:\/\/aclanthology.org\/N13-1082","title":"An Examination of Regret in Bullying Tweets","abstract":"Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.","label_nlp4sg":1,"task":["Examination of Regret"],"method":["corpus of bullying tweets","exploratory analysis"],"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"We thank Kwang-Sung Jun, Angie Calvin, and Charles Dyer for helpful discussions.
This work is supported by National Science Foundation grants IIS-1216758 and IIS-1148012.","year":2013,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"bhat-etal-2020-word","url":"https:\/\/aclanthology.org\/2020.repl4nlp-1.4","title":"Word Embeddings as Tuples of Feature Probabilities","abstract":"In this paper, we provide an alternate perspective on word representations, by reinterpreting the dimensions of the vector space of a word embedding as a collection of features. In this reinterpretation, every component of the word vector is normalized against all the word vectors in the vocabulary. This idea now allows us to view each vector as an n-tuple (akin to a fuzzy set), where n is the dimensionality of the word representation and each element represents the probability of the word possessing a feature. Indeed, this representation enables the use of fuzzy set theoretic operations, such as union, intersection and difference. Unlike previous attempts, we show that this representation of words provides a notion of similarity which is inherently asymmetric and hence closer to human similarity judgements. We compare the performance of this representation with various benchmarks, and explore some of the unique properties including function word detection, detection of polysemous words, and some insight into the interpretability provided by set theoretic operations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their time and comments which have helped make this paper and its contribution better.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chu-etal-2020-entyfi","url":"https:\/\/aclanthology.org\/2020.emnlp-demos.14","title":"ENTYFI: A System for Fine-grained Entity Typing in Fictional Texts","abstract":"Fiction and fantasy are archetypes of long-tail domains that lack suitable NLP methodologies and tools. We present ENTYFI, a web-based system for fine-grained typing of entity mentions in fictional texts. It builds on 205 automatically induced high-quality type systems for popular fictional domains, and provides recommendations towards reference type systems for given input texts. Users can exploit the richness and diversity of these reference type systems for fine-grained supervised typing; in addition, they can choose among and combine four other typing modules: pre-trained real-world models, unsupervised dependency-based typing, knowledge base lookups, and constraint-based candidate consolidation. The demonstrator is available at https:\/\/d5demos.mpi-inf.mpg.de\/entyfi.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zupon-etal-2020-analysis","url":"https:\/\/aclanthology.org\/2020.insights-1.10","title":"An Analysis of Capsule Networks for Part of Speech Tagging in High- and Low-resource Scenarios","abstract":"Neural networks are a common tool in NLP, but it is not always clear which architecture to use for a given task. Different tasks, different languages, and different training conditions can all affect how a neural network will perform. Capsule Networks (CapsNets) are a relatively new architecture in NLP. Due to their novelty, CapsNets are being used more and more in NLP tasks. However, their usefulness is still mostly untested. In this paper, we compare three neural network architectures - LSTM, CNN, and CapsNet - on a part of speech tagging task. We compare these architectures in both high- and low-resource training conditions and find that no architecture consistently performs the best. Our analysis shows that our CapsNet performs nearly as well as a more complex LSTM under certain training conditions, but not others, and that our CapsNet almost always outperforms our CNN. We also find that our CapsNet implementation shows faster prediction times than the LSTM for Scottish Gaelic but not for Spanish, highlighting the effect that the choice of languages can have on the models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2019-make","url":"https:\/\/aclanthology.org\/P19-1393","title":"Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation","abstract":"Introducing common sense to natural language understanding systems has received increasing research attention. It remains a fundamental question on how to evaluate whether a system has a sense making capability. Existing benchmarks measure commonsense knowledge indirectly and without explanation. In this paper, we release a benchmark to directly test whether a system can differentiate natural language statements that make sense from those that do not make sense. In addition, a system is asked to identify the most crucial reason why a statement does not make sense. We evaluate models trained over large-scale language modeling tasks as well as human performance, showing that there are different challenges for system sense making.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for constructive suggestions, and the non-author data annotators Run'ge Yan, Chenyan Du, Zinqun Zhou and Qikui Feng. The work is supported by NSFC grant number 61572245.
Yue Zhang is the corresponding author.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cui-etal-2010-joint","url":"https:\/\/aclanthology.org\/P10-2002","title":"A Joint Rule Selection Model for Hierarchical Phrase-Based Translation","abstract":"In hierarchical phrase-based SMT systems, statistical models are integrated to guide the hierarchical rule selection for better translation performance. Previous work mainly focused on the selection of either the source side of a hierarchical rule or the target side of a hierarchical rule rather than considering both of them simultaneously. This paper presents a joint model to predict the selection of hierarchical rules. The proposed model is estimated based on four sub-models where the rich context knowledge from both source and target sides is leveraged. Our method can be easily incorporated into the practical SMT systems with the log-linear model framework. The experimental results show that our method can yield significant improvements in performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are especially grateful to the anonymous reviewers for their insightful comments. We also thank Hendra Setiawan, Yuval Marton, Chi-Ho Li, Shujie Liu and Nan Duan for helpful discussions.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tseng-2001-highlighting","url":"https:\/\/aclanthology.org\/Y01-1015","title":"Highlighting Utterances in Chinese Spoken Discourse","abstract":"This paper presents results of an empirical analysis on the structuring of spoken discourse focusing upon how some particular utterance components in Chinese spoken dialogues are highlighted. A restricted number of words frequently and regularly found within utterances structure the dialogues by marking certain significant locations. Furthermore, a variety of signals of monitoring and repairing in conversation are also analysed and discussed. This includes discourse particles, speech disfluency as well as their prosodic representation. In this paper, they are considered a kind of \"highlighting-means\" in spoken language, because their function is to strengthen the structuring of discourse as well as to emphasise important functions and positions within utterances in order to support the coordination and the communication between interlocutors in conversation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The study presented in this paper is in part supported by National Science Council (NSC 89-2411-H-001-098). I'd like to thank the Industrial Research Technology Institute (IRTI) for generously providing the TWPTH corpus and Hui-Hsin Tseng for her work on annotating the speech data.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bohnet-wanner-2010-open","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/585_Paper.pdf","title":"Open Soucre Graph Transducer Interpreter and Grammar Development Environment","abstract":"Graph and tree transducers have been applied in many NLP areas, among them machine translation, summarization, parsing, and text generation.
In particular, the successful use of tree rewriting transducers for the introduction of syntactic structures in statistical machine translation contributed to their popularity. However, the potential of such transducers is limited because they do not handle graphs and because they \"consume\" the source structure in that they rewrite it instead of leaving it intact for intermediate consultations. In this paper, we describe an open source tree and graph transducer interpreter, which combines the advantages of graph transducers and two-tape Finite State Transducers and surpasses the limitations of state-of-the-art tree rewriting transducers. Along with the transducer, we present a graph grammar development environment that supports the compilation and maintenance of graph transducer grammatical and lexical resources. Such an environment is indispensable for any effort to create consistent large coverage NLP-resources by human experts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dong-etal-2020-multi","url":"https:\/\/aclanthology.org\/2020.emnlp-main.749","title":"Multi-Fact Correction in Abstractive Text Summarization","abstract":"Pre-trained neural abstractive summarization systems have dominated extractive strategies on news summarization performance, at least in terms of ROUGE. However, system-generated abstractive summaries often face the pitfall of factual inconsistency: generating incorrect facts with respect to the source text. To address this challenge, we propose Span-Fact, a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection. Our models employ single or multi-masking strategies to either iteratively or autoregressively replace entities in order to ensure semantic consistency w.r.t. the source text, while retaining the syntactic structure of summaries generated by abstractive summarization models. Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by Microsoft Dynamics 365 AI Research and the Canada CIFAR AI Chair program. We would like to thank","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"morante-etal-2020-annotating","url":"https:\/\/aclanthology.org\/2020.lrec-1.611","title":"Annotating Perspectives on Vaccination","abstract":"In this paper we present the Vaccination Corpus, a corpus of texts related to the online vaccination debate that has been annotated with three layers of information about perspectives: attribution, claims and opinions. Additionally, events related to the vaccination debate are also annotated. The corpus contains 294 documents from the Internet which reflect different views on vaccinations.
It has been compiled to study the language of online debates, with the final goal of experimenting with methodologies to extract and contrast perspectives within the vaccination debate.","label_nlp4sg":1,"task":["Annotating Perspectives on Vaccination"],"method":["Vaccination Corpus"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza-prize awarded to Piek Vossen in the project \"Understanding Language by Machines\" (SPI 30-673, 2014-2019). We would like to thank all the student annotators.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"heemskerk-1993-probabilistic","url":"https:\/\/aclanthology.org\/E93-1023","title":"A Probabilistic Context-free Grammar for Disambiguation in Morphological Parsing","abstract":"One of the major problems one is faced with when decomposing words into their constituent parts is ambiguity: the generation of multiple analyses for one input word, many of which are implausible. In order to deal with ambiguity, the MORphological PArser MORPA is provided with a probabilistic context-free grammar (PCFG), i.e. it combines a \"conventional\" context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse. Consequently, remaining analyses can be ordered along a scale of plausibility. Test performance data will show that a PCFG yields good results in morphological parsing. MORPA is a fully implemented parser developed for use in a text-to-speech conversion system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I wish to thank my former colleagues of the Phonetics Laboratory at Leiden University who contributed to the work on MORPA. Furthermore, I am greatly indebted to Louis ten Bosch for his help with probability theory and Emiel Krahmer and Wessel Kraaij for solving all my LaTeX problems.","year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"troiano-etal-2018-computational","url":"https:\/\/aclanthology.org\/D18-1367","title":"A Computational Exploration of Exaggeration","abstract":"Several NLP studies address the problem of figurative language, but among non-literal phenomena, they have neglected exaggeration. This paper presents a first computational approach to this figure of speech. We explore the possibility to automatically detect exaggerated sentences. First, we introduce HYPO, a corpus containing overstatements (or hyperboles) collected on the web and validated via crowdsourcing.
Then, we evaluate a number of models trained on HYPO, and bring evidence that the task of hyperbole identification can be successfully performed based on a small set of semantic features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-camargo-etal-2015-strategies","url":"https:\/\/aclanthology.org\/W15-5618","title":"On Strategies of Human Multi-Document Summarization","abstract":"In this paper, using a corpus with manual alignments of human-written summaries and their source news, we show that such summaries consist of information that has specific linguistic features, revealing human content selection strategies, and that these strategies produce indicative results that are competitive with a state of the art system for Portuguese. Resumo. Neste artigo, a partir de um corpus com alinhamentos manuais entre sum\u00e1rios e suas respectivas not\u00edcias-fonte, evidencia-se que tais sum\u00e1rios s\u00e3o compostos por informa\u00e7\u00f5es que possuem caracter\u00edsticas lingu\u00edsticas espec\u00edficas, revelando estrat\u00e9gias humanas de sumariza\u00e7\u00e3o, e que essas estrat\u00e9gias produzem resultados iniciais que s\u00e3o competitivos com um sistema do estado da arte para o portugu\u00eas.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tymoshenko-moschitti-2021-strong","url":"https:\/\/aclanthology.org\/2021.findings-acl.426","title":"Strong and Light Baseline Models for Fact-Checking Joint Inference","abstract":"How to combine several pieces of evidence to verify a claim is an interesting semantic task. Very complex methods have been proposed, combining different evidence vectors using an evidence interaction graph. In this paper, we show that in case of inference based on transformer models, two effective approaches use either (i) a simple application of max pooling over the Transformer evidence vectors; or (ii) computing a weighted sum of the evidence vectors. Our experiments on the FEVER claim verification task show that the methods above achieve the state of the art, constituting a strong baseline for much more computationally complex methods.","label_nlp4sg":1,"task":["Fact - Checking","claim verification"],"method":["transformer"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"liu-etal-2021-mulda","url":"https:\/\/aclanthology.org\/2021.acl-long.453","title":"MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER","abstract":"Named Entity Recognition (NER) for low-resource languages is both a practical and challenging research problem. This paper addresses zero-shot transfer for cross-lingual NER, especially when the amount of source-language training data is also limited.
The paper first proposes a simple but effective labeled sequence translation method to translate source-language training data to target languages and avoids problems such as word order change and entity span determination. With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages. These augmented data enable the language-model-based NER models to generalize better with both the language-specific features from the target-language synthetic data and the language-independent features from multilingual synthetic data. An extensive set of experiments was conducted to demonstrate encouraging cross-lingual transfer performance of the new research on a wide variety of target languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partly supported by the Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University. Linlin Liu would like to thank the support from Interdisciplinary Graduate School, Nanyang Technological University. We would like to thank the help from our Alibaba colleagues, Ruidan He and Qingyu Tan in this work as well.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rambow-1990-domain","url":"https:\/\/aclanthology.org\/W90-0112","title":"Domain Communication Knowledge","abstract":"This paper advances the hypothesis that any text planning task relies, explicitly or implicitly, on domain-specific text planning knowledge. This knowledge, \"domain communication knowledge\", is different from both domain knowledge and general knowledge about communication. The paper presents the text generation system Joyce, which represents such knowledge explicitly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"egonmwan-chali-2019-transformer-seq2seq","url":"https:\/\/aclanthology.org\/D19-5627","title":"Transformer and seq2seq model for Paraphrase Generation","abstract":"Paraphrase generation aims to improve the clarity of a sentence by using different wording that conveys similar meaning. For better quality of generated paraphrases, we propose a framework that combines the effectiveness of two models: transformer and sequence-to-sequence (seq2seq). We design a two-layer stack of encoders. The first layer is a transformer model containing 6 stacked identical layers with multi-head self-attention, while the second layer is a seq2seq model with gated recurrent units (GRU-RNN). The transformer encoder layer learns to capture long-term dependencies, together with syntactic and semantic properties of the input sentence. This rich vector representation learned by the transformer serves as input to the GRU-RNN encoder responsible for producing the state vector for decoding. 
Experimental results on two datasets, QUORA and MSCOCO, using our framework produce a new benchmark for paraphrase generation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their useful comments. The research reported in this paper was conducted at the University of Lethbridge and supported by Alberta Innovates and Alberta Education.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2018-attnconvnet","url":"https:\/\/aclanthology.org\/S18-1019","title":"AttnConvnet at SemEval-2018 Task 1: Attention-based Convolutional Neural Networks for Multi-label Emotion Classification","abstract":"In this paper, we propose an attention-based classifier that predicts multiple emotions of a given sentence. Our model imitates a human's two-step procedure of sentence understanding and it can effectively represent and classify sentences. With emoji-to-meaning preprocessing and extra lexicon utilization, we further improve the model performance. We train and evaluate our model with data provided by SemEval-2018 task 1-5, each sentence of which has several labels among 11 given emotions. Our model achieves 5th\/1st rank in English\/Spanish respectively.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johannsen-etal-2015-language","url":"https:\/\/aclanthology.org\/D15-1245","title":"Any-language frame-semantic parsing","abstract":"We present a multilingual corpus of Wikipedia and Twitter texts annotated with FRAMENET 1.5 semantic frames in nine different languages, as well as a novel technique for weakly supervised cross-lingual frame-semantic parsing. Our approach only assumes the existence of linked, comparable source and target language corpora (e.g., Wikipedia) and a bilingual dictionary (e.g., Wiktionary or BABELNET). Our approach uses a truly interlingual representation, enabling us to use the same model across all nine languages. We present average error reductions over running a state-of-the-art parser on word-to-word translations of 46% for target identification, 37% for frame identification, and 14% for argument identification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lin-etal-2021-riddlesense","url":"https:\/\/aclanthology.org\/2021.findings-acl.131","title":"RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge","abstract":"Question: I have five fingers but I am not alive. What am I? Answer: a glove. Answering such a riddle-style question is a challenging cognitive process, in that it requires complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning skills, which are all important abilities for advanced natural language understanding (NLU). However, there is currently no dataset aiming to test these abilities. 
In this paper, we present RIDDLESENSE, a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering riddle-style commonsense questions. We systematically evaluate a wide range of models over the RIDDLESENSE challenge, and point out that there is a large gap between the best supervised model and human performance, suggesting intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards building advanced NLU systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, the Defense Advanced Research Projects Agency with award W911NF-19-20271, and NSF SMA 18-29268. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. We would like to thank all the collaborators in USC INK research lab and the reviewers for their constructive feedback on the work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tezcan-vandeghinste-2011-smt","url":"https:\/\/aclanthology.org\/2011.eamt-1.10","title":"SMT-CAT integration in a Technical Domain: Handling XML Markup Using Pre \\& Post-processing Methods","abstract":"The increasing use of eXtensible Markup Language (XML) is bringing additional challenges to statistical machine translation (SMT) and computer assisted translation (CAT) workflow integration in the translation industry. This paper analyzes the need to handle XML markup as a part of the translation material in a technical domain. It explores different ways of handling such markup by applying transducers in pre and post-processing steps. A series of experiments indicates that XML markup needs a specific treatment in certain scenarios. One of the proposed methods not only satisfies the SMT-CAT integration need, but also provides slightly improved translation results on English-to-Spanish and English-to-French translations, compared to having no additional pre or post-processing steps.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"atalla-etal-2011-investigating","url":"https:\/\/aclanthology.org\/W11-3903","title":"Investigating the Applicability of current Machine-Learning based Subjectivity Detection Algorithms on German Texts","abstract":"In the field of subjectivity detection, algorithms automatically classify pieces of text into fact or opinion. Many different approaches have been successfully evaluated on English or Chinese texts. Nevertheless the assumption that these algorithms equally perform on all other languages cannot be verified yet. It is our intention to encourage more research in other languages, making a start with German. Therefore, this work introduces a German corpus for subjectivity detection on German news articles. 
We carry out a study in which we choose and implement a number of state-of-the-art subjectivity detection approaches. Finally, we compare these algorithms' performances and give advice on how to use and extend the introduced dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rusnachenko-etal-2019-distant","url":"https:\/\/aclanthology.org\/R19-1118","title":"Distant Supervision for Sentiment Attitude Extraction","abstract":"News articles often convey attitudes between the mentioned subjects, which is essential for understanding the described situation. In this paper, we describe a new approach to distant supervision for extracting sentiment attitudes between named entities mentioned in texts. Two factors (pair-based and frame-based) were used to automatically label an extensive news collection, dubbed RuAttitudes. The latter became a basis for adapting and training convolutional architectures, including piecewise max pooling and full use of information across different sentences. The results show that models trained with RuAttitudes outperform ones that were trained with only the supervised learning approach and achieve a 13.4% increase in F1-score on the RuSentRel collection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The reported study was funded by RFBR according to the research project \u2116 19-37-50001. The development of Russian sentiment frames is supported by the RFBR research project \u2116 16-29-09606.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"denero-klein-2007-tailoring","url":"https:\/\/aclanthology.org\/P07-1003","title":"Tailoring Word Alignments to Syntactic Machine Translation","abstract":"Extracting tree transducer rules for syntactic MT systems can be hindered by word alignment errors that violate syntactic correspondences. We propose a novel model for unsupervised word alignment which explicitly takes into account target language constituent structure, while retaining the robustness and efficiency of the HMM alignment model. Our model's predictions improve the yield of a tree transducer extraction system, without sacrificing alignment quality. We also discuss the impact of various posterior-based methods of reconciling bidirectional alignments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"attia-etal-2010-automatic","url":"https:\/\/aclanthology.org\/W10-3704","title":"Automatic Extraction of Arabic Multiword Expressions","abstract":"In this paper we investigate the automatic acquisition of Arabic Multiword Expressions (MWE). We propose three complementary approaches to extract MWEs from available data resources. The first approach relies on the correspondence asymmetries between Arabic Wikipedia titles and titles in 21 different languages. 
The second approach collects English MWEs from Princeton WordNet 3.0, translates the collection into Arabic using Google Translate, and utilizes different search engines to validate the output. The third uses lexical association measures to extract MWEs from a large unannotated corpus. We experimentally explore the feasibility of each approach and measure the quality and coverage of the output against gold standards.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by Enterprise Ireland (PC\/09\/037), the Irish Research Council for Science Engineering and Technology (IRCSET), and the EU projects PANACEA (7FP-ITC-248064) and META-NET (FP7-ICT-249119).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"flanigan-etal-2013-large","url":"https:\/\/aclanthology.org\/N13-1025","title":"Large-Scale Discriminative Training for Statistical Machine Translation Using Held-Out Line Search","abstract":"We introduce a new large-scale discriminative learning algorithm for machine translation that is capable of learning parameters in models with extremely sparse features. To ensure their reliable estimation and to prevent overfitting, we use a two-phase learning algorithm. First, the contribution of individual sparse features is estimated using large amounts of parallel data. Second, a small development corpus is used to determine the relative contributions of the sparse features and standard dense features. Not only does this two-phase learning approach prevent overfitting, the second pass optimizes corpus-level BLEU of the Viterbi translation of the decoder. We demonstrate significant improvements using sparse rule indicator features in three different translation tasks. To our knowledge, this is the first large-scale discriminative training algorithm capable of showing improvements over the MERT baseline with only rule indicator features in addition to the standard MERT features.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was sponsored by the U. S. Army Research Laboratory and the U. S. Army Research Office under contract\/grant number W911NF-10-1-0533. Jeffrey Flanigan would like to thank his co-advisor Lori Levin for support and encouragement during this work.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ranta-1998-multilingual","url":"https:\/\/aclanthology.org\/W98-1308","title":"A Multilingual Natural-Language Interface to Regular Expressions","abstract":"This report explains a natural-language interface to the formalism of XFST (Xerox Finite State Tool), which is a rich language used for specifying finite state automata and transducers. By using the interface, it is possible to give input to XFST in English and French, as well as to translate formal XFST code into these languages. It is also possible to edit XFST source files and their natural-language equivalents interactively, in parallel. The interface is based on an abstract syntax of the regular expression language and of a corresponding fragment of natural language. The relations between the different components are defined by compositional interpretation and generation functions, and by corresponding combinatory parsers. 
This design has been inspired by the logical grammar of Montague. The grammar-driven design makes it easy to extend and to modify the interface, and also to link it with other functionalities such as compiling and semantic reasoning. It is also easy to add new languages to the interface. Both the grammatical theory and the interface facilities based on it have been implemented in the functional programming language Haskell, which supports a declarative and modular style of programming. Some of the modules developed for the interface have other uses as well: there is a type system of regular expressions, preventing some compiler errors, a denotational semantics in terms of lazy lists, and an extension of the XFST script language by definitions of functions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yu-hatzivassiloglou-2003-towards","url":"https:\/\/aclanthology.org\/W03-1017","title":"Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences","abstract":"Opinion question answering is a challenging task for natural language processing. In this paper, we discuss a necessary component for an opinion question answering system: separating opinions from fact, at both the document and sentence level. We present a Bayesian classifier for discriminating between documents with a preponderance of opinions such as editorials from regular news stories, and describe three unsupervised, statistical techniques for the significantly harder task of detecting opinions at the sentence level. We also present a first model for classifying opinion sentences as positive or negative in terms of the main perspective being expressed in the opinion. Results from a large collection of news stories and a human evaluation of 400 sentences are reported, indicating that we achieve very high performance in document classification (upwards of 97% precision and recall), and respectable performance in detecting opinions and classifying them at the sentence level as positive, negative, or neutral (up to 91% accuracy).","label_nlp4sg":1,"task":["Opinion question answering"],"method":["Bayesian classifier"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We wish to thank Eugene Agichtein, Sasha Blair-Goldensohn, Roy Byrd, John Chen, Noemie Elhadad, Kathy McKeown, Becky Passonneau, and the anonymous reviewers for valuable input on earlier versions of this paper. We are grateful to the graduate students at Columbia University who participated in our evaluation of sentence-level opinions. This work was supported by ARDA under AQUAINT project MDA908-02-C-0008. Any opinions, findings, or recommendations are those of the authors and do not necessarily reflect ARDA's views.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"kuo-yang-2004-incorporating","url":"https:\/\/aclanthology.org\/Y04-1026","title":"Incorporating Pronunciation Variation into Different Strategies of Term Transliteration","abstract":"Term transliteration addresses the problem of converting terms in one language into their phonetic equivalents in the other language via spoken form. 
It is especially concerned with proper nouns, such as personal names, place names and organization names. Pronunciation variation refers to pronunciation ambiguity frequently encountered in spoken language, which has a serious impact on term transliteration. More than one transliteration variant can be generated by an out-of-vocabulary term due to different kinds of pronunciation variations. It is important to take this issue into account when dealing with term transliteration. Several models, which take pronunciation variation into consideration, are proposed for term transliteration in this paper. They describe transliteration from various viewpoints and utilize the relationships trained from extracted transliterated-term pairs. An experiment in applying the proposed models to term transliteration was conducted and evaluated. The experimental results show promise. These proposed models are not only applicable to term transliteration, but are also helpful in the indexing and retrieval of spoken documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"devault-etal-2005-information","url":"https:\/\/aclanthology.org\/P05-3001","title":"An Information-State Approach to Collaborative Reference","abstract":"We describe a dialogue system that works with its interlocutor to identify objects. Our contributions include a concise, modular architecture with reversible processes of understanding and generation, an information-state model of reference, and flexible links between semantics and collaborative problem solving.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Supported in part by NSF HLC 0308121. Thanks to Paul Tepper.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sinha-etal-2014-capturing","url":"https:\/\/aclanthology.org\/W14-4108","title":"Capturing ``attrition intensifying'' structural traits from didactic interaction sequences of MOOC learners","abstract":"This work is an attempt to discover hidden structural configurations in learning activity sequences of students in Massive Open Online Courses (MOOCs). Leveraging combined representations of video clickstream interactions and forum activities, we seek to fundamentally understand traits that are predictive of decreasing engagement over time. Grounded in the interdisciplinary field of network science, we follow a graph-based approach to successfully extract indicators of active and passive MOOC participation that reflect persistence and regularity in the overall interaction footprint. Using these rich educational semantics, we focus on the problem of predicting student attrition, one of the major highlights of MOOC literature in recent years. Our results indicate an improvement over a baseline n-gram based approach in capturing \"attrition intensifying\" features from the learning activities that MOOC learners engage in. 
Implications for some compelling future research are discussed.","label_nlp4sg":1,"task":["predicting student attrition"],"method":["graph based approach"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ehara-2021-extent-lexical","url":"https:\/\/aclanthology.org\/2021.wnut-1.50","title":"To What Extent Does Lexical Normalization Help English-as-a-Second Language Learners to Read Noisy English Texts?","abstract":"How difficult is it for English-as-a-second language (ESL) learners to read noisy English texts? Do ESL learners need lexical normalization to read noisy English texts? These questions may also affect community formation on social networking sites where differences can be attributed to ESL learners and native English speakers. However, few studies have addressed these questions. To this end, we built highly accurate readability assessors to evaluate the readability of texts for ESL learners. We then applied these assessors to noisy English texts to further assess the readability of the texts. The experimental results showed that although intermediate-level ESL learners can read most noisy English texts in the first place, lexical normalization significantly improves the readability of noisy English texts for ESL learners.","label_nlp4sg":1,"task":["evaluate the readability of texts"],"method":["Lexical Normalization"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This study was supported by JST ACT-X Grant Number JPMJAX2006 and JSPS KAKENHI Grant Number 18K18118. We used the ABCI infrastructure from AIST for the computational resources. We appreciate anonymous reviewers for their valuable comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kawakami-etal-2017-learning","url":"https:\/\/aclanthology.org\/P17-1137","title":"Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling","abstract":"Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the \"bursty\" distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the three anonymous reviewers for their valuable feedback. 
The third author acknowledges the support of the EPSRC and NVIDIA Corporation.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rao-etal-2018-overview","url":"https:\/\/aclanthology.org\/W18-3706","title":"Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis","abstract":"This paper presents the NLPTEA 2018 shared task for Chinese Grammatical Error Diagnosis (CGED) which seeks to identify grammatical error types, their range of occurrence and recommended corrections within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 20 teams registered for this shared task, 13 teams developed systems and submitted a total of 32 runs. Progress in system performance was obvious, reaching an F1 of 36.12% at the position level and 25.27% at the correction level. All data sets with gold standards and scoring scripts are made publicly available to researchers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all the participants for taking part in our shared task. We would like to thank Kuei-Ching Lee for implementing the evaluation program and the usage feedback from Bo Zheng (in the CGED 2016). Lung-Hao Lee contributed a lot in consultation and bidding. This study was supported by the projects from Beijing Advanced Innovation Center for Language Resources (KYD17004), Institute Project of Beijing Language and Culture University (18YJ060001), Social Science Funding of China (16AYY007), Social Science Funding of Beijing (15WYA017), National Language Committee Project (ZDI135-58, ZDI135-3), MOE Project of Key Research Institutes in Universities (16JJD740004).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"afzal-etal-2020-cora","url":"https:\/\/aclanthology.org\/2020.nlpcovid19-2.2","title":"CORA: A Deep Active Learning Covid-19 Relevancy Algorithm to Identify Core Scientific Articles","abstract":"Ever since the COVID-19 pandemic broke out, the academic and scientific research community, as well as industry and governments around the world, have joined forces in an unprecedented manner to fight the threat. Clinicians, biologists, chemists, bioinformaticians, nurses, data scientists, and all of the affiliated relevant disciplines have been mobilized to help discover efficient treatments for the infected population, as well as a vaccine solution to prevent further the virus' spread. In this combat against the virus responsible for the pandemic, key for any advancements is the timely, accurate, peer-reviewed, and efficient communication of any novel research findings. In this paper we present a novel framework to address the information need of filtering efficiently the scientific bibliography for relevant literature around COVID-19. The contributions of the paper are summarized in the following: we define and describe the information need that encompasses the major requirements for COVID-19 articles' relevancy, we present and release an expert-curated benchmark set for the task, and we analyze the performance of several state-of-the-art machine learning classifiers that may distinguish the relevant from the non-relevant COVID-19 literature. 
","label_nlp4sg":1,"task":["Identify Core Scientific Articles"],"method":["machine learning classifiers","benchmark set"],"goal1":"Good Health and Well-Being","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dong-etal-2014-adaptive","url":"https:\/\/aclanthology.org\/P14-2009","title":"Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification","abstract":"We propose Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification. AdaRNN adaptively propagates the sentiments of words to target depending on the context and syntactic relationships between them. It consists of more than one composition function, and we model the adaptive sentiment propagations as distributions over these composition functions. The experimental studies illustrate that AdaRNN improves the baseline methods. Furthermore, we introduce a manually annotated dataset for target-dependent Twitter sentiment analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fonollosa-moreno-2000-speechdat","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/331.pdf","title":"SpeechDat-Car Fixed Platform","abstract":"SpeechDat-Car aims to develop a set of speech databases to support training and testing of multilingual speech recognition applications in the car environment. Two types of recordings compose the database. The first type consists of wideband audio signals recorded directly in the car while the second type is composed of GSM signals transmitted from the car and recorded simultaneously in a far-end. Therefore, two recording platforms were used, a 'mobile' recording platform installed inside the car and a 'fixed' recording platform located at the far-end fixed side of the GSM communications system. This paper describes the fixed platform software developed by the Universitat Polit\u00e8cnica de Catalunya (ADA-K). This software is able to work with standard inexpensive PC cards for ISDN lines. The telephone server presented in this paper to automate the recording of speech databases was developed by the authors in the framework of the SpeechDat-Car EC project LE4-8334 [Moreno (2000)]. Automatic speech recognition (ASR) appears to be a particularly well-adapted technology for providing voice-based interfaces (based on hands-free mode) that will enable new in-car applications to develop while taking care of safety aspects. However, the car environment is known to be particularly noisy (street noise, car engine noise, vibration noises, bubble noise, etc...). To obtain an optimal performance for speech recognition, it is necessary to train the system on large corpora of speech data recorded in context (i.e. directly in the car). 
The European project SpeechDat-Car aims at providing a set of uniform, coherent databases for nine European languages and for American English.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-1996-mandarin","url":"https:\/\/aclanthology.org\/O96-2003","title":"A Mandarin Text-to-Speech System","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2019-microblog","url":"https:\/\/aclanthology.org\/N19-1164","title":"Microblog Hashtag Generation via Encoding Conversation Contexts","abstract":"Automatic hashtag annotation plays an important role in content understanding for microblog posts. To date, progress made in this field has been restricted to phrase selection from limited candidates, or word-level hashtag discovery using topic models. Different from previous work considering hashtags to be inseparable, our work is the first effort to annotate hashtags with a novel sequence generation framework via viewing the hashtag as a short sequence of words. Moreover, to address the data sparsity issue in processing short microblog posts, we propose to jointly model the target posts and the conversation contexts initiated by them with bidirectional attention. Extensive experimental results on two large-scale datasets, newly collected from English Twitter and Chinese Weibo, show that our model significantly outperforms state-of-the-art models based on classification. Further studies demonstrate our ability to effectively generate rare and even unseen hashtags, which is however not possible for most existing methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We thank NAACL reviewers for their insightful suggestions on various aspects of this work.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobele-etal-2020-role","url":"https:\/\/aclanthology.org\/2020.scil-1.28","title":"The role of information theory in gap-filler dependencies","abstract":"Filler-gap dependencies are computationally expensive, motivating formally richer operations than constituency formation. Many studies investigate the nature of online sentence processing when the filler is encountered before the gap. Here the difficulty is where a gap should be posited. Comparatively few studies investigate the reverse situation, where the gap is encountered before the filler. This is presumably due to the fact that this is not a natural class of dependencies in English, as it arises only in cases of remnant movement, or rightward movement, the analysis of which is shakier and more theory-laden than the converse. In languages with wh-in-situ constructions, like Chinese, the gap-filler construction is systematic, and natural. 
Sentences (1) and (2) are declarative and matrix\/embedded wh-questions respectively in Mandarin Chinese. Although sentences (1) and (2) have similar word order on the surface, in (2) the in-situ wh-phrase who takes scope either over the entire sentence (i.e. the matrix question parse) or at the embedded clause (i.e. the embedded question parse). The scope positions precede the wh-phrase, giving rise to the gap-filler dependencies. Gap-filler constructions raise different problems than do filler-gap ones. In the latter, an item is encountered, which needs to satisfy other (to-be-encountered) dependencies to be licensed. There is no uncertainty that a gap must be postulated, only where it should be postulated. In gap-filler constructions, a dependency is postulated before the item entering into it appears. In contrast to the filler-gap dependency type, gap-filler dependencies do not require more formal power from the syntax; they can (given a finite upper bound on their number) be analyzed with GPSG-style slash-feature percolation and are thus context-free. In systems with (covert) syntactic movement, the wh-mover is predictably silent, and could be optimized away (into the context-free backbone of the derivation tree). The motivation for the postulation of a syntactic dependency is to streamline the account of sentence processing; while a purely semantic scope-taking account could be implemented (e.g. using continuations), the role and resolution of semantic information during parsing is not as well understood.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kang-eshkol-taravella-2020-empirical","url":"https:\/\/aclanthology.org\/2020.lrec-1.608","title":"An Empirical Examination of Online Restaurant Reviews","abstract":"In the wake of (Pang et al., 2002; Turney, 2002; Liu, 2012) inter alia, opinion mining and sentiment analysis have focused on extracting either positive or negative opinions from texts and determining the targets of these opinions. In this study, we go beyond the coarse-grained positive vs. negative opposition and propose a corpus-based scheme that detects evaluative language at a finer-grained level. We classify each sentence into one of four evaluation types based on the proposed scheme: (1) the reviewer's opinion on the restaurant (positive, negative, or mixed); (2) the reviewer's input\/feedback to potential customers and restaurant owners (suggestion, advice, or warning); (3) whether the reviewer wants to return to the restaurant (intention); (4) the factual statement about the experience (description). We apply classical machine learning and deep learning methods to show the effectiveness of our scheme. We also interpret the performances that we obtained for each category by taking into account the specificities of the corpus treated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"peng-etal-2017-may","url":"https:\/\/aclanthology.org\/E17-1043","title":"May I take your order? 
A Neural Model for Extracting Structured Information from Conversations","abstract":"In this paper we tackle a unique and important problem of extracting a structured order from the conversation a customer has with an order taker at a restaurant. This is motivated by an actual system under development to assist in the order taking process. We develop a sequence-to-sequence model that is able to map from unstructured conversational input to the structured form that is conveyed to the kitchen and appears on the customer receipt. This problem is critically different from other tasks like machine translation where sequence-to-sequence models have been used: the input includes two sides of a conversation; the output is highly structured; and logical manipulations must be performed, for example when the customer changes his mind while ordering. We present a novel sequence-to-sequence model that incorporates a special attention-memory gating mechanism and conversational role markers. The proposed model improves performance over both a phrase-based machine translation approach and a standard sequence-to-sequence model. Hi, how can I help you? We'd like a large cheese pizza. Any toppings? Yeah, how about pepperoni and two diet cokes. What size? Uh, medium and make that three cokes. Anything else? A small Caesar salad with the dressing on the side Sure, is that it? Yes, that's all, thanks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was done while Baolin Peng was an intern at Microsoft Research.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shinnou-sasaki-2004-semi","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/253.pdf","title":"Semi-supervised Learning by Fuzzy Clustering and Ensemble Learning","abstract":"This paper proposes a semi-supervised learning method using Fuzzy clustering to solve word sense disambiguation problems. Furthermore, we reduce side effects of semi-supervised learning by ensemble learning. We set N classes for N labeled instances. The n-th labeled instance is used as the prototype of the n-th class. By using Fuzzy clustering for unlabeled instances, prototypes are moved to more suitable positions. We can classify a test instance by the Nearest Neighbor (k-NN) with the moved prototypes. Moreover, to reduce side effects of semi-supervised learning, we use the ensemble learning that combines the k-NN with the initial labeled instances (the initial prototypes) and the k-NN with prototypes moved by Fuzzy clustering.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lee-1996-irregular","url":"https:\/\/aclanthology.org\/Y96-1048","title":"On the Irregular Verbs in Korean","abstract":"The aim of this paper is to show how an optimality-theoretic conception of phonology (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993) overcomes some of the limitations of the traditional ways of treating the so-called 'irregular verbs' in Korean. Building on the notion of OT, I attempt to shed new light on the properties of some general phonological phenomena of Korean. 
In the literature on Korean, the behavior of stem-final 'p' and 'h' is usually left unanalyzed as the alternations are considered to be phonologically unmotivated. I show in this paper that the irregular alternations are not really irregular but phonologically predictable.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"black-etal-2003-learning","url":"https:\/\/aclanthology.org\/W03-2703","title":"Learning to Classify Utterances in a Task-Oriented Dialogue","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tsai-el-ghaoui-2020-sparse","url":"https:\/\/aclanthology.org\/2020.sustainlp-1.8","title":"Sparse Optimization for Unsupervised Extractive Summarization of Long Documents with the Frank-Wolfe Algorithm","abstract":"We address the problem of unsupervised extractive document summarization, especially for long documents. We model the unsupervised problem as a sparse auto-regression one and approximate the resulting combinatorial problem via a convex, norm-constrained problem. We solve it using a dedicated Frank-Wolfe algorithm. To generate a summary with k sentences, the algorithm only needs to execute \u2248 k iterations, making it very efficient. We explain how to avoid explicit calculation of the full gradient and how to include sentence embedding information. We evaluate our approach against two other unsupervised methods using both lexical (standard) ROUGE scores, as well as semantic (embedding-based) ones. Our method achieves better results with both datasets and works especially well when combined with embeddings for highly paraphrased summaries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Richard Liou, Tanya Roosta, and Gary Cheng for their constructive discussions on drafts of this paper and SumUp Analytics for providing the dataset.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"steinberger-etal-2005-improving","url":"https:\/\/aclanthology.org\/H05-1001","title":"Improving LSA-based Summarization with Anaphora Resolution","abstract":"We propose an approach to summarization exploiting both lexical information and the output of an automatic anaphoric resolver, and using Singular Value Decomposition (SVD) to identify the main terms. We demonstrate that adding anaphoric information results in significant performance improvements over a previously developed system, in which only lexical terms are used as the input to SVD. 
However, we also show that how anaphoric information is used is crucial: whereas using this information to add new terms does result in improved performance, simple substitution makes the performance worse.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wittenburg-etal-2002-methods","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/221.pdf","title":"Methods of Language Documentation in the DOBES project","abstract":"The DOBES program for the documentation of endangered languages, started in September 2000, has just completed its pilot phase. Eight documentation teams and one archiving team worked out agreements on formats, tools, naming conventions, and encoding, especially the linguistic level of encoding. These standards will form the basis for a five-year main phase, which will include about 20 teams. In the pilot phase, strategies to set up an online archive incorporating redundancy and regular backup were developed and implemented. Ethical and legal aspects of the archiving process were discussed and amounted to a number of documents to which all participants have to adhere. Tools and converters developed within the pilot phase are available to others.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trommer-1998-optimal","url":"https:\/\/aclanthology.org\/W98-0904","title":"Optimal Morphology","abstract":"Optimal morphology (OM) is a finite state formalism that unifies concepts from Optimality Theory (OT, Prince & Smolensky, 1993) and Declarative Phonology (DP, Scobbie, Coleman & Bird, 1996) to describe morphophonological alternations in inflectional morphology. Candidate sets are formalized by inviolable lexical constraints which map abstract morpheme signatures to allomorphs. Phonology is implemented as violable rankable constraints selecting optimal candidates from these. Both types of constraints are realized by finite state transducers. Using phonological data from Albanian, it is shown that, given a finite state lexicalization of candidate outputs for word forms, OM allows more natural analyses than unviolable finite state constraints do. Two possible evaluation strategies for OM grammars are considered: the global evaluation procedure from Ellison (1994) and a simple strategy of local constraint evaluation. While the OM-specific lexicalization of candidate sets allows straightforward generation and a simple method of morphological parsing even under global evaluation, local constraint evaluation is shown to be preferable empirically and to be formally more restrictive. The first point is illustrated by an account of directionality effects in some classical Mende data. A procedure is given that generates a finite state transducer simulating the effects of local constraint evaluation. 
Thus local as opposed to global evaluation (Frank & Satta, 1998) seems to guarantee the finite-stateness of the input-output-mapping.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"niu-etal-2004-context","url":"https:\/\/aclanthology.org\/W04-0846","title":"Context clustering for Word Sense Disambiguation based on modeling pairwise context similarities","abstract":"Traditionally, word sense disambiguation (WSD) involves a different context model for each individual word. This paper presents a new approach to WSD using weakly supervised learning. Statistical models are not trained for the contexts of each individual word, but for the similarities between context pairs at category level. The insight is that the correlation regularity between the sense distinction and the context distinction can be captured at category level, independent of individual words. This approach only requires a limited amount of existing annotated training corpus in order to disambiguate the entire vocabulary. A context clustering scheme is developed within the Bayesian framework. A maximum entropy model is then trained to represent the generative probability distribution of context similarities based on heterogeneous features, including trigger words and parsing structures. Statistical annealing is applied to derive the final context clusters by globally fitting the pairwise context similarity distribution. Benchmarking shows that this new approach significantly outperforms the existing WSD systems in the unsupervised category, and rivals supervised WSD systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Navy SBIR program under contract N00178-03-C-1047.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kozareva-etal-2007-language","url":"https:\/\/aclanthology.org\/W07-1703","title":"A Language Independent Approach for Name Categorization and Discrimination","abstract":"We present a language independent approach for fine-grained categorization and discrimination of names on the basis of text semantic similarity information. The experiments are conducted for languages from the Romance (Spanish) and Slavonic (Bulgarian) language groups. Despite the fact that these languages have specific characteristics as word-order and grammar, the obtained results are encouraging and show that our name entity method is scalable not only to different categories, but also to different languages. In an exhaustive experimental evaluation, we have demonstrated that our approach yields better results compared to a baseline system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the three anonymous reviewers for their useful comments and suggestions. 
This work was partially funded by the European Union under the project QALLME number FP6 IST-033860 and by the Spanish Ministry of Science and Technology under the project TEX-MESS number TIN2006-15265-C06-01.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"osborne-etal-2014-real","url":"https:\/\/aclanthology.org\/P14-5007","title":"Real-Time Detection, Tracking, and Monitoring of Automatically Discovered Events in Social Media","abstract":"We introduce ReDites, a system for realtime event detection, tracking, monitoring and visualisation. It is designed to assist Information Analysts in understanding and exploring complex events as they unfold in the world. Events are automatically detected from the Twitter stream. Then those that are categorised as being security-relevant are tracked, geolocated, summarised and visualised for the end-user. Furthermore, the system tracks changes in emotions over events, signalling possible flashpoints or abatement. We demonstrate the capabilities of ReDites using an extended use case from the September 2013 Westgate shooting incident. Through an evaluation of system latencies, we also show that enriched events are made available for users to explore within seconds of that event occurring.","label_nlp4sg":1,"task":["event detection"],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":"Sustainable Cities and Communities","goal3":null,"acknowledgments":"This work was funded by EPSRC grant EP\/L010690\/1. MO also acknowledges support from grant ERC Advanced Fellowship 249520 GRAMPLUS.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":1,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"senese-etal-2020-mtsi","url":"https:\/\/aclanthology.org\/2020.lrec-1.90","title":"MTSI-BERT: A Session-aware Knowledge-based Conversational Agent","abstract":"In the last years, the state of the art of NLP research has made a huge step forward. Since the release of ELMo (Peters et al., 2018), a new race for the leading scoreboards of all the main linguistic tasks has begun. Several models have been published achieving promising results in all the major NLP applications, from question answering to text classification, passing through named entity recognition. These great research discoveries coincide with an increasing trend for voice-based technologies in the customer care market. One of the next biggest challenges in this scenario will be the handling of multi-turn conversations, a type of conversations that differs from single-turn by the presence of multiple related interactions. The proposed work is an attempt to exploit one of these new milestones to handle multi-turn conversations. MTSI-BERT is a BERT-based model achieving promising results in intent classification, knowledge base action prediction and end of dialogue session detection, to determine the right moment to fulfill the user request. 
The study on the realization of PuffBot, an intelligent chatbot to support and monitor people suffering from asthma, shows how this type of technique could be an important piece in the development of future chatbots.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"\u2022 Computational resources were provided by HPC@POLITO, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino (http:\/\/www.hpc.polito.it). Testing operations were performed using the Google Colab platform. \u2022 The authors acknowledge the funding received from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 870980 \"Enabling immigrants to easily know and exercise their rights\", Call: H2020-SC6-MIGRATION-2018-2019-2020, Topic: DT-MIGRATION-06-2018-2019.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sinha-etal-2005-translation","url":"https:\/\/aclanthology.org\/2005.eamt-1.33","title":"Translation divergence in English-Hindi MT","abstract":"Divergence related to mapping patterns between two or more natural languages is a common phenomenon. The patterns of divergence between two languages need to be identified and strategies devised to handle them to obtain correct translation from one language to another. In the literature on MT, some attempts have been made to classify the types of translation divergence between a pair of natural languages. However, the issue of linguistic divergence is such a complex phenomenon that a lot more needs to be done in this area to identify further classes of divergence, their implications and inter-relatedness as well as the approaches to handle them. In this paper, we take Dorr's (1994) classification of translation divergence as a base and examine the translation patterns between Hindi and English to locate further details and implications of these divergences. We attempt to identify the potential topics that fall under divergence and cannot directly or indirectly be accounted for or accommodated within the existing classification. Our primary goal is to identify different patterns of translation divergence from Hindi to English and vice versa and, on the basis of that, suggest an augmentation in the classification of translation divergence.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"schoene-etal-2019-dilated","url":"https:\/\/aclanthology.org\/D19-6217","title":"Dilated LSTM with attention for Classification of Suicide Notes","abstract":"In this paper we present a dilated LSTM with attention mechanism for document-level classification of suicide notes, last statements and depressed notes. We achieve an accuracy of 87.34% compared to competitive baselines of 80.35% (Logistic Model Tree) and 82.27% (Bi-directional LSTM with Attention). Furthermore, we provide an analysis of both the grammatical and thematic content of suicide notes, last statements and depressed notes. We find that the use of personal pronouns, cognitive processes and references to loved ones are most important.
Finally, we show through visualisations of attention weights that the Dilated LSTM with attention is able to identify the same distinguishing features across documents as the linguistic analysis.","label_nlp4sg":1,"task":["Classification of Suicide Notes"],"method":["LSTM","attention"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"candito-etal-2010-statistical","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/392_Paper.pdf","title":"Statistical French Dependency Parsing: Treebank Conversion and First Results","abstract":"We first describe the automatic conversion of the French Treebank (Abeill\u00e9 and Barrier, 2004), a constituency treebank, into typed projective dependency trees. In order to evaluate the overall quality of the resulting dependency treebank, and to quantify the cases where the projectivity constraint leads to wrong dependencies, we compare a subset of the converted treebank to manually validated dependency trees. We then compare the performance of two treebank-trained parsers that output typed dependency parses. The first parser is the MST parser (Mcdonald et al., 2006), which we directly train on dependency trees. The second parser is a combination of the Berkeley parser (Petrov et al., 2006) and a functional role labeler: trained on the original constituency treebank, the Berkeley parser first outputs constituency trees, which are then labeled with functional roles, and then converted into dependency trees. We found that used in combination with a high-accuracy French POS tagger, the MST parser performs a little better for unlabeled dependencies (UAS=90.3% versus 89.6%), and better for labeled dependencies (LAS=87.6% versus 85.6%).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Joakim Nivre for useful discussions, and suggestions on previous versions of the converted treebank. We also thank the LREC reviewers for their comments. This work was supported by the French National Research Agency (SEQUOIA project ANR-08-EMER-013).","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"castro-etal-2022-fiber","url":"https:\/\/aclanthology.org\/2022.acl-long.209","title":"FIBER: Fill-in-the-Blanks as a Challenging Video Understanding Evaluation Framework","abstract":"Two children throw _____ at each other as a video is captured in slow motion. _____ sits at a drum set and practices playing the drums. A boy is trying to comb his hair while _____ dries it. 
Correct answers: balloons, balloons filled with water, balloons of water, pink balloon, pink water balloon, things, water, water balloons, water-filled balloons Correct answers: child, drummer, future drummer, girl, kid, little girl, little kid, musician, small child, young girl Correct answers: another person, friend, girl, his sister, his sister with hairdryer, person, young woman Figure 1: Three examples from the FIBER dataset, each including three video frames, the caption, the blanked answers from the original caption together with the collected answers (all answers normalized, see Section 3.2).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Laura Biester for helping with data quality assurance. We thank the following people for reviewing drafts of this document: Artem Abzaliev, Christine Feak, Victoria Florence, Zhijing Jin, and Max Krogius. We also want to thank the LIT Research Group @ UMich members for feedback on some of the ideas discussed here. This material is based in part upon work supported by the Automotive Research Center (\"ARC\"). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ARC or any other related entity.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nguyen-tran-2012-influences","url":"https:\/\/aclanthology.org\/W12-5014","title":"Influences of particles on Vietnamese tonal Co-articulation","abstract":"In continuous speech, the pitch contour exhibits variable patterns and it is strongly influenced by its tone context. Although several effective models have been proposed to improve the accuracy for tonal syllables, the quality of Vietnamese synthesis system is poor by lack of lexical parameters corresponding to each syllable in modelling of fundamental frequency. This problem will be clarified by our experiment in this study. This paper presents our study on tonal co-articulation of particles which are frequently used in Vietnamese language. The obtained results show that tonal co-articulation phenomenon always takes place at the transition between two adjacent syllables, the progressive co-articulation is the basic tonal co-articulation and there is an influence of the function of particles on form of F0 contour of Vietnamese tones.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research leading in the paper was supported by the Vietnamese National Key Project KC03.07\/11-15. We would like to thank the project and people involved in this project","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yang-etal-2019-end-end","url":"https:\/\/aclanthology.org\/N19-4013","title":"End-to-End Open-Domain Question Answering with BERTserini","abstract":"We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. 
We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lewis-xia-2009-parsing","url":"https:\/\/aclanthology.org\/E09-2011","title":"Parsing, Projecting \\& Prototypes: Repurposing Linguistic Data on the Web","abstract":"Until very recently, most NLP tasks (e.g., parsing, tagging, etc.) have been confined to a very limited number of languages, the so-called majority languages. Now, as the field moves into the era of developing tools for Resource Poor Languages (RPLs)-a vast majority of the world's 7,000 languages are resource poor-the discipline is confronted not only with the algorithmic challenges of limited data, but also the sheer difficulty of locating data in the first place. In this demo, we present a resource which taps the large body of linguistically annotated data on the Web, data which can be repurposed for NLP tasks. Because the field of linguistics has as its mandate the study of human language-in fact, the study of all human languages-and has wholeheartedly embraced the Web as a means for disseminating linguistic knowledge, the consequence is that a large quantity of analyzed language data can be found on the Web. In many cases, the data is richly annotated and exists for many languages for which there would otherwise be very limited annotated data. The resource, the Online Database of INterlinear text (ODIN), makes this data available and provides additional annotation and structure, making the resource useful to the Computational Linguistic audience.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"modic-petek-2002-contrastive","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/19.pdf","title":"A Contrastive Acoustic-Phonetic Analysis of Slovenian and English Diphthongs","abstract":"This paper aims to narrow the gap between specific and general language theory by initiating contrastive acoustic-phonetic research of Slovenian and English diphthongs. Our ultimate goal is to investigate acoustic-phonetic similarities of the diphthongs across languages in the context of possible portability of resources between the languages. In general, the paper addresses the possibility of using language resources of well-resourced languages to efficiently bootstrap the human language technologies (HLT) of the underresourced language. 
Therefore, as an initial step we performed the contrastive analysis using English as an example \"donor\" language that is well researched with extensive language resources, and Slovenian, the official and widely used language of Slovenia that is challenged by the significant lack of resources such as spoken, written or multimedia language corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"smith-etal-2012-good","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/335_Paper.pdf","title":"A good space: Lexical predictors in word space evaluation","abstract":"Vector space models benefit from using an outside corpus to train the model. It is, however, unclear what constitutes a good training corpus. We have investigated the effect on summary quality when using various language resources to train a vector space based extraction summarizer. This is done by evaluating the performance of the summarizer utilizing vector spaces built from corpora from different genres, partitioned from the Swedish SUC-corpus. The corpora are also characterized using a variety of lexical measures commonly used in readability studies. The performance of the summarizer is measured by comparing automatically produced summaries to human created gold standard summaries using the ROUGE F-score. Our results show that the genre of the training corpus does not have a significant effect on summary quality. However, evaluating the variance in the F-score between the genres based on lexical measures as independent variables in a linear regression model shows that vector spaces created from texts with high syntactic complexity, high word variation, short sentences and few long words produce better summaries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was funded by Santa Anna IT Research Institute AB and the Swedish Post and Telecom Agency, PTS.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"plepi-flek-2021-perceived-intended-sarcasm","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.408","title":"Perceived and Intended Sarcasm Detection with Graph Attention Networks","abstract":"Existing sarcasm detection systems focus on exploiting linguistic markers, context, or user-level priors. However, social studies suggest that the relationship between the author and the audience can be equally relevant for the sarcasm usage and interpretation. In this work, we propose a framework jointly leveraging (1) a user context from their historical tweets together with (2) the social information from a user's conversational neighborhood in an interaction graph, to contextualize the interpretation of the post. We use graph attention networks (GAT) over users and tweets in a conversation thread, combined with dense user history representations.
Apart from achieving state-of-the-art results on the recently published dataset of 19k Twitter users with 30K labeled tweets, adding 10M unlabeled tweets as context, our results indicate that the model contributes to interpreting the sarcastic intentions of an author more than to predicting the sarcasm perception by others.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hashimoto-kurohashi-2008-blog","url":"https:\/\/aclanthology.org\/P08-2018","title":"Blog Categorization Exploiting Domain Dictionary and Dynamically Estimated Domains of Unknown Words","abstract":"This paper presents an approach to text categorization that i) uses no machine learning and ii) reacts on-the-fly to unknown words. These features are important for categorizing Blog articles, which are updated on a daily basis and filled with newly coined words. We categorize 600 Blog articles into 12 domains. As a result, our categorization method achieved an accuracy of 94.0% (564\/600).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shapiro-1975-generation","url":"https:\/\/aclanthology.org\/J75-4019","title":"Generation as Parsing from a Network into a Linear String","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author is indebted to John Lowrance, who implemented the","year":1975,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2018-variational","url":"https:\/\/aclanthology.org\/D18-1020","title":"Variational Sequential Labelers for Semi-Supervised Learning","abstract":"We introduce a family of multitask variational methods for semi-supervised sequence labeling. Our model family consists of a latent-variable generative model and a discriminative labeler. The generative models use latent variables to define the conditional probability of a word given its context, drawing inspiration from word prediction objectives commonly used in learning word embeddings. The labeler helps inject discriminative information into the latent space. We explore several latent variable configurations, including ones with hierarchical structure, which enables the model to account for both label-specific and word-specific information. Our models consistently outperform standard sequential baselines on 8 sequence labeling datasets, and improve further with unlabeled data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank NVIDIA for donating GPUs used in this research, the anonymous reviewers for their comments that improved this paper, and Google for a faculty research award to K. Gimpel that partially supported this research.
This research was funded by NSF grant 1433485.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"orasmaa-kaalep-2017-create","url":"https:\/\/aclanthology.org\/W17-0222","title":"Can We Create a Tool for General Domain Event Analysis?","abstract":"This study outlines a question about the possibility of creation of a tool for general domain event analysis. We provide reasons for assuming that a TimeML-based event modelling could be a suitable basis for general domain event modelling. We revise and summarise Estonian efforts on TimeML analysis, both at automatic analysis and human analysis, and provide an overview of the current challenges\/limitations of applying a TimeML model in an extensive corpus annotation. We conclude with a discussion on reducing complexity of the (TimeML-based) event model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Estonian Ministry of Education and Research (grant IUT 20-56 \"Computational models for Estonian\").","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"glavas-snajder-2014-constructing","url":"https:\/\/aclanthology.org\/W14-3705","title":"Constructing Coherent Event Hierarchies from News Stories","abstract":"News describe real-world events of varying granularity, and recognition of internal structure of events is important for automated reasoning over events. We propose an approach for constructing coherent event hierarchies from news by enforcing document-level coherence over pairwise decisions of spatiotemporal containment. Evaluation on a news corpus annotated with event hierarchies shows that enforcing global spatiotemporal coreference of events leads to significant improvements (7.6% F 1-score) in the accuracy of pairwise decisions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"levin-etal-2000-lessons","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/215.pdf","title":"Lessons Learned from a Task-based Evaluation of Speech-to-Speech Machine Translation","abstract":"For several years we have been conducting Accuracy Based Evaluations (ABE) of the JANUS speech-to-speech MT system (Gates et al., 1997) which measure quality and fidelity of translation. Recently we have begun to design a Task Based Evaluation for JANUS (Thomas, 1999) which measures goal completion. This paper describes what we have learned by comparing the two types of evaluation. 
Both evaluations (ABE and TBE) were conducted on a common set of user studies in the semantic domain of travel planning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Alexandra Slavkovic for running the user studies and Kavita Thomas for her preliminary work on the design of the TBE.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wilks-etal-2004-human","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/42.pdf","title":"Human Dialogue Modelling Using Annotated Corpora","abstract":"We describe two major dialogue system segments: first we describe a Dialogue Manager which uses a representation of stereotypical dialogue patterns that we call Dialogue Action Frames and which, we believe, generate strong and novel constraints on later access to incomplete dialogue topics. Secondly, an analysis module that learns to assign dialogue acts from corpora, but on the basis of limited quantities of data, and up to what seems to be some kind of limit on this task, a fact we also discuss.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements. This paper is based on work supported in part by the European Commission under the 5th Framework IST\/HLT Program (consortia AMITIES and COMIC) and by the U.S. Defense Advanced Research Projects Agency.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"magnini-etal-2020-comparing","url":"https:\/\/aclanthology.org\/2020.lrec-1.259","title":"Comparing Machine Learning and Deep Learning Approaches on NLP Tasks for the Italian Language","abstract":"We present a comparison between deep learning and traditional machine learning methods for various NLP tasks in Italian. We carried on experiments using available datasets (e.g., from the Evalita shared tasks) on two sequence tagging tasks (i.e., named entity recognition and nominal entity recognition) and four classification tasks (i.e., lexical relations among words, semantic relations among sentences, sentiment analysis and text classification). We show that deep learning approaches outperform traditional machine learning algorithms in sequence tagging, while for classification tasks that heavily rely on semantics approaches based on feature engineering are still competitive. We think that a similar analysis could be carried out for other languages to provide an assessment of machine learning \/ deep learning models across different languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"labutov-etal-2018-multi","url":"https:\/\/aclanthology.org\/P18-1077","title":"Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds","abstract":"Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions that are asked over these sources, and the methods developed to answer them. 
In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases, and unstructured QA over narrative, introducing the task of multirelational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TEXTWORLDSQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants at this task, and (iii) we release a lightweight Python-based framework we call TEXTWORLDS for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"falk-martin-2017-towards-inferential","url":"https:\/\/aclanthology.org\/W17-6807","title":"Towards an Inferential Lexicon of Event Selecting Predicates for French","abstract":"We present a manually constructed seed lexicon encoding the inferential profiles of French event selecting predicates across different uses. The inferential profile (Karttunen, 1971a) of a verb is designed to capture the inferences triggered by the use of this verb in context. It reflects the influence of the clause-embedding verb on the factuality of the event described by the embedded clause. The resource developed provides evidence for the following three hypotheses: (i) French implicative verbs have an aspect dependent profile (their inferential profile varies with outer aspect), while factive verbs have an aspect independent profile (they keep the same inferential profile with both imperfective and perfective aspect); (ii) implicativity decreases with imperfective aspect: the inferences triggered by French implicative verbs combined with perfective aspect are often weakened when the same verbs are combined with imperfective aspect; (iii) implicativity decreases with an animate (deep) subject: the inferences triggered by a verb which is implicative with an inanimate subject are weakened when the same verb is used with an animate subject. The resource additionally shows that verbs with different inferential profiles display clearly distinct sub-categorisation patterns. In particular, verbs that have both factive and implicative readings are shown to prefer infinitival clauses in their implicative reading, and tensed clauses in their factive reading.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous IWCS reviewers for their detailed and constructive feedback, criticisms and suggestions. 
This work is part of the project B5 of the Collaborative Research Centre 732 hosted by the University of Stuttgart and financed by the Deutsche Forschungsgemeinschaft.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nilsson-nugues-2010-automatic","url":"https:\/\/aclanthology.org\/C10-1093","title":"Automatic Discovery of Feature Sets for Dependency Parsing","abstract":"This paper describes a search procedure to discover optimal feature sets for dependency parsers. The search applies to the shift-reduce algorithm and the feature sets are extracted from the parser configuration. The initial feature is limited to the first word in the input queue. Then, the procedure uses a set of rules founded on the assumption that topological neighbors of significant features in the dependency graph may also have a significant contribution. The search can be fully automated and the level of greediness adjusted with the number of features examined at each iteration of the discovery procedure. Using our automated feature discovery on two corpora, the Swedish corpus in CoNLL-X and the English corpus in CoNLL 2008, and a single parser system, we could reach results comparable or better than the best scores reported in these evaluations. The CoNLL 2008 test set contains, in addition to a Wall Street Journal (WSJ) section, an out-of-domain sample from the Brown corpus. With sets of 15 features, we obtained a labeled attachment score of 84.21 for Swedish, 88.11 on the WSJ test set, and 81.33 on the Brown test set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research leading to these results has received funding from the European community's seventh framework program FP7\/2007-2013, challenge 2, cognitive systems, interaction, robotics, under grant agreement No 230902-ROSETTA.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"villavicencio-1999-representing","url":"https:\/\/aclanthology.org\/E99-1039","title":"Representing a System of Lexical Types Using Default Unification","abstract":"Default inheritance is a useful tool for encoding linguistic generalisations that have exceptions. In this paper we show how the use of an order independent typed default unification operation can provide non-redundant highly structured and concise representation to specify a network of lexical types, that encodes linguistic information about verbal subcategorisation. The system of lexical types is based on the one proposed by Pollard and Sag (1987), but uses the more expressive typed default feature structures, is more succinct, and able to express linguistic sub-regularities more elegantly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Ted Briscoe, Ann Copestake and Fabio Nemetz for their comments and advice on this paper. Thanks also to the anonymous reviewers for their comments. 
The research reported in this paper is supported by a doctoral studentship from CAPES\/Brazil.","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2021-multi-lingual","url":"https:\/\/aclanthology.org\/2021.findings-acl.199","title":"Multi-Lingual Question Generation with Language Agnostic Language Model","abstract":"Question generation is the task of generating a coherent and relevant question given a context paragraph. Recently, with the development of large-scale question answering datasets such as SQuAD, English question generation has been rapidly developed. However, for other languages such as Chinese, the available training data is limited, which hinders the development of question generation in the corresponding language. To investigate multilingual question generation, in this paper, we develop a language-agnostic language model, which learns the shared representation from several languages in a single architecture. We propose an adversarial training objective to encourage the model to learn both language-specific and language-independent information. We utilize abundant monolingual text to improve the multilingual question generation via pre-training. With the language-agnostic language model, we achieve significant improvement in multilingual question generation over five languages. In addition, we propose a large-scale Chinese question generation dataset containing more than 220k human-generated questions to benefit multilingual question generation research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments. We also appreciate the dedicated labeling efforts contributed by the annotators, which make the large-scale Chinese QG datasets available for the community.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"brandschain-etal-2008-speaker","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/902_paper.pdf","title":"Speaker Recognition: Building the Mixer 4 and 5 Corpora","abstract":"The original Mixer corpus was designed to satisfy developing commercial and forensic needs. The resulting Mixer corpora, Phases 1 through 5, have evolved to support an increasing variety of research tasks, including multilingual and cross-channel recognition. The Mixer Phases 4 and 5 corpora feature a wider variety of channels and greater variation in the situations under which the speech is recorded. This paper focuses on the plans, progress and results of Mixer 4 and 5.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kobayashi-etal-2011-topic","url":"https:\/\/aclanthology.org\/W11-3905","title":"Topic Models with Logical Constraints on Words","abstract":"This paper describes a simple method to achieve logical constraints on words for topic models based on a recently developed topic modeling framework with Dirichlet forest priors (LDA-DF).
Logical constraints mean logical expressions of pairwise constraints, Must-links and Cannot-Links, used in the literature of constrained clustering. Our method can not only cover the original constraints of the existing work, but also allow us easily to add new customized constraints. We discuss the validity of our method by defining its asymptotic behaviors. We verify the effectiveness of our method with comparative studies on a synthetic corpus and interactive topic analysis on a real corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"st-arnaud-etal-2017-identifying","url":"https:\/\/aclanthology.org\/D17-1267","title":"Identifying Cognate Sets Across Dictionaries of Related Languages","abstract":"We present a system for identifying cognate sets across dictionaries of related languages. The likelihood of a cognate relationship is calculated on the basis of a rich set of features that capture both phonetic and semantic similarity, as well as the presence of regular sound correspondences. The similarity scores are used to cluster words from different languages that may originate from a common protoword. When tested on the Algonquian language family, our system detects 63% of cognate sets while maintaining cluster purity of 70%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the following students that contributed to our cognate-related projects over the last few years (in alphabetical order): Matthew Darling, Jacob Denson, Philip Dilts, Bradley Hauer, Mildred Lau, Tyler Lazar, Garrett Nicolai, Dylan Stankievech, Nicholas Tam, and Cindy Xiao. This research was partially funded by the Natural Sciences and Engineering Research Council of Canada.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sterckx-etal-2016-supervised","url":"https:\/\/aclanthology.org\/D16-1198","title":"Supervised Keyphrase Extraction as Positive Unlabeled Learning","abstract":"The problem of noisy and unbalanced training data for supervised keyphrase extraction results from the subjectivity of keyphrase assignment, which we quantify by crowdsourcing keyphrases for news and fashion magazine articles with many annotators per document. We show that annotators exhibit substantial disagreement, meaning that single annotator data could lead to very different training sets for supervised keyphrase extractors. Thus, annotations from single authors or readers lead to noisy training data and poor extraction performance of the resulting supervised extractor. We provide a simple but effective solution to still work with such data by reweighting the importance of unlabeled candidate phrases in a two stage Positive Unlabeled Learning setting. We show that performance of trained keyphrase extractors approximates a classifier trained on articles labeled by multiple annotators, leading to higher average F 1 scores and better rankings of keyphrases. 
We apply this strategy to a variety of test collections from different backgrounds and show improvements over strong baseline models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for their helpful comments. The research presented in this article relates to STEAMER (http:\/\/www.iminds.be\/en\/projects\/2014\/07\/12\/steamer), a MiX-ICON project facilitated by iMinds Media and funded by IWT and Innoviris.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"blanco-moldovan-2011-semantic","url":"https:\/\/aclanthology.org\/P11-1059","title":"Semantic Representation of Negation Using Focus Detection","abstract":"Negation is present in all human languages and it is used to reverse the polarity of part of statements that are otherwise affirmative by default. A negated statement often carries positive implicit meaning, but to pinpoint the positive part from the negative part is rather difficult. This paper aims at thoroughly representing the semantics of negation by revealing implicit positive meaning. The proposed representation relies on focus of negation detection. For this, new annotation over PropBank and a learning algorithm are proposed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"trivedi-etal-2018-iit","url":"https:\/\/aclanthology.org\/W18-3220","title":"IIT (BHU) Submission for the ACL Shared Task on Named Entity Recognition on Code-switched Data","abstract":"This paper describes the best performing system for the shared task on Named Entity Recognition (NER) on code-switched data for the language pair Spanish-English (ENG-SPA). We introduce a gated neural architecture for the NER task. Our final model achieves an F1 score of 63.76%, outperforming the baseline by 10%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fitzpatrick-sager-1974-lexical","url":"https:\/\/aclanthology.org\/J74-1002","title":"The Lexical Subclasses of the Linguistic String Parser","abstract":"The NYU Linguistic String Parser (LSP) is a working system for the syntactic analysis of English scientific texts. It consists of a parsing program, a large-coverage English grammar, and a lexicon. The grammar's effectiveness in parsing texts is due in large part to a substantial body of detailed well-formedness restrictions which eliminate most incorrect syntactic parses which would be allowed by a weaker grammar. The restrictions mainly test for compatible combinations of word subclasses. This paper defines the 109 adjective, noun and verb subclasses. These subclasses, as well as others not presented here, are defined in such a way that they can be used as a guide for classifying new entries to the LSP lexicon and as a linguistic reference tool. Each definition includes a statement of the intent of the subclass, a diagnostic frame, sentence examples and a word list drawn from the present dictionary. The subclasses are defined to reflect precisely the grammatical properties tested for by the restrictions of the grammar. Where necessary for clarifying the intent of the subclass, three additional criteria are employed: excision, implicit coreference, and paraphrase. The subclasses have been defined so as to be consistent with a subsequent stage of transformational analysis which is currently being implemented. An illustration of the treatment of a subclass is: AASP: an adjective is in AASP if it occurs only with the non-sentential non-SN right adjunct to V OBJ (SN an embedded, or contained, sentence) (DSNG, 7): John is able to walk. John is able for Bill to walk. *John is able that Bill walks. *John is able whether Bill walks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1974,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fudholi-suominen-2018-importance","url":"https:\/\/aclanthology.org\/W18-3711","title":"The Importance of Recommender and Feedback Features in a Pronunciation Learning Aid","abstract":"Verbal communication, and pronunciation as its part, is a core skill that can be developed through guided learning. An artificial intelligence system can take a role in these guided learning approaches as an enabler of an application for pronunciation learning with a recommender system to guide language learners through exercises and a feedback system to correct their pronunciation. In this paper, we report on a user study on language learners' perceived usefulness of the application. 16 international students who spoke non-native English and lived in Australia participated. 13 of them said they need to improve their pronunciation skills in English because of their foreign accent. The feedback system with features for pronunciation scoring, speech replay, and giving a pronunciation example was deemed essential by most of the respondents. In contrast, a clear dichotomy between the recommender system perceived as useful or useless existed; the system had features to prompt new common words or old poorly-scored words. These results can be used to target research and development from information retrieval and reinforcement learning for better and better recommendations to speech recognition and speech analytics for accent acquisition.","label_nlp4sg":1,"task":["Pronunciation Learning Aid"],"method":["Recommender and Feedback Features"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"LPDP is an Indonesian state agency that manages scholarships funded by the Indonesia Endowment Fund for Education. LPDP scholarships are for postgraduate level and open to any Indonesian resident including fresh graduates. We acknowledge Greg Cassagne for his contribution.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mohankumar-khapra-2022-active","url":"https:\/\/aclanthology.org\/2022.acl-long.600","title":"Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons","abstract":"Recent studies have shown the advantages of evaluating NLG systems using pairwise comparisons as opposed to direct assessment.
Given k systems, a naive approach for identifying the top-ranked system would be to uniformly obtain pairwise comparisons from all k(k-1)\/2 pairs of systems. However, this can be very expensive as the number of human annotations required would grow quadratically with k. In this work, we introduce Active Evaluation, a framework to efficiently identify the top-ranked system by actively choosing system pairs for comparison using dueling bandit algorithms. We perform extensive experiments with 13 dueling bandit algorithms on 13 NLG evaluation datasets spanning 5 tasks and show that the number of human annotations can be reduced by 80%. To further reduce the number of human annotations, we propose model-based dueling bandit algorithms which combine automatic evaluation metrics with human evaluations. Specifically, we eliminate sub-optimal systems even before the human annotation process and perform human evaluations only on test examples where the automatic metric is highly uncertain. This reduces the number of human annotations required further by 89%. In effect, we show that identifying the top-ranked system requires only a few hundred human annotations, which grow linearly with k. Lastly, we provide practical recommendations and best practices to identify the top-ranked system efficiently. Our code has been made publicly available at https:\/\/github.com\/akashkm99\/duelnlg","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the Department of Computer Science and Engineering, IIT Madras, and the Robert Bosch Center for Data Science and Artificial Intelligence, IIT Madras (RBC-DSAI), for providing us resources required to carry out this research. We also wish to thank Google for providing access to TPUs through the TFRC program. We thank the anonymous reviewers for their constructive feedback in enhancing the work.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"petasis-2019-segmentation","url":"https:\/\/aclanthology.org\/W19-4501","title":"Segmentation of Argumentative Texts with Contextualised Word Representations","abstract":"The segmentation of argumentative units is an important subtask of argument mining, which is frequently addressed at a coarse granularity, usually assuming argumentative units to be no smaller than sentences. Approaches focusing on clause-level granularity typically address the task as sequence labeling at the token level, aiming to classify whether a token begins, is inside, or is outside of an argumentative unit. Most approaches exploit highly engineered, manually constructed features, and algorithms typically used in sequential tagging, such as Conditional Random Fields, while more recent approaches try to exploit manually constructed features in the context of deep neural networks. In this context, we examined to what extent recent advances in sequential labelling allow us to reduce the need for highly sophisticated, manually constructed features, and whether limiting features to embeddings pre-trained on large corpora is a promising approach.
Evaluation results suggest the examined models and approaches can exhibit comparable performance, minimising the need for feature engineering.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge support of this work by the project \"APOLLONIS: Greek Infrastructure for Digital Arts, Humanities and Language Research and Innovation\" (MIS 5002738) which is implemented under the Action \"Reinforcement of the Research and Innovation Infrastructure\", funded by the Operational Programme \"Competitiveness, Entrepreneurship and Innovation\" (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marques-beuls-2016-evaluation","url":"https:\/\/aclanthology.org\/C16-1108","title":"Evaluation Strategies for Computational Construction Grammars","abstract":"Despite the growing number of Computational Construction Grammar implementations, the field is still lacking evaluation methods to compare grammar fragments across different platforms. Moreover, the hand-crafted nature of most grammars requires profiling tools to understand the complex interactions between constructions of different types. This paper presents a number of evaluation measures, partially based on existing measures in the field of semantic parsing, that are especially relevant for reversible grammar formalisms. The measures are tested on a grammar fragment for European Portuguese clitic placement that is currently under development.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research presented in this paper has been funded by the European Community's Seventh Framework Programme (FP7\/2007-2013) under grant agreement no. 607062 ESSENCE: Evolution of Shared Semantics in Computational Environments (http:\/\/www.essence-network.com\/).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"han-toner-2017-qub","url":"https:\/\/aclanthology.org\/S17-2063","title":"QUB at SemEval-2017 Task 6: Cascaded Imbalanced Classification for Humor Analysis in Twitter","abstract":"This paper presents our submission to SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor. There are two subtasks: A. Pairwise Comparison, and B. Semi-Ranking. Our assumption is that the distribution of humorous and non-humorous texts in real life language is naturally imbalanced. Using Na\u00efve Bayes Multinomial with standard text-representation features, we approached Subtask B as a sequence of imbalanced classification problems, and optimized our system per the macro-average recall. Subtask A was then solved via the Semi-Ranking results. 
On the final test, our system was ranked 10th for Subtask A and 3rd for Subtask B.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is sponsored by the Leverhulme Trust project under grant number RPG-2015-089.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barbu-poesio-2009-unsupervised","url":"https:\/\/aclanthology.org\/R09-1006","title":"Unsupervised Knowledge Extraction for Taxonomies of Concepts from Wikipedia","abstract":"A novel method for unsupervised acquisition of knowledge for taxonomies of concepts from raw Wikipedia text is presented. We assume that the concepts classified under the same node in a taxonomy are described in a comparable way in Wikipedia. The concepts in 6 taxonomies extracted from WordNet are mapped onto Wikipedia pages and the lexico-syntactic patterns describing semantic structures expressing relevant knowledge for the concepts are automatically learnt.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Verginica Barbu Mititelu and Gianluca Lebani for support in the data collection and data rating. We also want to thank three anonymous reviewers for suggestions and criticism.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"gupta-etal-2018-transliteration","url":"https:\/\/aclanthology.org\/W18-3205","title":"Transliteration Better than Translation? Answering Code-mixed Questions over a Knowledge Base","abstract":"Humans can learn multiple languages. If they know a fact in one language, they can answer a question in another language they understand. They can also answer Code-mix (CM) questions: questions which contain both languages. This ability is attributed to the unique learning ability of humans. Our task aims to study if machines can achieve this. We demonstrate how effectively a machine can answer CM questions. In this work, we adopt a two-step approach: candidate generation and candidate re-ranking to answer questions. We propose a Triplet-Siamese-Hybrid CNN (TSHCNN) to re-rank candidate answers. We show experiments on the SimpleQuestions dataset. Our network is trained only on English questions provided in this dataset and noisy Hindi translations of these questions and can answer English-Hindi CM questions effectively without the need for translation into English. Back-transliterated CM questions outperform their lexical and sentence level translated counterparts by 5% & 35% respectively, highlighting the efficacy of our approach in a resource-constrained setting.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-matsumoto-2004-trajectory","url":"https:\/\/aclanthology.org\/C04-1130","title":"Trajectory Based Word Sense Disambiguation","abstract":"Classifier combination is a promising way to improve the performance of word sense disambiguation. We propose a new combinational method in this paper.
We first construct a series of Na\u00efve Bayesian classifiers along a sequence of context windows of orderly varying sizes, and perform sense selection for both training samples and test samples using these classifiers. We thus get a sense selection trajectory along the sequence of context windows for each sample. Then we make use of these trajectories to make a final k-nearest-neighbors-based sense selection for test samples. This method aims to lower the uncertainty brought by classifiers using different context windows and make more robust utilization of context while performing well. Experiments show that our approach outperforms some other algorithms in both robustness and performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shirai-ookawa-2006-compiling","url":"https:\/\/aclanthology.org\/P06-2099","title":"Compiling a Lexicon of Cooking Actions for Animation Generation","abstract":"This paper describes a system which generates animations for cooking actions in recipes, to help people understand recipes written in Japanese. The major goal of this research is to increase the scalability of the system, i.e., to develop a system which can handle various kinds of cooking actions. We designed and compiled the lexicon of cooking actions required for the animation generation system. The lexicon includes the action plan used for animation generation, and the information about ingredients upon which the cooking action is taken. Preliminary evaluation shows that our lexicon contains most of the cooking actions that appear in Japanese recipes. We also discuss how to handle linguistic expressions in recipes, which are not included in the lexicon, in order to generate animations for them.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"k-lalitha-devi-2014-automatic","url":"https:\/\/aclanthology.org\/W14-2805","title":"Automatic Conversion of Dialectal Tamil Text to Standard Written Tamil Text using FSTs","abstract":"We present an efficient method to automatically transform spoken language text to standard written language text for various dialects of Tamil. Our work is novel in that it explicitly addresses the problem and need for processing dialectal and spoken language Tamil. Written language equivalents for dialectal and spoken language forms are obtained using Finite State Transducers (FSTs) where spoken language suffixes are replaced with appropriate written language suffixes. Agglutination and compounding in the resultant text is handled using a Conditional Random Fields (CRFs) based word boundary identifier. The essential Sandhi corrections are carried out using a heuristic Sandhi Corrector which normalizes the segmented words to simpler sensible words. During experimental evaluations, the dialectal spoken to written transformer (DSWT) achieved an encouraging accuracy of over 85% in the transformation task and also improved the translation quality of a Tamil-English machine translation system by 40%. It must be noted that there is no published computational work on processing Tamil dialects.
Ours is the first attempt to study various dialects of Tamil from a computational point of view. Thus, the nature of the work reported here is pioneering.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hathout-tanguy-2002-webaffix","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/187.pdf","title":"Webaffix: Discovering Morphological Links on the WWW","abstract":"This paper presents a new language-independent method for finding morphological links between newly appeared words (i.e. absent from reference word lists). Using the WWW as a corpus, the Webaffix tool detects the occurrences of new derived lexemes based on a given suffix, proposes a base lexeme following a standard scheme (such as noun-verb), and then performs a compatibility test on the word pairs produced, using the Web again, but as a source of cooccurrences. The resulting pairs of words are used to build generic morphological databases useful for a number of NLP tasks. We develop and comment on an example use of Webaffix to find new noun\/verb pairs in French.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dale-haddock-1991-generating","url":"https:\/\/aclanthology.org\/E91-1028","title":"Generating Referring Expressions Involving Relations","abstract":"In this paper, we review Dale's [1989] algorithm for determining the content of a referring expression. The algorithm, which only permits the use of one-place predicates, is revised and extended to deal with n-ary predicates. We investigate the problem of blocking 'recursion' in complex noun phrases and propose a solution in the context of our algorithm.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported here was prompted by a conversation with Breck Baldwin. Both authors would like to thank colleagues at each of their institutions for numerous comments that have improved this paper.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yin-etal-2016-abcnn","url":"https:\/\/aclanthology.org\/Q16-1019","title":"ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs","abstract":"How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs.
(ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https:\/\/github.com\/yinwenpeng\/Answer_Selection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of Deutsche Forschungsgemeinschaft (DFG): grant SCHU 2246\/8-2. We would like to thank the anonymous reviewers for their helpful comments.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yasaswini-etal-2021-iiitt","url":"https:\/\/aclanthology.org\/2021.dravidianlangtech-1.25","title":"IIITT@DravidianLangTech-EACL2021: Transfer Learning for Offensive Language Detection in Dravidian Languages","abstract":"This paper demonstrates our work for the shared task on Offensive Language Identification in Dravidian Languages-EACL 2021. Offensive language detection on various social media platforms has been studied previously. However, with the increase in the diversity of users, there is a need to identify offensive language in multilingual posts which are largely code-mixed or written in a non-native script. We approach this challenge with various transfer learning-based models to classify a given post or comment in Dravidian languages (Malayalam, Tamil and Kannada) into 6 categories. The source code for our systems is published.","label_nlp4sg":1,"task":["Offensive Language Detection"],"method":["Transfer Learning"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"morgado-da-costa-etal-2016-syntactic","url":"https:\/\/aclanthology.org\/W16-4914","title":"Syntactic Well-Formedness Diagnosis and Error-Based Coaching in Computer Assisted Language Learning using Machine Translation","abstract":"We present a novel approach to Computer Assisted Language Learning (CALL), using deep syntactic parsers and semantic based machine translation (MT) in diagnosing and providing explicit feedback on language learners' errors. We are currently developing a proof of concept system showing how semantic-based machine translation can, in conjunction with robust computational grammars, be used to interact with students, better understand their language errors, and help students correct their grammar through a series of useful feedback messages and guided language drills. Ultimately, we aim to prove the viability of a new integrated rule-based MT approach to disambiguate students' intended meaning in a CALL system. This is a necessary step to provide accurate coaching on how to correct ungrammatical input, and it will allow us to overcome a current bottleneck in the field: an exponential burst of ambiguity caused by ambiguous lexical items (Flickinger, 2010).
From the users' interaction with the system, we will also produce a richly annotated Learner Corpus, annotated automatically with both syntactic and semantic information.","label_nlp4sg":1,"task":["Computer Assisted Language Learning"],"method":["deep syntactic parsers","semantic based machine translation"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"levinboim-etal-2021-quality","url":"https:\/\/aclanthology.org\/2021.naacl-main.253","title":"Quality Estimation for Image Captions Based on Large-scale Human Evaluations","abstract":"Automatic image captioning has improved significantly over the last few years, but the problem is far from being solved, with state-of-the-art models still often producing low-quality captions when used in the wild. In this paper, we focus on the task of Quality Estimation (QE) for image captions, which attempts to model the caption quality from a human perspective and without access to ground-truth references, so that it can be applied at prediction time to detect low-quality captions produced on previously unseen images. For this task, we develop a human evaluation process that collects coarse-grained caption annotations from crowdsourced users, which is then used to collect a large-scale dataset spanning more than 600k caption quality ratings. We then carefully validate the quality of the collected ratings and establish baseline models for this new QE task. Finally, we further collect fine-grained caption quality annotations from trained raters, and use them to demonstrate that QE models trained over the coarse ratings can effectively detect and filter out low-quality image captions, thereby improving the user experience from captioning systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yao-etal-2021-connect","url":"https:\/\/aclanthology.org\/2021.emnlp-main.610","title":"Connect-the-Dots: Bridging Semantics between Words and Definitions via Aligning Word Sense Inventories","abstract":"Word Sense Disambiguation (WSD) aims to automatically identify the exact meaning of one word according to its context. Existing supervised models struggle to make correct predictions on rare word senses due to limited training data and can only select the best definition sentence from one predefined word sense inventory (e.g., WordNet). To address the data sparsity problem and generalize the model to be independent of one predefined inventory, we propose a gloss alignment algorithm that can align definition sentences (glosses) with the same meaning from different sense inventories to collect rich lexical knowledge. We then train a model to identify semantic equivalence between a target word in context and one of its glosses using these aligned inventories, which exhibits strong transfer capability to many WSD tasks. Experiments on benchmark datasets show that the proposed method improves predictions on both frequent and rare word senses, outperforming prior work by 1.2% on the All-Words WSD Task and 4.3% on the Low-Shot WSD Task.
Evaluation on the WiC Task also indicates that our method can better capture word meanings in context.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rudzewitz-etal-2018-generating","url":"https:\/\/aclanthology.org\/W18-0513","title":"Generating Feedback for English Foreign Language Exercises","abstract":"While immediate feedback on learner language is often discussed in the Second Language Acquisition literature (e.g., Mackey 2006), few systems used in real-life educational settings provide helpful, metalinguistic feedback to learners. In this paper, we present a novel approach leveraging task information to generate the expected range of well-formed and ill-formed variability in learner answers along with the required diagnosis and feedback. We combine this offline generation approach with an online component that matches the actual student answers against the pre-computed hypotheses. The results obtained for a set of 33 thousand answers of 7th grade German high school students learning English show that the approach successfully covers frequent answer patterns. At the same time, paraphrases and meaning errors require a more flexible alignment approach, for which we are planning to complement the method with the CoMiC approach successfully used for the analysis of reading comprehension answers (Meurers et al., 2011).","label_nlp4sg":1,"task":["Generating Feedback"],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We are grateful to our research assistants Madeesh Kannan and Tobias P\u00fctz for their contributions to the implementation of the feedback architecture. We would also like to thank the three anonymous reviewers for their detailed and helpful comments. This work has been funded through a transfer project grant by the Deutsche Forschungsgemeinschaft in connection with the SFB 833.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"johansen-socher-2017-learning","url":"https:\/\/aclanthology.org\/W17-2631","title":"Learning when to skim and when to read","abstract":"Many recent advances in deep learning for natural language processing have come at increasing computational cost, but the power of these state-of-the-art models is not needed for every example in a dataset. We demonstrate two approaches to reducing unnecessary computation in cases where a fast but weak baseline classifier and a stronger, slower model are both available. Applying an AUC-based metric to the task of sentiment classification, we find significant efficiency gains with both a probability-threshold method for reducing computational cost and one that uses a secondary decision network.
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"minkov-cohen-2008-learning","url":"https:\/\/aclanthology.org\/D08-1095","title":"Learning Graph Walk Based Similarity Measures for Parsed Text","abstract":"We consider a parsed text corpus as an instance of a labelled directed graph, where nodes represent words and weighted directed edges represent the syntactic relations between them. We show that graph walks, combined with existing techniques of supervised learning, can be used to derive a task-specific word similarity measure in this graph. We also propose a new path-constrained graph walk method, in which the graph walk process is guided by high-level knowledge about meaningful edge sequences (paths). Empirical evaluation on the task of named entity coordinate term extraction shows that this framework is preferable to vector-based models for smallsized corpora. It is also shown that the pathconstrained graph walk algorithm yields both performance and scalability gains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank the anonymous reviewers and Hanghang Tong for useful advice. This material is based upon work supported by Yahoo! Research.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koh-etal-2001-test","url":"https:\/\/aclanthology.org\/2001.mtsummit-papers.35","title":"A test suite for evaluation of English-to-Korean machine translation systems","abstract":"This paper describes KORTERM's test suite and their practicability. The test-sets have been being constructed on the basis of finegrained classification of linguistic phenomena to evaluate the technical status of English-to-Korean MT systems systematically. They consist of about 5000 test-sets and are growing. Each test-set contains an English sentence, a model Korean translation, a linguistic phenomenon category, and a yes\/no question about the linguistic phenomenon. Two commercial systems were evaluated with a yes\/no test of prepared questions. Total accuracy rates of the two systems were different (50% vs. 66%). In addition, a comprehension test was carried out. We found that one system was more comprehensible than the other system. These results seem to show that our test suite is practicable.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been being carried out by the technology service fund of Ministry of Science & Technology in Korea for 1999-2001 under the title \"Large-scale Speech\/Language\/Image Database and Evaluation\". We would like to appreciate Dr. 
Hyo-Sik Shin and Jong-Hoon Oh's comments.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jorgensen-1997-recognition","url":"https:\/\/aclanthology.org\/W97-1416","title":"Recognition of referring expressions","abstract":"Computational models of referring such as those of Kronfeld (1990) and Heeman & Hirst (1995), being based on the view of language as goal-directed behaviour, assume that the act of referring includes making the hearer recognize the speaker's communicative goal. My present work centres on the question of how we recognize an expression as a referring expression, i.e. how do we know that the use of a noun phrase is intended to indicate a particular object? Working in a Kronfeldian framework, I assume that referring consists of a \"literal goal\" (making the hearer recognize the np as a referring expression) and a \"discourse purpose\" (making the hearer recognize and apply the right \"identification constraints\", such as the requirement that the referent should be identified perceptually, or should be identified with an entity introduced at a previous stage in the discourse). I propose that in some cases the recognition of the np as a referring expression depends on the recognition of the identification constraints. I work primarily with referring in the literary mode, looking for computationally manageable triggers such as lexical anchorings and information derivable from knowledge of genre, but I am also concerned with the questions of whether a unified approach to referring is possible and the extent to which text-based models can be generalized to a multimodal context.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"burukina-2014-translating","url":"https:\/\/aclanthology.org\/2014.tc-1.25","title":"Translating implicit elements in RBMT","abstract":"The present paper addresses MT of asymmetrical linguistic markers, in particular zero possessives. English <-> Russian MT was chosen as an example; however, the obtained results can be applied to other language pairs (English-German \/ Spanish\/Norwegian etc.). Overt pronouns are required to mark possessive relations in English. On the contrary, in Russian implicit possessives are regularly used, thus making it very important to analyze them properly, not only for MT but also for other NLP tasks such as NER, Fact extraction, etc. However, for modern NLP systems the task remains practically unsolved. The paper examines how modern English <-> Russian MT systems process implicit possessives and explores the main problems that exist concerning the issue. As no SB approach can process IP constructions properly, linguistic rules need to be developed for their analysis and synthesis; the main properties of IPs are analyzed to that end. Finally, several rules to apply to RB or model-based MT are introduced that help to increase translation accuracy. The present research is based on ABBYY Compreno \u00a9 multilanguage NLP technologies that include an MT module. 1. Marco calent\u00f3 el agua del t\u00e9. Ahora tiene miedo de quemarse. Marco warmed water for tea.
Now he is afraid to burn himself.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are particularly grateful to Vladimir Selegey (ABBYY) and Alexey Leontyev (ABBYY) for their help and support. Special thanks go to Prof. Yakov Testelets (RSUH) for his help and advice on the theoretical part of this research.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mitrovic-etal-2019-nlpup","url":"https:\/\/aclanthology.org\/S19-2127","title":"nlpUP at SemEval-2019 Task 6: A Deep Neural Language Model for Offensive Language Detection","abstract":"This paper presents our submission for the SemEval shared task 6, sub-task A on the identification of offensive language. Our proposed model, C-BiGRU, combines a Convolutional Neural Network (CNN) with a bidirectional Recurrent Neural Network (RNN). We utilize word2vec to capture the semantic similarities between words. This composition allows us to extract long-term dependencies in tweets and distinguish between offensive and non-offensive tweets. In addition, we evaluate our approach on a different dataset and show that our model is capable of detecting online aggressiveness in both English and German tweets. Our model achieved a macro F1-score of 79.40% on the SemEval dataset.","label_nlp4sg":1,"task":["Offensive Language Detection"],"method":["Convolutional Neural Network","bidirectional Recurrent Neural Network","word2vec"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"caballero-etal-2002-multidialectal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/278.pdf","title":"Multidialectal Spanish Modeling for ASR","abstract":"EQUATION\nEQUATION","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chiril-etal-2020-said","url":"https:\/\/aclanthology.org\/2020.acl-main.373","title":"He said ``who's gonna take care of your children when you are at ACL?'': Reported Sexist Acts are Not Sexist","abstract":"In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify if a message with sexist content is really sexist or is a story of sexism experienced by a woman. We propose: (1) a new characterization of sexist content inspired by speech acts theory and discourse analysis studies, (2) the first French dataset annotated for sexism detection, and (3) a set of deep learning experiments trained on top of a combination of several vectorial representations of tweets (word embeddings, linguistic features, and various generalization strategies).
Our results are encouraging and constitute a first step towards offensive content moderation.","label_nlp4sg":1,"task":["sexism detection","offensive content mediation"],"method":["dataset","deep learning","speech acts theory"],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"This work is funded by the Institut Carnot Cognition under the project SESAME.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xiong-etal-2007-dependency","url":"https:\/\/aclanthology.org\/W07-0706","title":"A Dependency Treelet String Correspondence Model for Statistical Machine Translation","abstract":"This paper describes a novel model using dependency structures on the source side for syntax-based statistical machine translation: Dependency Treelet String Correspondence Model (DTSC). The DTSC model maps source dependency structures to target strings. In this model translation pairs of source treelets and target strings with their word alignments are learned automatically from the parsed and aligned corpus. The DTSC model allows source treelets and target strings with variables so that the model can generalize to handle dependency structures with the same head word but with different modifiers and arguments. Additionally, target strings can also be discontinuous by using gaps which correspond to the uncovered nodes which are not included in the source treelets. A chart-style decoding algorithm with two basic operations, substituting and attaching, is designed for the DTSC model. We argue that the DTSC model proposed here is capable of lexicalization, generalization, and handling discontinuous phrases which are very desirable for machine translation. We finally evaluate our current implementation of a simplified version of DTSC for statistical machine translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China, Contract No. 60603095 and 60573188.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-etal-2021-aspect","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.390","title":"Aspect-based Sentiment Analysis in Question Answering Forums","abstract":"Aspect-based sentiment analysis (ABSA) typically focuses on extracting aspects and predicting their sentiments on individual sentences such as customer reviews. Recently, another kind of opinion sharing platform, namely question answering (QA) forum, has received increasing popularity, which accumulates a large number of user opinions towards various aspects. This motivates us to investigate the task of ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair. Unlike review sentences, a QA pair is composed of two parallel sentences, which requires interaction modeling to align the aspect mentioned in the question and the associated opinion clues in the answer. To this end, we propose a model with a specific design of cross-sentence aspect-opinion interaction modeling to address this task.
The proposed method is evaluated on three real-world datasets and the results show that our model outperforms several strong baselines adopted from related state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dong-etal-2021-parasci","url":"https:\/\/aclanthology.org\/2021.eacl-main.33","title":"ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation","abstract":"We propose ParaSCI, the first large-scale paraphrase dataset in the scientific field, including 33,981 paraphrase pairs from ACL (ParaSCI-ACL) and 316,063 pairs from arXiv (ParaSCI-arXiv). Digging into characteristics and common patterns of scientific papers, we construct this dataset through intra-paper and inter-paper methods, such as collecting citations to the same paper or aggregating definitions by scientific terms. To take advantage of sentences paraphrased partially, we propose PDBERT as a general paraphrase discovery method. The major advantages of paraphrases in ParaSCI lie in the prominent length and textual diversity, which is complementary to existing paraphrase datasets. ParaSCI obtains satisfactory results on human evaluation and downstream tasks, especially long paraphrase generation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China (61772036), Beijing Academy of Artificial Intelligence (BAAI) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-2019-measuring","url":"https:\/\/aclanthology.org\/P19-2004","title":"Measuring the Value of Linguistics: A Case Study from St. Lawrence Island Yupik","abstract":"The adaptation of neural approaches to NLP is a landmark achievement that has called into question the utility of linguistics in the development of computational systems. This research proposal consequently explores this question in the context of a neural morphological analyzer for a polysynthetic language, St. Lawrence Island Yupik. It asks whether incorporating elements of Yupik linguistics into the implementation of the analyzer can improve performance, both in low-resource settings and in high-resource settings, where rich quantities of data are readily available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Portions of this work were funded by NSF Documenting Endangered Languages Grant #BCS 1761680, and a University of Illinois Graduate College Illinois Distinguished Fellowship.
Special thanks to the Yupik speakers who have shared their language and culture with us.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"barzegar-etal-2018-semr","url":"https:\/\/aclanthology.org\/L18-1618","title":"SemR-11: A Multi-Lingual Gold-Standard for Semantic Similarity and Relatedness for Eleven Languages","abstract":"This work describes SemR-11, a multilingual dataset for evaluating semantic similarity and relatedness for 11 languages (German, French, Russian, Italian, Dutch, Chinese, Portuguese, Swedish, Spanish, Arabic and Persian). Semantic similarity and relatedness gold standards have been initially used to support the evaluation of semantic distance measures in the context of linguistic and knowledge resources and distributional semantic models. SemR-11 builds upon the English gold-standards of Miller & Charles (MC), Rubenstein & Goodenough (RG), WordSimilarity 353 (WS-353), and Simlex-999, providing a canonical translation for them. The final dataset consists of 15,917 word pairs and can be used to support the construction and evaluation of semantic similarity\/relatedness and distributional semantic models. As a case study, the SemR-11 test collection was used to investigate how different distributional semantic models built from corpora in different languages and with different sizes perform in semantic similarity and relatedness tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2015-translation","url":"https:\/\/aclanthology.org\/Y15-1003","title":"Translation of Unseen Bigrams by Analogy Using an SVM Classifier","abstract":"Detecting language divergences and predicting possible sub-translations is one of the most essential issues in machine translation. Owing to the existence of translation divergences, it is impractical to translate straightforwardly from a source sentence into a target sentence while keeping a high degree of accuracy and without additional information. In this paper, we investigate the problem from an emerging and special point of view: bigrams and the corresponding translations. We first profile corpora and explore the constituents of bigrams in the source language. Then we translate unseen bigrams based on proportional analogy and filter the outputs using a Support Vector Machine (SVM) classifier. The experiment results also show that even a small set of features derived from analogy can provide meaningful information in translating by analogy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by the China Scholarship Council (CSC) under CSC Grant No.201406890026. We also thank the anonymous reviewers for their insightful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beloucif-etal-2014-improving","url":"https:\/\/aclanthology.org\/2014.iwslt-evaluation.4","title":"Improving MEANT based semantically tuned SMT","abstract":"We discuss various improvements to our MEANT-tuned system, previously presented at IWSLT 2013.
In our 2014 system, we incorporate this year's improved version of MEANT, improved Chinese word segmentation, Chinese named entity recognition and dedicated proper name translation, and number expression handling. This results in a significant performance jump compared to last year's system. We also ran preliminary experiments on tuning to IMEANT, our new ITG-based variant of MEANT. The performance of tuning to IMEANT is comparable to tuning on MEANT (differences are statistically insignificant). We are presently investigating if tuning on IMEANT can produce even better results, since IMEANT was actually shown to correlate with human adequacy judgment more closely than MEANT. Finally, we ran experiments applying our new architectural improvements to a contrastive system tuned to BLEU. We observed a slightly higher jump in comparison to last year, possibly due to mismatches of MEANT's similarity models to our new entity handling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"merlo-2003-generalised","url":"https:\/\/aclanthology.org\/E03-1079","title":"Generalised PP-attachment Disambiguation Using Corpus-based Linguistic Diagnostics","abstract":"We propose a new formulation of the PP attachment problem as a 4-way classification which takes into account the argument or adjunct status of the PP. Based on linguistic diagnostics, we train a 4-way classifier that reaches an average accuracy of 73.9% (baseline 66.2%). Compared to a sequence of binary classifiers, the 4-way classifier reaches better performance and identifies a verb's arguments more accurately, thus improving the acquisition of a crucial piece of information for many NLP applications.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was made possible by Swiss NSF grant no. 11-65328.01. I would like to thank Eva Esteve Ferrer for her collaboration.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"matsunaga-kohda-1988-linguistic","url":"https:\/\/aclanthology.org\/C88-1082","title":"Linguistic Processing Using a Dependency Structure Grammar for Speech Recognition and Understanding","abstract":"This paper proposes an efficient linguistic processing strategy for speech recognition and understanding using a dependency structure grammar. The strategy includes parsing and phrase prediction algorithms. After speech processing and phrase recognition based on phoneme recognition, the parser extracts the sentence with the best likelihood, taking into account the phonetic likelihood of phrase candidates and the linguistic likelihood of the semantic inter-phrase dependency relationships. A fast parsing algorithm using breadth-first search is also proposed. The predictor pre-selects the phrase candidates using transition rules combined with a dependency structure to reduce the amount of phonetic processing. The proposed linguistic processor has been tested through speech recognition experiments.
The experimental results show that it greatly increases the accuracy of speech recognition, and the breadth-first parsing algorithm and predictor increase processing speed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-risteski-2021-limitations","url":"https:\/\/aclanthology.org\/2021.acl-long.208","title":"The Limitations of Limited Context for Constituency Parsing","abstract":"Incorporating syntax into neural approaches in NLP has a multitude of practical and scientific benefits. For instance, a language model that is syntax-aware is likely to be able to produce better samples; even a discriminative model like BERT with a syntax module could be used for core NLP tasks like unsupervised syntactic parsing. Rapid progress in recent years was arguably spurred on by the empirical success of the Parsing-Reading-Predict architecture of Shen et al. (2018a), later simplified by the Order Neuron LSTM of Shen et al. (2019). Most notably, this is the first time neural approaches were able to successfully perform unsupervised syntactic parsing (evaluated by various metrics like F-1 score).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sherif-kondrak-2007-substring","url":"https:\/\/aclanthology.org\/P07-1119","title":"Substring-Based Transliteration","abstract":"Transliteration is the task of converting a word from one alphabetic script to another. We present a novel, substring-based approach to transliteration, inspired by phrase-based models of machine translation. We investigate two implementations of substring-based transliteration: a dynamic programming algorithm, and a finite-state transducer. We show that our substring-based transducer not only outperforms a state-of-the-art letter-based approach by a significant margin, but is also orders of magnitude faster.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Colin Cherry and the other members of the NLP research group at the University of Alberta for their helpful comments. This research was supported by the Natural Sciences and Engineering Research Council of Canada.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nicholson-etal-2008-evaluating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/794_paper.pdf","title":"Evaluating and Extending the Coverage of HPSG Grammars: A Case Study for German","abstract":"In this work, we examine and attempt to extend the coverage of a German HPSG grammar. We use the grammar to parse a corpus of newspaper text and evaluate the proportion of sentences which have a correct attested parse, and analyse the cause of errors in terms of lexical or constructional gaps which prevent parsing. Then, using a maximum entropy model, we evaluate prediction of lexical types in the HPSG type hierarchy for unseen lexemes.
By automatically adding entries to the lexicon, we observe that we can increase coverage without substantially decreasing precision.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rose-etal-2002-reuters","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/80.pdf","title":"The Reuters Corpus Volume 1 -from Yesterday's News to Tomorrow's Language Resources","abstract":"Reuters, the global information, news and technology group, has for the first time made available, free of charge, large quantities of archived Reuters news stories for use by research communities around the world. The Reuters Corpus Volume 1 (RCV1) includes over 800,000 news stories, typical of the annual English language news output of Reuters. This paper describes the origins of RCV1, the motivations behind its creation, and how it differs from previous corpora. In addition we discuss the system of category coding, whereby each story is annotated for topic, region and industry sector. We also discuss the process by which these codes were applied, and examine the issues involved in maintaining quality and consistency of coding in an operational, commercial environment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper owes much to the work of Chris Harris, who performed the original analysis of the inter-coder consistency described in Section 5. This paper has also benefited greatly from discussions with Dave Lewis, whose unfeasibly large appetite for detail on RCV1 never ceases to amaze us. We are also grateful for the input and efforts of other Reuters personnel (past and present), notably Trevor Bartlett, Dave Beck, Chris Porter, Jo Rabin, Richard Willis and Andrew Young.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pretkalnina-etal-2011-prague","url":"https:\/\/aclanthology.org\/W11-4645","title":"A Prague Markup Language profile for the SemTi-Kamols grammar model","abstract":"In this paper we demonstrate a hybrid treebank encoding format, derived from the dependency-based format used in the Prague Dependency Treebank (PDT). We have specified a Prague Markup Language (PML) profile for the SemTi-Kamols hybrid grammar model that has been developed for languages with relatively free word order (e.g. Latvian). This has allowed us to exploit the tree editor TrEd that has been used in PDT development.
As a proof of concept, a small Latvian treebank has been created by annotating 100 sentences from \"Sophie's World\".","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is funded by the State Research Programme \"National Identity\" (project No 3) and the Latvian Council of Sciences project \"Application of Factored Methods in English-Latvian Statistical Machine Translation System\".","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"muller-2002-probabilistic","url":"https:\/\/aclanthology.org\/W02-0608","title":"Probabilistic Context-Free Grammars for Phonology","abstract":"We present a phonological probabilistic context-free grammar, which describes the word and syllable structure of German words. The grammar is trained on a large corpus by a simple supervised method, and evaluated on a syllabification task achieving 96.88% word accuracy on word tokens, and 90.33% on word types. We added rules for English phonemes to the grammar, and trained the enriched grammar on an English corpus. Both grammars are evaluated qualitatively showing that probabilistic context-free grammars can contribute linguistic knowledge to phonology. Our formal approach is multilingual, while the training data is language-dependent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2022-gpt","url":"https:\/\/aclanthology.org\/2022.acl-long.131","title":"GPT-D: Inducing Dementia-related Linguistic Anomalies by Deliberate Degradation of Artificial Neural Language Models","abstract":"Deep learning (DL) techniques involving fine-tuning large numbers of model parameters have delivered impressive performance on the task of discriminating between language produced by cognitively healthy individuals, and those with Alzheimer's disease (AD). However, questions remain about their ability to generalize beyond the small reference sets that are publicly available for research. As an alternative to fitting model parameters directly, we propose a novel method by which a Transformer DL model (GPT-2) pre-trained on general English text is paired with an artificially degraded version of itself (GPT-D), to compute the ratio between these two models' perplexities on language from cognitively healthy and impaired individuals. This technique approaches state-of-the-art performance on text data from a widely used \"Cookie Theft\" picture description task, and unlike established alternatives also generalizes well to spontaneous conversations. Furthermore, GPT-D generates text with characteristics known to be associated with AD, demonstrating the induction of dementia-related linguistic anomalies.
Our study is a step toward better understanding of the relationships between the inner workings of generative neural language models, the language that they produce, and the deleterious effects of dementia on human speech and language characteristics.","label_nlp4sg":1,"task":["Inducing Dementia - related Linguistic Anomalies"],"method":["GPT - 2","Language Models"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research was supported by grants from the National Institute on Aging (AG069792) and Administrative Supplement (LM011563-S1) from the National Library of Medicine","year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poesio-etal-2002-acquiring","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/117.pdf","title":"Acquiring Lexical Knowledge for Anaphora Resolution","abstract":"The lack of adequate bases of commonsense or even lexical knowledge is perhaps the main obstacle to the development of high-performance, robust tools for semantic interpretation. It is also generally accepted that, notwithstanding the increasing availability in recent years of substantial hand-coded lexical resources such as WordNet and EuroWordNet, addressing the commonsense knowledge bottleneck will eventually require the development of effective techniques for acquiring such information automatically, e.g., from corpora. We discuss research aimed at improving the performance of anaphora resolution systems by acquiring the commonsense knowledge required to resolve the more complex cases of anaphora, such as bridging references. We focus in particular on the problem of acquiring information about part-of relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was in part supported by an EPSRC Advanced Research Fellowship (Massimo Poesio). Massimo Poesio wishes to thank Chris Brew, Will Lowe, Scott MacDonald, Maria Teresa Pazienza, and Peter Wiemer-Hastings for comments. Thanks also to audiences at the University of Edinburgh, University of Essex, and Universita' di Roma Tor Vergata.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"artale-etal-1997-lexical","url":"https:\/\/aclanthology.org\/W97-0805","title":"Lexical Discrimination with the Italian Version of WordNet","abstract":"We present a prototype of the Italian version of WORDNET, a general computational lexical resource. Some relevant extensions are discussed to make it usable for parsing: in particular we add verbal selectional restrictions to make lexical discrimination effective. Italian WORDNET has been coupled with a parser and a number of experiments have been performed to identify the methodology with the best trade-off between disambiguation rate and precision.
Results confirm the intuitive hypothesis on the role of selectional restrictions and show evidence for a WORDNET-like organization of lexical senses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yasuda-etal-2004-automatic","url":"https:\/\/aclanthology.org\/W04-1708","title":"Automatic Measuring of English Language Proficiency using MT Evaluation Technology","abstract":"Assisting in foreign language learning is one of the major areas in which natural language processing technology can contribute. This paper proposes a computerized method of measuring communicative skill in English as a foreign language. The proposed method consists of two parts. The first part involves a test sentence selection part to achieve precise measurement with a small test set. The second part is the actual measurement, which has three steps. Step one asks proficiency-known human subjects to translate Japanese sentences into English. Step two gauges the match between the translations of the subjects and correct translations based on the n-gram overlap or the edit distance between translations. Step three learns the relationship between proficiency and match. By regression it finds a straight-line fitting for the scatter plot representing the proficiency and matches of the subjects. Then, it estimates proficiency of proficiency-unknown users by using the line and the match. Based on this approach, we conducted experiments on estimating the Test of English for International Communication (TOEIC) score. We collected two sets of data consisting of English sentences translated from Japanese. The first set consists of 330 sentences, each translated to English by 29 subjects with varied English proficiency. The second set consists of 510 sentences translated in a similar manner by a separate group of 18 subjects. We found that the estimated scores correlated with the actual scores.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported here was supported in part by a contract with the National Institute of Information and Communications Technology entitled \"A study of speech dialogue translation technology based on a large corpus\".","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2019-dynamically","url":"https:\/\/aclanthology.org\/P19-1123","title":"Dynamically Composing Domain-Data Selection with Clean-Data Selection by ``Co-Curricular Learning'' for Neural Machine Translation","abstract":"Noise and domain are important aspects of data quality for neural machine translation. Existing research focuses separately on domain-data selection, clean-data selection, or their static combination, leaving the dynamic interaction across them not explicitly examined. This paper introduces a \"co-curricular learning\" method to compose dynamic domain-data selection with dynamic clean-data selection, for transfer learning across both capabilities. We apply an EM-style optimization procedure to further refine the \"co-curriculum\".
Experiment results and analysis with two domains demonstrate the effectiveness of the method and the properties of data scheduled by the co-curriculum.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Yuan Cao for his help and advice, the three anonymous reviewers for their constructive reviews, Melvin Johnson for early discussions, Jason Smith, Orhan Firat, Macduff Hughes for comments on an earlier draft, and Wolfgang Macherey for his early work on the topic.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sheang-saggion-2021-controllable","url":"https:\/\/aclanthology.org\/2021.inlg-1.38","title":"Controllable Sentence Simplification with a Unified Text-to-Text Transfer Transformer","abstract":"Recently, a large pre-trained language model called T5 (A Unified Text-to-Text Transfer Transformer) has achieved state-of-the-art performance in many NLP tasks. However, no study has been found using this pre-trained model on Text Simplification. Therefore, in this paper, we explore the use of T5 fine-tuning on Text Simplification, combined with a controllable mechanism to regulate the system outputs that can help generate adapted text for different target audiences. Our experiments show that our model achieves remarkable results with gains of between +0.69 and +1.41 over the current state-of-the-art (BART+ACCESS). We argue that using a pre-trained model such as T5, trained on several tasks with large amounts of data, can help improve Text Simplification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge support from the project Context-aware Multilingual Text Simplifi-","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"sporleder-etal-2006-spotting","url":"https:\/\/aclanthology.org\/W06-2206","title":"Spotting the `Odd-one-out': Data-Driven Error Detection and Correction in Textual Databases","abstract":"We present two methods for semi-automatic detection and correction of errors in textual databases. The first method (horizontal correction) aims at correcting inconsistent values within a database record, while the second (vertical correction) focuses on values which were entered in the wrong column. Both methods are data-driven and language-independent. We utilise supervised machine learning, but the training data is obtained automatically from the database; no manual annotation is required. Our experiments show that a significant proportion of errors can be detected by the two methods. Furthermore, both methods were found to lead to a precision that is high enough to make semi-automatic error correction feasible.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was funded by NWO (Netherlands Organisation for Scientific Research) and carried out at the Naturalis Research Labs in Leiden. We would like to thank Pim Arntzen and Erik van Nieukerken from Naturalis for guidance and helpful discussions.
We are also grateful to two anonymous reviewers for useful comments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liberman-1986-questions","url":"https:\/\/aclanthology.org\/P86-1026","title":"Questions about Connectionist Models of Natural Language","abstract":"My role as interlocutor for this ACL Forum on Connectionism is to promote discussion by asking questions and making provocative comments. I will begin by asking some questions that I will attempt to answer myself, in order to define some terms. I will then pose some questions for the panel and the audience to discuss, if they are interested, and I will make a few critical comments on the abstracts submitted by Waltz and Sejnowski, intended to provoke responses from them.\nThe basic metaphor involves a finite set of nodes interconnected by a finite set of directed arcs. Each node transmits on its output arcs some function of what it receives on its input arcs; these transfer functions are usually described parametrically, for instance in terms of a linear combination of the inputs composed with some nonlinear threshold-like function; the transfer function may involve a random variable.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1986,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"white-2006-ccg","url":"https:\/\/aclanthology.org\/W06-1403","title":"CCG Chart Realization from Disjunctive Inputs","abstract":"This paper presents a novel algorithm for efficiently generating paraphrases from disjunctive logical forms. The algorithm is couched in the framework of Combinatory Categorial Grammar (CCG) and has been implemented as an extension to the OpenCCG surface realizer. The algorithm makes use of packed representations similar to those initially proposed by Shemtov (1997), generalizing the approach in a more straightforward way than in the algorithm ultimately adopted therein.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author thanks Mary Ellen Foster, Amy Isard, Johanna Moore, Mark Steedman and the anonymous reviewers for helpful feedback and discussion, and the University of Edinburgh's Institute for Communicating and Collaborative Systems for partially supporting this work.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hartmann-gurevych-2013-framenet","url":"https:\/\/aclanthology.org\/P13-1134","title":"FrameNet on the Way to Babel: Creating a Bilingual FrameNet Using Wiktionary as Interlingual Connection","abstract":"We present a new bilingual FrameNet lexicon for English and German. It is created through a simple, but powerful approach to construct a FrameNet in any language using Wiktionary as an interlingual representation. Our approach is based on a sense alignment of FrameNet and Wiktionary, and subsequent translation disambiguation into the target language. We perform a detailed evaluation of the created resource and a discussion of Wiktionary as an interlingual connection for the cross-language transfer of lexical-semantic resources.
The created resource","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I\/82806 and by the German Research Foundation under grant No. GU 798\/3-1 and grant No. GU 798\/9-1. We thank Christian Meyer and Judith Eckle-Kohler for insightful discussions and comments, and Christian Wirth for contributions in the early stage of this project. We also thank the anonymous reviewers for their helpful remarks.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"durrani-etal-2016-qcris","url":"https:\/\/aclanthology.org\/2016.iwslt-1.18","title":"QCRI's Machine Translation Systems for IWSLT'16","abstract":"This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign. We participated in the Arabic\u2192English and English\u2192Arabic tracks. We built both Phrase-based and Neural machine translation models, in an effort to probe whether the newly emerged NMT framework surpasses the traditional phrase-based systems in Arabic-English language pairs. We trained a very strong phrase-based system including a big language model, the Operation Sequence Model, Neural Network Joint Model and Class-based models along with different domain adaptation techniques such as MML filtering, mixture modeling and using fine-tuning over the NNJM model. However, a Neural MT system, trained by stacking data from different genres through fine-tuning, and applying ensemble over 8 models, beat our very strong phrase-based system by a significant 2 BLEU points margin in the Arabic\u2192English direction. We did not obtain similar gains in the other direction but were still able to outperform the phrase-based system. We also applied system combination on phrase-based and NMT outputs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"panicheva-etal-2010-personal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/491_Paper.pdf","title":"Personal Sense and Idiolect: Combining Authorship Attribution and Opinion Analysis","abstract":"Subjectivity analysis and authorship attribution are very popular areas of research. However, work in these two areas has been done separately. Our conjecture is that by combining information about subjectivity in texts and authorship, the performance of both tasks can be improved. In the paper a personalized approach to opinion mining is presented, in which the notions of personal sense and idiolect are introduced; the approach is applied to the polarity classification task. It is assumed that different authors express their private states in text individually, and opinion mining results could be improved by analyzing texts by different authors separately. The hypothesis is tested on a corpus of movie reviews by ten authors. The results of applying the personalized approach to opinion mining are presented, confirming that the approach increases the performance of the opinion mining task. Automatic authorship attribution is further applied to model the personalized approach, classifying documents by their assumed authorship.
Although the automatic authorship classification imposes a number of limitations on the dataset for further experiments, after overcoming these issues the authorship attribution technique modeling the personalized approach confirms the increase over the baseline with no authorship information used.","label_nlp4sg":1,"task":["Authorship Attribution","Opinion Analysis"],"method":["Model combination"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The work of the third author is supported by the TEXT-ENTERPRISE 2.0 TIN2009-13391-C04-03 research project.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"mori-2002-information","url":"https:\/\/aclanthology.org\/C02-1018","title":"Information Gain Ratio as Term Weight: The case of Summarization of IR Results","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kotnis-etal-2022-milie","url":"https:\/\/aclanthology.org\/2022.acl-long.478","title":"MILIE: Modular \\& Iterative Multilingual Open Information Extraction","abstract":"Open Information Extraction (OpenIE) is the task of extracting (subject, predicate, object) triples from natural language sentences. Current OpenIE systems extract all triple slots independently. In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slots, followed by the difficult ones by conditioning on the easy slots, and therefore achieve a better overall extraction. Based on this hypothesis, we propose a neural OpenIE system, MILIE, that operates in an iterative fashion. Due to the iterative nature, the system is also modular-it is possible to seamlessly integrate rule-based extraction systems with a neural end-to-end system, thereby allowing rule-based systems to supply extraction slots which MILIE can leverage for extracting the remaining slots. We confirm our hypothesis empirically: MILIE outperforms SOTA systems on multiple languages ranging from Chinese to Arabic. Additionally, we are the first to provide an OpenIE test dataset for Arabic and Galician.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bos-2008-wide","url":"https:\/\/aclanthology.org\/W08-2222","title":"Wide-Coverage Semantic Analysis with Boxer","abstract":"Boxer is an open-domain software component for semantic analysis of text, based on Combinatory Categorial Grammar (CCG) and Discourse Representation Theory (DRT). Used together with the C&C tools, Boxer reaches more than 95% coverage on newswire texts. The semantic representations produced by Boxer, known as Discourse Representation Structures (DRSs), incorporate neo-Davidsonian representations for events, using the VerbNet inventory of thematic roles. The resulting DRSs can be translated to ordinary first-order logic formulas and be processed by standard theorem provers for first-order logic.
Boxer's performance on the shared task for comparing semantic representations was promising. It was able to produce complete DRSs for all seven texts. Manually inspecting the output revealed that: (a) the computed predicate argument structure was generally of high quality, in particular dealing with hard constructions involving control or coordination; (b) discourse structure triggered by conditionals, negation or discourse adverbs was overall correctly computed; (c) some measure and time expressions are correctly analysed, others aren't; (d) several shallow analyses are given for lexical phrases that require deep analysis; (e) bridging references and pronouns are not resolved in most cases. Boxer is distributed with the C&C tools and freely available for research purposes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kontos-etal-2000-arista","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/360.pdf","title":"ARISTA Generative Lexicon for Compound Greek Medical Terms","abstract":"A Generative Lexicon for Compound Greek Medical Terms based on the ARISTA method is proposed in this paper. The concept of a representation independent definition-generating lexicon for compound words is introduced in this paper following the ARISTA method. This concept is used as a basis for developing a generative lexicon of Greek compound medical terminology using the senses of their component words expressed in natural language and not in a formal language. A Prolog program that was implemented for this task is presented that is capable of computing implicit relations between the component words in a sublanguage using linguistic and extra-linguistic knowledge. An extra-linguistic knowledge base containing knowledge derived from the domain or microcosm of the sublanguage is used for supporting the computation of the implicit relations. The performance of the system was evaluated by generating possible senses of the compound words automatically and judging the correctness of the results by comparing them with definitions given in a medical lexicon expressed in the language of the lexicographer.","label_nlp4sg":1,"task":["Compound Greek Medical Terms"],"method":["Generative Lexicon"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"de-cea-etal-2002-rdf","url":"https:\/\/aclanthology.org\/W02-1701","title":"RDF(S)\/XML Linguistic Annotation of Semantic Web Pages","abstract":"Although with the Semantic Web initiative much research on web page semantic annotation has already been done by AI researchers, linguistic text annotation, including the semantic one, was originally developed in Corpus Linguistics and its results have been somehow neglected by AI. The purpose of the research presented in this proposal is to prove that integration of results in both fields is not only possible, but also highly useful in order to make Semantic Web pages more machine-readable.
A multi-level (possibly multipurpose and multilanguage) annotation model based on EAGLES standards and Ontological Semantics, implemented with last generation Semantic Web languages (RDF(S)\/XML) is being developed to fit the needs of both communities; the present paper focuses on its semantic level.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research described in this paper is supported by MCyT (Spanish Ministry of Science and Technology) under the project name: ContentWeb: \"PLATAFORMA TECNOL\u00d3GICA PARA LA WEB SEM\u00c1NTICA: ONTOLOG\u00cdAS, AN\u00c1LISIS DE LENGUAJE NATURAL Y COMERCIO ELECTR\u00d3NICO\" -TIC2001-2745 (\"ContentWeb: Semantic Web Technologic Platform: Ontologies, Natural Language Analysis and E-Business\"). We would also like to thank \u00d3scar Corcho, Socorro Bernardos and Mariano Fern\u00e1ndez for their help with the ontological aspects of this paper.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"leunbach-1979-om","url":"https:\/\/aclanthology.org\/W79-0105","title":"Om automatisk orddeling. Forslag til en unders\\ogelse. (About automatic word-splitting. A survey proposal.) [In Danish]","abstract":"Om automatisk orddeling. Forslag til en unders\u00f8gelse.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1979,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bansal-klein-2012-coreference","url":"https:\/\/aclanthology.org\/P12-1041","title":"Coreference Semantics from Web Features","abstract":"To address semantic ambiguities in coreference resolution, we use Web n-gram features that capture a range of world knowledge in a diffuse but robust way. Specifically, we exploit short-distance cues to hypernymy, semantic compatibility, and semantic context, as well as general lexical co-occurrence. When added to a state-of-the-art coreference baseline, our Web features give significant gains on multiple datasets (ACE 2004 and ACE 2005) and metrics (MUC and B3), resulting in the best results reported to date for the end-to-end task of coreference resolution.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Nathan Gilbert, Adam Pauls, and the anonymous reviewers for their helpful suggestions. This research is supported by Qualcomm via an Innovation Fellowship to the first author and by BBN under DARPA contract HR0011-12-C-0014.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hogenhout-matsumoto-1996-fast","url":"https:\/\/aclanthology.org\/Y96-1040","title":"Fast Statistical Grammar Induction","abstract":"The statistical induction of context free grammars from bracketed corpora with the Inside Outside Algorithm has often inspired researchers, but the computational complexity has made it impossible to generate a large scale grammar. The method we suggest achieves the same results as earlier research, but at a much smaller expense in computer time.
We explain the modifications needed to the algorithm, give results of experiments and compare these to results reported in other literature.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"poria-etal-2015-deep","url":"https:\/\/aclanthology.org\/D15-1303","title":"Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis","abstract":"We present a novel way of extracting features from short texts, based on the activation values of an inner layer of a deep convolutional neural network. We use the extracted features in multimodal sentiment analysis of short video clips representing one sentence each. We use the combined feature vectors of textual, visual, and audio modalities to train a classifier based on multiple kernel learning, which is known to be good at heterogeneous data. We obtain 14% performance improvement over the state of the art and present a parallelizable decision-level data fusion method, which is much faster, though slightly less accurate.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"marom-zukerman-2006-automating","url":"https:\/\/aclanthology.org\/W06-0706","title":"Automating Help-desk Responses: A Comparative Study of Information-gathering Approaches","abstract":"We present a comparative study of corpus-based methods for the automatic synthesis of email responses to help-desk requests. Our methods were developed by considering two operational dimensions: (1) information-gathering technique, and (2) granularity of the information. In particular, we investigate two techniques-retrieval and prediction-applied to information represented at two levels of granularity-sentence-level and document level. We also developed a hybrid method that combines prediction with retrieval. Our results show that the different approaches are applicable in different situations, addressing a combined 72% of the requests with either complete or partial responses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by grant LP0347470 from the Australian Research Council and by an endowment from Hewlett-Packard. The authors also thank Hewlett-Packard for the extensive help-desk data, and Tony Tony for assistance with the sentence-segmentation software, and Kerri Morgan and Michael Niemann for developing the syntactic feature extraction code.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"larson-etal-2019-evaluation","url":"https:\/\/aclanthology.org\/D19-1131","title":"An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction","abstract":"Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example.
We introduce a new dataset that includes queries that are out-of-scope, i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"adams-etal-2016-distributed","url":"https:\/\/aclanthology.org\/W16-4904","title":"Distributed Vector Representations for Unsupervised Automatic Short Answer Grading","abstract":"We address the problem of automatic short answer grading, evaluating a collection of approaches inspired by recent advances in distributional text representations. In addition, we propose an unsupervised approach for determining text similarity using one-to-many alignment of word vectors. We evaluate the proposed technique across two datasets from different domains, namely, computer science and English reading comprehension, that additionally vary between high-school level and undergraduate students. Experiments demonstrate that the proposed technique often outperforms other compositional distributional semantics approaches as well as vector space methods such as latent semantic analysis. When combined with a scoring scheme, the proposed technique provides a powerful tool for tackling the complex problem of short answer grading. We also discuss a number of other key points worthy of consideration in preparing viable, easy-to-deploy automatic short-answer grading systems for the real world.","label_nlp4sg":1,"task":["Short Answer Grading"],"method":["Distributed Vector Representations","unsupervised approach"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bobic-etal-2013-scai","url":"https:\/\/aclanthology.org\/S13-2111","title":"SCAI: Extracting drug-drug interactions using a rich feature vector","abstract":"Automatic relation extraction provides great support for scientists and database curators in dealing with the extensive amount of biomedical textual data. The DDIExtraction 2013 challenge poses the task of detecting drug-drug interactions and further categorizing them into one of the four relation classes. We present our machine learning system which utilizes lexical, syntactical and semantic based feature sets. Resampling, balancing and ensemble learning experiments are performed to infer the best configuration.
For general drug-drug relation extraction, the system achieves 70.4% in F1 score.","label_nlp4sg":1,"task":["Extracting drug-drug interactions","relation extraction"],"method":["feature vector","lexical, syntactical and semantic based feature sets"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Roman Klinger for fruitful discussions. T. Bobi\u0107 was funded by the Bonn-Aachen International Center for Information Technology (B-IT) Research School.","year":2013,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-etal-2021-towards","url":"https:\/\/aclanthology.org\/2021.emnlp-main.589","title":"Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach","abstract":"Reliable automatic evaluation of dialogue systems under an interactive environment has long been overdue. An ideal environment for evaluating dialog systems, also known as the Turing test, needs to involve human interaction, which is usually not affordable for large scale experiments. Though researchers have attempted to use metrics for language generation tasks (e.g., perplexity, BLEU) or some model-based reinforcement learning methods (e.g., self-play evaluation) for automatic evaluation, these methods only show very weak correlation with the actual human evaluation in practice. To bridge such a gap, we propose a new framework named ENIGMA for estimating human evaluation scores based on recent advances of off-policy evaluation in reinforcement learning. ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation, making automatic evaluations feasible. More importantly, ENIGMA is model-free and agnostic to the behavior policies for collecting the experience data (see details in Section 2), which significantly alleviates the technical difficulties of modeling complex dialogue environments and human behaviors. Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ahlswede-1985-tool","url":"https:\/\/aclanthology.org\/P85-1033","title":"A Tool Kit for Lexicon Building","abstract":"This paper describes a set of interactive routines that can be used to create, maintain, and update a computer lexicon. The routines are available to the user as a set of commands resembling a simple operating system. The lexicon produced by this system is based on lexical-semantic relations, but is compatible with a variety of other models of lexicon structure. The lexicon builder is suitable for the generation of moderate-sized vocabularies and has been used to construct a lexicon for a small medical expert system.
A future version of the lexicon builder will create a much larger lexicon by parsing definitions from machine-readable dictionaries.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"benson-1969-nexus","url":"https:\/\/aclanthology.org\/C69-0301","title":"Nexus a Linguistic Technique for Precoordination","abstract":"A method for automatically precoordinating index terms was devised to form combinations of terms which are stored as subject headings. A computer program accepts lists of auto-indexed terms and by applying linguistic and sequence rules combines appropriate terms, thereby effecting improved searchability of an information storage and retrieval system. A serious failing exists in many indexing systems in that index terms authorized for use are too general for use by technically-knowledgeable searchers. A search conducted using these terms frequently produces too many documents not specifically related to the users' requirements. An indexing method using the language in which the document was written corrects this failing, but eliminates the generality of the previous approach. A compromise between indexing generality and specificity is offered by NEXUS precoordination which combines specific terms into subject-headings, eliminating improper coordination of terms when matching search requirements with document term sets. NEXUS examines the suffix morpheme of each input term and determines whether or not the term should be a member of an index term combination or precoordination. If insufficient evidence is present to make such a Although some suggestions are made for applying this technique along with a possible output format for a bibliographic application, the chief value of this effort, however, has been to further study those aspects of language that are amenable to computerized analysis for the purpose of improving input and output functions in information retrieval.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1969,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hassan-etal-2007-supertagged","url":"https:\/\/aclanthology.org\/P07-1037","title":"Supertagged Phrase-Based Statistical Machine Translation","abstract":"Until quite recently, extending Phrase-based Statistical Machine Translation (PBSMT) with syntactic structure caused system performance to deteriorate. In this work we show that incorporating lexical syntactic descriptions in the form of supertags can yield significantly better PBSMT systems. We describe a novel PBSMT model that integrates supertags into the target language model and the target side of the translation model. Two kinds of supertags are employed: those from Lexicalized Tree-Adjoining Grammar and Combinatory Categorial Grammar. Despite the differences between these two approaches, the supertaggers give similar improvements. In addition to supertagging, we also explore the utility of a surface global grammaticality measure based on combinatory operators. We perform various experiments on the Arabic to English NIST 2005 test set addressing issues such as sparseness, scalability and the utility of system subcomponents.
Our best result (0.4688 BLEU) improves by 6.1% relative to a state-of-the-art PBSMT model, which compares very favourably with the leading systems on the NIST 2005 task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Srinivas Bangalore and the anonymous reviewers for useful comments on earlier versions of this paper. This work is partially funded by Science Foundation Ireland Principal Investigator Award 05\/IN\/1732, and Netherlands Organization for Scientific Research (NWO) VIDI Award.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"xu-etal-2021-contrastive-document","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.327","title":"Contrastive Document Representation Learning with Graph Attention Networks","abstract":"Recent progress in pretrained Transformer-based language models has shown great success in learning contextual representation of text. However, due to the quadratic self-attention complexity, most of the pretrained Transformers models can only handle relatively short text. It is still a challenge when it comes to modeling very long documents. In this work, we propose to use a graph attention network on top of the available pretrained Transformers model to learn document embeddings. This graph attention network allows us to leverage the high-level semantic structure of the document. In addition, based on our graph document model, we design a simple contrastive learning strategy to pretrain our models on a large amount of unlabeled corpus. Empirically, we demonstrate the effectiveness of our approaches in document classification and document retrieval tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vincze-etal-2017-universal","url":"https:\/\/aclanthology.org\/E17-1034","title":"Universal Dependencies and Morphology for Hungarian - and on the Price of Universality","abstract":"In this paper, we present how the principles of universal dependencies and morphology have been adapted to Hungarian. We report the most challenging grammatical phenomena and our solutions to those. On the basis of the adapted guidelines, we have converted and manually corrected 1,800 sentences from the Szeged Treebank to universal dependency format. We also introduce experiments on this manually annotated corpus for evaluating automatic conversion and the added value of language-specific, i.e. non-universal, annotations.
Our results reveal that converting to universal dependencies is not necessarily trivial; moreover, using language-specific morphological features may have an impact on overall performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research of Rich\u00e1rd Farkas was funded by the J\u00e1nos Bolyai Scholarship.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"koay-etal-2021-sliding","url":"https:\/\/aclanthology.org\/2021.naacl-srw.10","title":"A Sliding-Window Approach to Automatic Creation of Meeting Minutes","abstract":"Meeting minutes record any subject matters discussed, decisions reached and actions taken at meetings. The importance of minuting cannot be overemphasized in a time when a significant number of meetings take place in the virtual space. In this paper, we present a sliding window approach to automatic generation of meeting minutes. It aims to tackle issues associated with the nature of spoken text, including lengthy transcripts and lack of document structure, which make it difficult to identify salient content to be included in the meeting minutes. Our approach combines a sliding window and a neural abstractive summarizer to navigate through the transcripts to find salient content. The approach is evaluated on transcripts of natural meeting conversations, where we compare results obtained for human transcripts and two versions of automatic transcripts and discuss how and to what extent the summarizer succeeds at capturing salient content.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful feedback. We would also like to thank Kaiqiang Song for helping transcribe the meetings using Google's Speech-to-Text API. Xiaojin Dai was supported by NSF DUE-1643835. We thank Amazon for partially sponsoring the research and computation in this study through the Amazon AWS Machine Learning Research Award.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"smith-1997-evaluation","url":"https:\/\/aclanthology.org\/A97-1008","title":"An Evaluation of Strategies for Selective Utterance Verification for Spoken Natural Language Dialog","abstract":"As with human-human interaction, spoken human-computer dialog will contain situations where there is miscommunication. In experimental trials consisting of eight different users, 141 problem-solving dialogs, and 2840 user utterances, the Circuit Fix-It Shop natural language dialog system misinterpreted 18.5% of user utterances. These miscommunications created various problems for the dialog interaction, ranging from repetitive dialog to experimenter intervention to occasional failure of the dialog. One natural strategy for reducing the impact of miscommunication is selective verification of the user's utterances. This paper reports on both context-independent and context-dependent strategies for utterance verification that show that the use of dialog context is crucial for intelligent selection of which utterances to verify.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author expresses his appreciation to D.
Richard Hipp for his work on the error-correcting parser and for his initial work on context-independent verification. The author also wishes to express his thanks to Steven A. Gordon and Robert D. Hoggard for their suggestions concerning this work and an earlier draft of this paper. Other researchers who contributed to the development of the experimental system include Alan W. Biermann, Robert D. Rodman, Ruth S. Day, Dania Egedi, and Robin Gambill. This research has been supported by National Science Foundation Grant IRI-9501571.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jhamtani-clark-2020-learning","url":"https:\/\/aclanthology.org\/2020.emnlp-main.10","title":"Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering","abstract":"Despite the rapid progress in multihop question-answering (QA), models still have trouble explaining why an answer is correct, with limited explanation training data available to learn from. To address this, we introduce three explanation datasets in which explanations formed from corpus facts are annotated. Our first dataset, eQASC, contains over 98K explanation annotations for the multihop question answering dataset QASC, and is the first that annotates multiple candidate explanations for each answer. The second dataset eQASC-perturbed is constructed by crowd-sourcing perturbations (while preserving their validity) of a subset of explanations in QASC, to test consistency and generalization of explanation prediction models. The third dataset eOBQA is constructed by adding explanation annotations to the OBQA dataset to test generalization of models trained on eQASC. We show that this data can be used to significantly improve explanation quality (+14% absolute F1 over a strong retrieval baseline) using a BERT-based classifier, but still behind the upper bound, offering a new challenge for future research. We also explore a delexicalized chain representation in which repeated noun phrases are replaced by variables, thus turning them into generalized reasoning chains (for example: \"X is a Y\" AND \"Y has Z\" IMPLIES \"X has Z\"). We find that generalized chains maintain performance while also being more robust to certain perturbations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Ashish Sabharwal, Tushar Khot, Dirk Groeneveld, Taylor Berg-Kirkpatrick, and anonymous reviewers for useful comments and feedback. We thank Michal Guerquin for helping with the QA2D tool. This work was partly carried out when HJ was interning at AI2.
HJ is funded in part by an Adobe Research Fellowship.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vauquois-etal-1965-syntaxe","url":"https:\/\/aclanthology.org\/C65-1030","title":"Syntaxe Et Interpretation","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1965,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"putra-szabo-2013-uds","url":"https:\/\/aclanthology.org\/W13-3612","title":"UdS at CoNLL 2013 Shared Task","abstract":"This paper describes our submission for the CoNLL 2013 Shared Task, which aims to improve the detection and correction of the five most common grammatical error types in English text written by non-native speakers. Our system concentrates only on two of them; it employs machine learning classifiers for the ArtOrDet-, and a fully deterministic rule-based workflow for the SVA error type.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"jiang-etal-2019-improving","url":"https:\/\/aclanthology.org\/P19-1523","title":"Improving Open Information Extraction via Iterative Rank-Aware Learning","abstract":"Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by gifts from Bosch Research, and the Carnegie Bosch Institute.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"matthies-sogaard-2013-blinkers","url":"https:\/\/aclanthology.org\/D13-1075","title":"With Blinkers on: Robust Prediction of Eye Movements across Readers","abstract":"Nilsson and Nivre (2009) introduced a tree-based model of persons' eye movements in reading. The individual variation between readers reportedly made application across readers impossible. While a tree-based model seems plausible for eye movements, we show that competitive results can be obtained with a linear CRF model. Increasing the inductive bias also makes learning across readers possible.
In fact we observe next-to-no performance drop when evaluating models trained on gaze records of multiple readers on new readers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"park-etal-2021-unsupervised","url":"https:\/\/aclanthology.org\/2021.acl-long.225","title":"Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning","abstract":"Unsupervised machine translation, which utilizes unpaired monolingual corpora as training data, has achieved comparable performance against supervised machine translation. However, it still suffers from data-scarce domains. To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to another domain by utilizing only a small amount of training data. We assume that domain-general knowledge is a significant factor in handling data-scarce domains. Hence, we extend the meta-learning algorithm, which utilizes knowledge learned from high-resource domains, to boost the performance of low-resource UNMT. Our model surpasses a transfer learning-based approach by up to 2-3 BLEU scores. Extensive experimental results show that our proposed algorithm is pertinent for fast adaptation and consistently outperforms other baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"martinez-camara-etal-2017-neural","url":"https:\/\/aclanthology.org\/W17-6927","title":"Neural Disambiguation of Causal Lexical Markers Based on Context","abstract":"Causation is a psychological tool of humans to understand the world and it is projected in natural language. Causation relates two events, so in order to understand the causal relation of those events and the causal reasoning of humans, the study of causality classification is required. We claim that the use of linguistic features may restrict the representation of causality, and dense vector spaces can provide a better encoding of the causal meaning of an utterance. Herein, we propose a neural network architecture only fed with word embeddings for the task of causality classification. Our results show that our claim holds, and we outperform the state-of-the-art on the AltLex corpus. The source code of our experiments is publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1 and grant GU 798\/17-1).
Calculations for this research were conducted on the Lichtenberg high performance computer of the TU Darmstadt.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wagner-filho-etal-2018-brwac","url":"https:\/\/aclanthology.org\/L18-1686","title":"The brWaC Corpus: A New Open Resource for Brazilian Portuguese","abstract":"In this work, we present the construction process of a large Web corpus for Brazilian Portuguese, aiming to achieve a size comparable to the state of the art in other languages. We also discuss our updated sentence-level approach for the strict removal of duplicated content. Following the pipeline methodology, more than 60 million pages were crawled and filtered, with 3.5 million being selected. The obtained multi-domain corpus, named brWaC, is composed of 2.7 billion tokens, and has been annotated with tagging and parsing information. The incidence of non-unique long sentences, an indication of replicated content, which reaches 9% in other Web corpora, was reduced to only 0.5%. Domain diversity was also maximized, with 120,000 different websites contributing content. We are making our new resource freely available for the research community, both for querying and downloading, in the expectation of aiding in new advances for the processing of Brazilian Portuguese.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the partial funding of this research by CNPq-Brazil (grants n.400715\/2014-7, 312114\/2015-0 and 423843\/2016-8) and the Walloon Region (Project BEWARE n. 1610378).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"simov-2013-invited","url":"https:\/\/aclanthology.org\/W13-2401","title":"Invited Talk: Ontologies and Linked Open Data for Acquisition and Exploitation of Language Resources","abstract":"Recent developments in Natural Language Processing (NLP) are heading towards knowledge-rich resources and technology. Integration of linguistically sound grammars, sophisticated machine learning settings and world knowledge background is possible given the availability of the appropriate resources: deep multilingual treebanks, representing detailed syntactic and semantic information; and vast quantities of world knowledge information encoded within ontologies and Linked Open Data datasets (LOD). Thus, the addition of world knowledge facts provides a substantial extension of the traditional semantic resources like WordNet, FrameNet and others. This extension comprises numerous types of Named Entities (Persons, Locations, Events, etc.), their properties (Person has a birthDate; birthPlace, etc.), relations between them (Person works for an Organization), events in which they participated (Person participated in war, etc.), and many other facts. This huge amount of structured knowledge can be considered the missing ingredient of the knowledge-based NLP of the 80's and the beginning of the 90's.\nThe integration of world knowledge within language technology is defined as an ontology-to-text relation comprising different language and world knowledge in a common model. We assume that the lexicon is based on the ontology, i.e. the word senses are represented by concepts, relations or instances.
The problem of lexical gaps is solved by allowing the storage of not only lexica, but also free phrases. The gaps in the ontology (a missing concept for a word sense) are solved by appropriate extensions of the ontology. The mapping is partial in the sense that both elements (the lexicon and the ontology) are artefacts and thus they are never complete. The integration of the interlinked ontology and lexicon with the grammar theory, on the other hand, requires some additional and non-trivial reasoning over the world knowledge. We will discuss phenomena like selectional constraints, metonymy, regular polysemy, bridging relations, which live in the intersective areas between world facts and their language reflection. Thus, the actual text annotation on the basis of ontology-to-text relation requires the explication of additional knowledge like co-occurrence of conceptual information, discourse structure, etc.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"king-etal-2016-unbnlp","url":"https:\/\/aclanthology.org\/S16-1113","title":"UNBNLP at SemEval-2016 Task 1: Semantic Textual Similarity: A Unified Framework for Semantic Processing and Evaluation","abstract":"In this paper we consider several approaches to predicting semantic textual similarity using word embeddings, as well as methods for forming embeddings for larger units of text. We compare these methods to several baselines, and find that none of them outperform the baselines. We then consider both a supervised and unsupervised approach to combining these methods which achieve modest improvements over the baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is financially supported by the Natural Sciences and Engineering Research Council of Canada, the New Brunswick Innovation Foundation, and the University of New Brunswick.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wiegand-ruppenhofer-2015-opinion","url":"https:\/\/aclanthology.org\/K15-1022","title":"Opinion Holder and Target Extraction based on the Induction of Verbal Categories","abstract":"We present an approach for opinion role induction for verbal predicates. Our model rests on the assumption that opinion verbs can be divided into three different types where each type is associated with a characteristic mapping between semantic roles and opinion holders and targets. In several experiments, we demonstrate the relevance of those three categories for the task. We show that verbs can easily be categorized with semi-supervised graph-based clustering and some appropriate similarity metric. The seeds are obtained through linguistic diagnostics. We evaluate our approach against a new manually-compiled opinion role lexicon and perform in-context classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Stephanie K\u00f6ser for annotating parts of the resources presented in this paper. For proofreading the paper, the authors would also like to thank Ines Rehbein, Asad Sayeed and Marc Schulder. We also thank Ashutosh Modi for advising us on word embeddings.
The authors were partially supported by the German Research Foundation (DFG) under grants RU 1873\/2-1 and WI 4204\/2-1.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"yimam-biemann-2018-par4sim","url":"https:\/\/aclanthology.org\/C18-1028","title":"Par4Sim -- Adaptive Paraphrasing for Text Simplification","abstract":"Learning from a real-world data stream and continuously updating the model without explicit supervision is a new challenge for NLP applications with machine learning components. In this work, we have developed an adaptive learning system for text simplification, which improves the underlying learning-to-rank model from usage data, i.e. how users have employed the system for the task of simplification. Our experimental result shows that, over a period of time, the performance of the embedded paraphrase ranking model increases steadily, improving from a score of 62.88% up to 75.70% based on the NDCG@10 evaluation metrics. To our knowledge, this is the first study where an NLP component is adaptively improved through usage.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the SEMSCH project at the University of Hamburg, funded by the German Research Foundation (DFG). We would like to thank the PC chairs, ACs, and reviewers for their detailed comments and suggestions for our paper. We also would like to thank colleagues at LT lab for testing the user interface. Special thanks goes to Rawda Assefa and Sisay Adugna for the proofreading of the Amharic abstract translation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"liparas-etal-2014-concept","url":"https:\/\/aclanthology.org\/W14-5404","title":"Concept-oriented labelling of patent images based on Random Forests and proximity-driven generation of synthetic data","abstract":"Patent images are very important for patent examiners to understand the contents of an invention. Therefore there is a need for automatic labelling of patent images in order to support patent search tasks. Towards this goal, recent research works propose classification-based approaches for patent image annotation. However, one of the main drawbacks of these methods is that they rely upon large annotated patent image datasets, which require substantial manual effort to be obtained. In this context, the proposed work performs extraction of concepts from patent images building upon a supervised machine learning framework, which is trained with limited annotated data and automatically generated synthetic data. The classification is realised with Random Forests (RF) and a combination of visual and textual features. First, we make use of RF's implicit ability to detect outliers to rid our data of unnecessary noise. Then, we generate new synthetic data cases by means of Synthetic Minority Over-sampling Technique (SMOTE). We evaluate the different retrieval parts of the framework by using a dataset from the footwear domain.
The results of the experiments indicate the benefits of using the proposed methodology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"lai-etal-2021-joint","url":"https:\/\/aclanthology.org\/2021.acl-long.488","title":"Joint Biomedical Entity and Relation Extraction with Knowledge-Enhanced Collective Inference","abstract":"Compared to the general news domain, information extraction (IE) from biomedical text requires much broader domain knowledge. However, many previous IE methods do not utilize any external knowledge during inference. Due to the exponential growth of biomedical publications, models that do not go beyond their fixed set of parameters will likely fall behind. Inspired by how humans look up relevant information to comprehend a scientific text, we present a novel framework that utilizes external knowledge for joint entity and relation extraction named KECI (Knowledge-Enhanced Collective Inference). Given an input text, KECI first constructs an initial span graph representing its initial understanding of the text. It then uses an entity linker to form a knowledge graph containing relevant background knowledge for the entity mentions in the text. To make the final predictions, KECI fuses the initial span graph and the knowledge graph into a more refined graph using an attention mechanism. KECI takes a collective approach to link mention spans to entities by integrating global relational information into local representations using graph convolutional networks. Our experimental results show that the framework is highly effective, achieving new state-of-the-art results in two different benchmark datasets: BioRelEx (binding interaction detection) and ADE (adverse drug event extraction). For example, KECI achieves absolute improvements of 4.59% and 4.91% in F1 scores over the state-of-the-art on the BioRelEx entity and relation extraction tasks.","label_nlp4sg":1,"task":["Biomedical Entity and Relation Extraction"],"method":["attention","graph convolutional networks"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wiebe-1997-writing","url":"https:\/\/aclanthology.org\/W97-0214","title":"Writing Annotation Instructions","abstract":"In two corpus annotation projects, we followed similar strategies for developing annotation instructions and obtained good inter-coder reliability results for both (the instructions are similar in style to Allen & Core 1996). Our goal in developing the annotation instructions was that they can be used reliably, after a reasonable amount of training, by taggers who are non-experts but who have good language skills and the ability to pay close attention to detail. The instructions were developed iteratively, applying the current scheme and then revising it in light of difficulties that arose. We did not attempt to specify a formal set of rules for the taggers to follow. Rather, we give representative examples and appeal to the taggers' intuitions, asking them to generalize from the examples to new situations encountered in the text or dialog.
An important strategy is to acknowledge, in the instructions, the weaknesses of the task definition and the difficulties the tagger is likely to face. If, for example, the taggers are being asked to categorize objects into one of a set of mutually exclusive, exhaustive classes, for most NLP problems, the taggers will be faced with borderline, ambiguous, and vague instances. We give the taggers strategies for dealing with such problems, such as asking themselves what is the most focal meaning component of the word in that particular context. The taggers should also be assisted in targeting exactly which distinctions they are to make. We have observed taggers' desires to take into account all aspects of the general problem surrounding the task. If there are closely related distinctions that are not to be tagged for, such as, for example, distinctions related to syntactic function, what we do is outline a related tagging task, to contrast it with the one the taggers are performing and to help them zero in on the particular distinctions they are to make.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"delmonte-etal-2010-deep","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/383_Paper.pdf","title":"Deep Linguistic Processing with GETARUNS for Spoken Dialogue Understanding","abstract":"In this paper we will present work carried out to scale up the system for text understanding called GETARUNS, and port it to be used in dialogue understanding. The current goal is that of extracting automatically argumentative information in order to build argumentative structure. The long term goal is using argumentative structure to produce automatic summarization of spoken dialogues. Very much like other deep linguistic processing systems, our system is a generic text\/dialogue understanding system that can be used in connection with an ontology (WordNet) and other similar repositories of commonsense knowledge. We will present the adjustments we made in order to cope with transcribed spoken dialogues like those produced in the ICSI Berkeley project. In a final section we present preliminary evaluation of the system on two tasks: the task of automatic argumentative labeling and another frequently addressed task: referential vs. non-referential pronominal detection. Results obtained fare much higher than those reported in similar experiments with machine learning approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"andre-rist-1997-planning","url":"https:\/\/aclanthology.org\/W97-1409","title":"Planning Referential Acts for Animated Presentation Agents","abstract":"Computer-based presentation systems enable the realization of effective and dynamic presentation styles that incorporate multiple media. In particular, they allow for the emulation of conversational styles known from personal human-human communication. In this paper, we argue that life-like characters are an effective means of encoding references to world objects in a presentation.
We present a two-phase approach which first generates high-level referential acts and then transforms them into fine-grained animation sequences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the BMBF under the grants ITW 9400 7 and 9701 0. We would like to thank Jochen M\u00fcller for his work on the Persona server and the overall system integration.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"feilmayr-etal-2012-evaliex","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/204_Paper.pdf","title":"EVALIEX --- A Proposal for an Extended Evaluation Methodology for Information Extraction Systems","abstract":"Assessing the correctness of extracted data requires performance evaluation, which is accomplished by calculating quality metrics. The evaluation process must cope with the challenges posed by information extraction and natural language processing. In previous work, most of the existing methodologies have been shown to support only traditional scoring metrics. Our research work addresses requirements, which arose during the development of three productive rule-based information extraction systems. The main contribution is twofold: First, we developed a proposal for an evaluation methodology that provides the flexibility and effectiveness needed for comprehensive performance measurement. The proposal extends state-of-the-art scoring metrics by measuring string and semantic similarities and by parameterization of metric scoring, thus simulating human judgment. Second, we implemented an IE evaluation tool named EVALIEX, which integrates these measurement concepts and provides an efficient user interface that supports evaluation control and the visualization of IE results. To guarantee domain independence, the tool additionally provides a Generic Mapper for XML Instances (GeMap) that maps domain-dependent XML files containing IE results to generic ones. Compared to other tools, it provides more flexible testing and better visualization of extraction results for the comparison of different (versions of) information extraction systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-harrison-2021-error","url":"https:\/\/aclanthology.org\/2021.alvr-1.2","title":"Error Causal inference for Multi-Fusion models","abstract":"In this paper, we propose an error causal inference method that could be used for finding dominant features for a faulty instance under a well-trained multi-modality input model, which could apply to any testing instance.
We evaluate our method using a well-trained multi-modality stylish caption generation model and find causal inferences that can provide insights for the next optimization step.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bilbao-jayo-almeida-2018-political","url":"https:\/\/aclanthology.org\/W18-3513","title":"Political discourse classification in social networks using context sensitive convolutional neural networks","abstract":"In this study we propose a new approach to analyse the political discourse in online social networks such as Twitter. To do so, we have built a discourse classifier using Convolutional Neural Networks. Our model has been trained using election manifestos annotated manually by political scientists following the Regional Manifestos Project (RMP) methodology. In total, it has been trained with more than 88,000 sentences extracted from more than 100 annotated manifestos. Our approach takes into account the context of the phrase in order to classify it, such as what was previously said and the political affiliation of the transmitter. To improve the classification results we have used a simplified political message taxonomy developed within the Electronic Regional Manifestos Project (E-RMP). Using this taxonomy, we have validated our approach by analysing the Twitter activity of the main Spanish political parties during the 2015 and 2016 Spanish general elections and providing a study of their discourse.","label_nlp4sg":1,"task":["Political discourse classification"],"method":["convolutional neural networks"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the Basque Government's Department of Education for the predoctoral funding; the Ministry of Economy, Industry and Competitiveness of Spain under Grant No. CSO2015-64495-R (Electronic Regional Manifestos Project); and NVIDIA Corporation with the donation of the Titan X used for this research. We thank the Regional Manifestos Project team (Braulio G\u00f3mez Fortes and Matthias Scantamburlo) for making available their dataset of annotated political manifestos and tweets.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"schuster-manning-2016-enhanced","url":"https:\/\/aclanthology.org\/L16-1376","title":"Enhanced English Universal Dependencies: An Improved Representation for Natural Language Understanding Tasks","abstract":"Many shallow natural language understanding tasks use dependency trees to extract relations between content words. However, strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. To mitigate this problem, the original Stanford Dependencies representation also defines two dependency graph representations which contain additional and augmented relations that explicitly capture otherwise implicit relations between content words.
In this paper, we revisit and extend these dependency graph representations in light of the recent Universal Dependencies (UD) initiative and provide a detailed account of an enhanced and an enhanced++ English UD representation. We further present a converter from constituency to basic, i.e., strict surface structure, UD trees, and a converter from basic UD trees to enhanced and enhanced++ English UD graphs. We release both converters as part of Stanford CoreNLP and the Stanford Parser.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kwok-deng-2002-corpus","url":"https:\/\/aclanthology.org\/W02-1809","title":"Corpus-Based Pinyin Name Resolution","abstract":"For readers of English text who know some Chinese, Pinyin codes that spell out Chinese names are often ambiguous as to their original Chinese character representations if the names are new or not well known. For English-Chinese cross language retrieval, failure to accurately translate Pinyin names in a query to Chinese characters can lead to dismal retrieval effectiveness. This paper presents an approach of extracting Pinyin names from English text, suggesting translations to these Pinyin using a database of names and their characters with usage probabilities, followed with IR techniques with a corpus as a disambiguation tool to resolve the translation candidates.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially sponsored by the Space and Naval Warfare Systems Center San Diego, under Grant No. N66001-00-1-8912. We thank BBN for the use of their IdentiFinder software.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ovchinnikova-etal-2009-automatic","url":"https:\/\/aclanthology.org\/D09-1144","title":"Automatic Acquisition of the Argument-Predicate Relations from a Frame-Annotated Corpus","abstract":"This paper presents an approach to automatic acquisition of the argument-predicate relations from a semantically annotated corpus. We use SALSA, a German newspaper corpus manually annotated with role-semantic information based on frame semantics. Since the relatively small size of SALSA does not allow estimating the semantic relatedness in the extracted argument-predicate pairs, we use a larger corpus for ranking. Two experiments have been performed in order to evaluate the proposed approach. In the first experiment we compare automatically extracted argument-predicate relations with the gold standard formed from associations provided by human subjects.
In the second experiment we calculate the correlation between the automatic relatedness measure and the human ranking of the extracted relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"singh-ambati-2010-integrated","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/48_Paper.pdf","title":"An Integrated Digital Tool for Accessing Language Resources","abstract":"Language resources can be classified under several categories. To be able to query and operate on all (or most of) these categories using a single digital tool would be very helpful for a large number of researchers working on languages. We describe such a tool in this paper. It is different from other such tools in that it allows querying and transformation on different kinds of resources (such as corpora, lexicon and language models) with the same framework. Search options can be given based on the kind of resource being queried. It is possible to select a matched resource and open it for editing in the specialized interfaces with which that resource is associated. The tool also allows the extracted or modified data to be saved separately, apart from having the usual facilities like displaying the results in KeyWord-In-Context (KWIC) format. We also present the notation used for querying and transformation, which is comparable to but different from the Corpus Query Language (CQL).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"friedberg-2011-turn","url":"https:\/\/aclanthology.org\/P11-3017","title":"Turn-Taking Cues in a Human Tutoring Corpus","abstract":"Most spoken dialogue systems are still lacking in their ability to accurately model the complex process that is human turn-taking. This research analyzes a human-human tutoring corpus in order to identify prosodic turn-taking cues, with the hopes that they can be used by intelligent tutoring systems to predict student turn boundaries. Results show that while there was variation between subjects, three features were significant turn-yielding cues overall. In addition, a positive relationship between the number of cues present and the probability of a turn yield was demonstrated.","label_nlp4sg":1,"task":["identify prosodic turn - taking cues"],"method":["Analysis"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the NSF (#0631930). I would like to thank Diane Litman, my advisor, Scott Silliman, for software assistance, Joanna Drummond, for many helpful comments on this paper, and the ITSPOKE research group for their feedback on my work.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"saint-dizier-1988-default","url":"https:\/\/aclanthology.org\/C88-2117","title":"Default Logic, Natural Language and Generalized Quantifiers","abstract":"The use of default logic to represent various linguistic constructions is explored in this paper.
Default logic is then integrated into a theory of natural language semantics, namely Generalized Quantifiers. Finally, properties of interest to the AI community such as the characterization of truth persistence and some inferential patterns are emphasized.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I am very grateful to Philippe Besnard, Mario Borillo and Jim Delgrande for their useful comments on this work. I also thank Dag Westerst\u00e5hl for providing me several of his publications mentioned below. This work was supported by the French INRIA and the PRC CNRS Communication Homme-machine, both civilian public institutions.","year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ohtani-kurafuji-2011-quantification","url":"https:\/\/aclanthology.org\/Y11-1005","title":"Quantification and the Garden Path Effect Reduction: The Case of Universally Quantified Subjects","abstract":"This paper investigates the effect of quantification in sentence processing. The experimental results show that temporarily ambiguous sentences that begin with the universally quantified NPs reduced the garden path effect in contrast to the ones that begin with bare NPs. This fact is accounted for by assuming that discourse representation structures are incrementally constructed, and a tripartite structure introduced by the universal quantifier gives room for temporary ambiguity, while a single box associated with a bare NP forces one interpretation; to get the correct interpretation, the single box must be rewritten, which results in the garden path effect.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shen-etal-2021-reservoir","url":"https:\/\/aclanthology.org\/2021.acl-long.331","title":"Reservoir Transformers","abstract":"We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated. Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear \"reservoir\" layers interspersed with regular transformer layers, and show improvements in wall-clock compute time until convergence, as well as overall performance, on various machine translation and (masked) language modelling tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Eric Wallace, Zhewei Yao, Kevin Lin, Zhiqing Sun, Zhuohan Li, Angela Fan, Shaojie Bai, and anonymous reviewers for their comments and suggestions. SS and KK were supported by grants from Samsung, Facebook, and the Berkeley Deep Drive Consortium.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fort-etal-2018-fingers","url":"https:\/\/aclanthology.org\/W18-4923","title":"``Fingers in the Nose'': Evaluating Speakers' Identification of Multi-Word Expressions Using a Slightly Gamified Crowdsourcing Platform","abstract":"This article presents the results we obtained in crowdsourcing French speakers' intuition concerning multi-word expressions (MWEs).
We developed a slightly gamified crowdsourcing platform, part of which is designed to test users' ability to identify MWEs with no prior training. The participants perform relatively well at the task, with a recall reaching 65% for MWEs that do not behave as function words.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"caselli-etal-2021-dalc","url":"https:\/\/aclanthology.org\/2021.woah-1.6","title":"DALC: the Dutch Abusive Language Corpus","abstract":"As socially unacceptable language becomes pervasive in social media platforms, the need for automatic content moderation becomes more pressing. This contribution introduces the Dutch Abusive Language Corpus (DALC v1.0), a new dataset with tweets manually annotated for abusive language. The resource addresses a gap in language resources for Dutch and adopts a multi-layer annotation scheme modeling the explicitness and the target of the abusive messages. Baseline experiments on all annotation layers have been conducted, achieving a macro F1 score of 0.748 for binary classification of the explicitness layer and 0.489 for target classification.","label_nlp4sg":1,"task":["Data collection"],"method":["annotation scheme"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"cohan-goharian-2016-revisiting","url":"https:\/\/aclanthology.org\/L16-1130","title":"Revisiting Summarization Evaluation for Scientific Articles","abstract":"Evaluation of text summarization approaches has been mostly based on metrics that measure similarities of system generated summaries with a set of human written gold-standard summaries. The most widely used metric in summarization evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps between the terms and phrases in the sentences; therefore, in cases of terminology variations and paraphrasing, ROUGE is not as effective. Scientific article summarization is one such case that is different from general domain summarization (e.g. newswire data). We provide an extensive analysis of ROUGE's effectiveness as an evaluation metric for scientific summarization; we show that, contrary to the common belief, ROUGE is not very reliable in evaluating scientific summaries. We furthermore show how different variants of ROUGE result in very different correlations with the manual Pyramid scores. Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system generated summary and the corresponding human written summaries. We call our metric SERA (Summarization Evaluation by Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations with manual scores which shows its effectiveness in evaluation of scientific article summarization.","label_nlp4sg":1,"task":["Summarization Evaluation for Scientific Articles"],"method":["extensive analysis","alternative metric"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We would like to thank all three anonymous reviewers for their feedback and comments, and Maryam Iranmanesh for helping in annotation.
This work was partially supported by National Science Foundation (NSF) through grant CNS-1204347.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kotani-etal-2005-useful","url":"https:\/\/aclanthology.org\/2005.mtsummit-posters.14","title":"A Useful-based Evaluation of Reading Support Systems: Comprehension, Reading Speed and Effective Speed","abstract":"This paper reports the result of our experiment, the aim of which is to examine the efficiency of reading support systems such as a sentence-machine translation system, a word-machine translation system, and so on. Our evaluation method used in the experiment is able to handle the different reading support systems by assessing the usability of the systems, i.e., comprehension, reading speed, and effective speed. The result shows that the reading-speed procedure is able to evaluate the support systems as well as the comprehension-based procedure proposed by Ohguro (1993) and Fuji et al. (2001).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kamalloo-etal-2022-chosen","url":"https:\/\/aclanthology.org\/2022.findings-acl.84","title":"When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation","abstract":"Data Augmentation (DA) is known to improve the generalizability of deep neural networks. Most existing DA techniques naively add a certain number of augmented samples without considering the quality and the added computational cost of these samples. To tackle this problem, a common strategy, adopted by several state-of-the-art DA methods, is to adaptively generate or re-weight augmented samples with respect to the task objective during training. However, these adaptive DA methods: (1) are computationally expensive and not sample-efficient, and (2) are designed merely for a specific setting. In this work, we present a universal DA technique, called Glitter, to overcome both issues. Glitter can be plugged into any DA method, making training sample-efficient without sacrificing performance. From a pre-generated pool of augmented samples, Glitter adaptively selects a subset of worst-case samples with maximal loss, analogous to adversarial DA. Without altering the training strategy, the task objective can be optimized on the selected subset. Our thorough experiments on the GLUE benchmark, SQuAD, and HellaSwag in three widely used training setups including consistency training, self-distillation and knowledge distillation reveal that Glitter is substantially faster to train and achieves a competitive performance, compared to strong baselines.
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"habash-dorr-2001-large","url":"https:\/\/aclanthology.org\/2001.mtsummit-papers.26","title":"Large scale language independent generation using thematic hierarchies","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"leal-etal-2014-cisuc","url":"https:\/\/aclanthology.org\/S14-2025","title":"CISUC-KIS: Tackling Message Polarity Classification with a Large and Diverse Set of Features","abstract":"This paper presents the approach of the CISUC-KIS team to the SemEval 2014 task on Sentiment Analysis in Twitter, more precisely subtask B-Message Polarity Classification. We followed a machine learning approach where a SVM classifier was trained from a large and diverse set of features that included lexical, syntactic, sentiment and semantic-based aspects. This led to very interesting results which, in different datasets, put us always in the top-7 scores, including second position in the LiveJournal2014 dataset. This work is licenced under a Creative Commons Attribution 4.0 International License.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the iCIS project (CENTRO-07-ST24-FEDER-002003), cofinanced by QREN, in the scope of the Mais Centro Program and European Union's FEDER.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"weller-etal-2015-target","url":"https:\/\/aclanthology.org\/W15-4923","title":"Target-Side Generation of Prepositions for SMT","abstract":"We present a translation system that models the selection of prepositions in a targetside generation component. This novel approach allows the modeling of all subcategorized elements of a verb as either NPs or PPs according to target-side requirements relying on source and target side features. The BLEU scores are encouraging, but fail to surpass the baseline. We additionally evaluate the preposition accuracy for a carefully selected subset and discuss how typical problems of translating prepositions can be modeled with our method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644402, the DFG grants Distributional Approaches to Semantic Relatedness and Models of Morphosyntax for Statistical Machine Translation and a DFG Heisenberg Fellowship.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hong-etal-2021-avocado","url":"https:\/\/aclanthology.org\/2021.emnlp-main.385","title":"AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain","abstract":"During the fine-tuning phase of transfer learning, the pretrained vocabulary remains unchanged, while model parameters are updated. 
The vocabulary generated based on the pretrained data is suboptimal for downstream data when domain discrepancy exists. We propose to consider the vocabulary as an optimizable parameter, allowing us to update the vocabulary by expanding it with domain-specific vocabulary based on a tokenization statistic. Furthermore, we prevent the embeddings of the added words from overfitting to downstream data by utilizing knowledge learned from a pretrained language model with a regularization term. Our method achieved consistent performance improvements on diverse domains (i.e., biomedical, computer science, news, and reviews).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"feng-etal-2011-learning","url":"https:\/\/aclanthology.org\/D11-1101","title":"Learning General Connotation of Words using Graph-based Algorithms","abstract":"In this paper, we introduce a connotation lexicon, a new type of lexicon that lists words with connotative polarity, i.e., words with positive connotation (e.g., award, promotion) and words with negative connotation (e.g., cancer, war). Connotation lexicons differ from much studied sentiment lexicons: the latter concerns words that express sentiment, while the former concerns words that evoke or associate with a specific polarity of sentiment. Understanding the connotation of words would seem to require common sense and world knowledge. However, we demonstrate that much of the connotative polarity of words can be inferred from natural language text in a nearly unsupervised manner. The key linguistic insight behind our approach is selectional preference of connotative predicates. We present graph-based algorithms using PageRank and HITS that collectively learn connotation lexicon together with connotative predicates. Our empirical study demonstrates that the resulting connotation lexicon is of great value for sentiment analysis complementing existing sentiment lexicons.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wholeheartedly thank the reviewers for very helpful and insightful comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wedekind-1996-inference","url":"https:\/\/aclanthology.org\/C96-2165","title":"On Inference-Based Procedures for Lexical Disambiguation","abstract":"In this paper we sketch a decidable inference-based procedure for lexical disambiguation which operates on semantic representations of discourse and conceptual knowledge. In contrast to other approaches which use a classical logic for the disambiguating inferences and run into decidability problems, we argue on the basis of empirical evidence that the underlying inference mechanism has to be essentially incomplete in order to be (cognitively) adequate.
Since our conceptual knowledge can be represented in a rather restricted representation language, it is then possible to show that the restrictions satisfied by the conceptual knowledge and the inferences ensure in an empirically adequate way the decidability of the problem, although a fully expressive language is used to represent discourse.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Ede Zimmermann and Hans Kamp for useful discussions and to the anonymous reviewers for comments.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hemati-etal-2016-textimager","url":"https:\/\/aclanthology.org\/C16-2013","title":"TextImager: a Distributed UIMA-based System for NLP","abstract":"More and more disciplines require NLP tools for performing automatic text analyses on various levels of linguistic resolution. However, the usage of established NLP frameworks is often hampered for several reasons: in most cases, they require basic to sophisticated programming skills, interfere with interoperability due to using non-standard I\/O-formats and often lack tools for visualizing computational results. This makes it difficult especially for humanities scholars to use such frameworks. In order to cope with these challenges, we present TextImager, a UIMA-based framework that offers a range of NLP and visualization tools by means of a user-friendly GUI. Using TextImager requires no programming skills.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge financial support of this project via the BMBF Project CEDIFOR (https:\/\/www.cedifor.de\/en\/).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dong-etal-2021-discourse","url":"https:\/\/aclanthology.org\/2021.eacl-main.93","title":"Discourse-Aware Unsupervised Summarization for Long Scientific Documents","abstract":"We propose an unsupervised graph-based ranking model for extractive summarization of long scientific documents. Our method assumes a two-level hierarchical graph representation of the source document, and exploits asymmetrical positional cues to determine sentence importance. Results on the PubMed and arXiv datasets show that our approach outperforms strong unsupervised baselines by wide margins in automatic metrics and human evaluation. In addition, it achieves performance comparable to many state-of-the-art supervised approaches which are trained on hundreds of thousands of examples. These results suggest that patterns in the discourse structure are a strong signal for determining importance in scientific articles. Link to our code: https:\/\/github.com\/mirandrom\/HipoRank.","label_nlp4sg":1,"task":["Summarization for Long Scientific Documents"],"method":["unsupervised graph - based ranking model"],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"This work is supported by the Natural Sciences and Engineering Research Council of Canada, Compute Canada, and the CIFAR Canada AI Chair program.
We would like to thank Hao Zheng, Wen Xiao, and Sandeep Subramanian for useful discussions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rowe-1994-statistical","url":"https:\/\/aclanthology.org\/W94-0114","title":"Statistical versus Symbolic Parsing for Captioned-Information Retrieval","abstract":"We discuss implementation issues of MARIE-1, a mostly symbolic parser fully implemented, and MARIE-2, a more statistical parser partially implemented. They address a corpus of 100,000 picture captions. We argue that the mixed approach of MARIE-2 should be better for this corpus because its algorithms (not data) are simpler.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mittal-etal-2021-think","url":"https:\/\/aclanthology.org\/2021.emnlp-main.789","title":"``So You Think You're Funny?'': Rating the Humour Quotient in Standup Comedy","abstract":"Computational Humour (CH) has attracted the interest of Natural Language Processing and Computational Linguistics communities. Creating datasets for automatic measurement of humour quotient is difficult due to multiple possible interpretations of the content. In this work, we create a multi-modal humour-annotated dataset (\u223c40 hours) using stand-up comedy clips. We devise a novel scoring mechanism to annotate the training data with a humour quotient score using the audience's laughter. The normalized duration (laughter duration divided by the clip duration) of laughter in each clip is used to compute this humour coefficient score on a five-point scale (0-4). This method of scoring is validated by comparing with manually annotated scores, wherein a quadratic weighted kappa of 0.6 is obtained. We use this dataset to train a model that provides a \"funniness\" score, on a five-point scale, given the audio and its corresponding text. We compare various neural language models for the task of humour-rating and achieve an accuracy of 0.813 in terms of Quadratic Weighted Kappa (QWK). Our \"Open Mic\" dataset is released for further research along with the code.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"dalan-sharoff-2016-genre","url":"https:\/\/aclanthology.org\/W16-2611","title":"Genre classification for a corpus of academic webpages","abstract":"In this paper we report our analysis of the similarities between webpages that are crawled from European academic websites, and a comparison of their distribution in terms of the English language variety (native English vs English as a lingua franca) and their language family (based on the country's official language). After building a corpus of university webpages, we selected a set of relevant descriptors that can represent their text types using the framework of the Functional Text Dimensions. Manual annotation of a random sample of academic pages provides the basis for classifying the remaining texts on each dimension.
Reliable thresholds are then determined in order to evaluate precision and assess the distribution of text types by each dimension, with the ultimate goal of analysing language features over English varieties and language families.","label_nlp4sg":1,"task":["Genre classification"],"method":["analysis","Manual annotation"],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"We are grateful to Silvia Bernardini for her extensive comments on the earlier drafts.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vashisth-etal-2019-exploring","url":"https:\/\/aclanthology.org\/W19-5037","title":"Exploring Diachronic Changes of Biomedical Knowledge using Distributed Concept Representations","abstract":"In research, best practices can change over time as new discoveries are made and novel methods are implemented. Scientific publications reporting the latest facts and the current state of the art may become outdated after some years or even be proved false. A publication usually sheds light only on the knowledge of the period in which it was published. Thus, the aspect of time can play an essential role in the reliability of the presented information. In Natural Language Processing many methods focus on information extraction from text, such as detecting entities and their relationship to each other. Those methods mostly focus on the facts presented in the text itself and not on the aspects of knowledge which change over time. This work instead examines the evolution in biomedical knowledge over time using scientific literature in terms of diachronic change. Mainly the usage of temporal and distributional concept representations is explored and evaluated by a proof-of-concept.","label_nlp4sg":1,"task":["Exploring Diachronic Changes of Biomedical Knowledge"],"method":["Distributed Concept Representations"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 780495 (BigMedilytics). In addition to that we would like to thank our colleagues for their feedback and suggestions.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kozareva-2013-multilingual","url":"https:\/\/aclanthology.org\/P13-1067","title":"Multilingual Affect Polarity and Valence Prediction in Metaphor-Rich Texts","abstract":"Metaphor is an important way of conveying the affect of people, hence understanding how people use metaphors to convey affect is important for the communication between individuals and increases cohesion if the perceived affect of the concrete example is the same for the two individuals. Therefore, building computational models that can automatically identify the affect in metaphor-rich texts like \"The team captain is a rock.\", \"Time is money.\", \"My lawyer is a shark.\" is an important and challenging problem, which has been of great interest to the research community. To solve this task, we have collected and manually annotated the affect of metaphor-rich texts for four languages. We present novel algorithms that integrate triggers for cognitive, affective, perceptual and social processes with stylistic and lexical information.
By running evaluations on datasets in English, Spanish, Russian and Farsi, we show that the developed affect polarity and valence prediction technology of metaphor-rich texts is portable and works equally well for different languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank the reviewers for their helpful comments as well as the LCC annotators who have prepared the data and made this work possible. This research is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Army Research Laboratory contract number W911NF-12-C-0025. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD\/ARL, or the U.S. Government.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mesquita-etal-2013-effectiveness","url":"https:\/\/aclanthology.org\/D13-1043","title":"Effectiveness and Efficiency of Open Relation Extraction","abstract":"A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from \"shallow\" (e.g., part-of-speech tagging) to \"deep\" (e.g., semantic role labeling-SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for useful suggestions to improve the paper, and the Natural Sciences and Engineering Council of Canada, through the NSERC Business Intelligence Network, for financial support.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"mehrotra-etal-2016-deconstructing","url":"https:\/\/aclanthology.org\/N16-1073","title":"Deconstructing Complex Search Tasks: a Bayesian Nonparametric Approach for Extracting Sub-tasks","abstract":"Search tasks, comprising a series of search queries serving a common informational need, have steadily emerged as accurate units for developing the next generation of task-aware web search systems. Most prior research in this area has focused on segmenting chronologically ordered search queries into higher level tasks. A more naturalistic viewpoint would involve treating query logs as convoluted structures of tasks-subtasks, with complex search tasks being decomposed into more focused sub-tasks. In this work, we focus on extracting sub-tasks from a given collection of on-task search queries. 
We jointly leverage insights from Bayesian nonparametrics and word embeddings to identify and extract sub-tasks from a given collection of on-task queries. Our proposed model can inform the design of the next generation of task-based search systems that leverage users' task behavior for better support and personalization.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by a Google Faculty Research Award.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rehbein-van-genabith-2006-german","url":"https:\/\/aclanthology.org\/W06-2109","title":"German Particle Verbs and Pleonastic Prepositions","abstract":"This paper discusses the behaviour of German particle verbs formed by two-way prepositions in combination with pleonastic PPs including the verb particle as a preposition. These particle verbs have a characteristic feature: some of them license directional prepositional phrases in the accusative, some only allow for locative PPs in the dative, and some particle verbs can occur with PPs in the accusative and in the dative. Directional particle verbs together with directional PPs present an additional problem: the particle and the preposition in the PP seem to provide redundant information. The paper gives an overview of the semantic verb classes influencing this phenomenon, based on corpus data, and explains the underlying reasons for the behaviour of the particle verbs. We also show how the restrictions on particle verbs and pleonastic PPs can be expressed in a grammar theory like Lexical Functional Grammar (LFG). 'She gets into the car.' (4) Sie steigt ein. She climb-3SG Part+DIR. 'She gets in.' (5) Sie steigt [PP in das Auto] ein. She gets [PP into+DIR Det car] Part+DIR.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"fan-etal-2019-strategies","url":"https:\/\/aclanthology.org\/P19-1254","title":"Strategies for Structuring Story Generation","abstract":"Writers often rely on plans or sketches to write long stories, but most current language models generate word by word from left to right. We explore coarse-to-fine models for creating narrative texts of several hundred words, and introduce new models which decompose stories by abstracting over actions and entities. The model first generates the predicate-argument structure of the text, where different mentions of the same entity are marked with placeholder tokens. It then generates a surface realization of the predicate-argument structure, and finally replaces the entity placeholders with context-sensitive names and references. Human judges prefer the stories from our models to a wide range of previous approaches to hierarchical text generation.
Extensive analysis shows that our methods can help improve the diversity and coherence of events and entities in generated stories.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"nzeyimana-niyongabo-rubungo-2022-kinyabert","url":"https:\/\/aclanthology.org\/2022.acl-long.367","title":"KinyaBERT: a Morphology-aware Kinyarwanda Language Model","abstract":"Pre-trained language models such as BERT have been successful at tackling many natural language processing tasks. However, the unsupervised sub-word tokenization methods commonly used in these models (e.g., byte-pair encoding-BPE) are sub-optimal at handling morphologically rich languages. Even given a morphological analyzer, naive sequencing of morphemes into a standard BERT architecture is inefficient at capturing morphological compositionality and expressing word-relative syntactic regularities. We address these challenges by proposing a simple yet effective two-tier BERT architecture that leverages a morphological analyzer and explicitly represents morphological compositionality. Despite the success of BERT, most of its evaluations have been conducted on high-resource languages, obscuring its applicability on low-resource languages. We evaluate our proposed method on the low-resource morphologically rich Kinyarwanda language, naming the proposed model architecture KinyaBERT. A robust set of experimental results reveals that KinyaBERT outperforms solid baselines by 2% in F1 score on a named entity recognition task and by 4.3% in average score of a machine-translated GLUE benchmark. KinyaBERT fine-tuning has better convergence and achieves more robust results on multiple tasks even in the presence of translation noise.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program and Google Cloud Research Credits with the award GCP19980904. We also thank the anonymous reviewers for their insightful feedback.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"haas-riezler-2016-corpus","url":"https:\/\/aclanthology.org\/N16-1088","title":"A Corpus and Semantic Parser for Multilingual Natural Language Querying of OpenStreetMap","abstract":"We present a corpus of 2,380 natural language queries paired with machine readable formulae that can be executed against world wide geographic data of the OpenStreetMap (OSM) database. We use the corpus to learn an accurate semantic parser that builds the basis of a natural language interface to OSM. Furthermore, we use response-based learning on parser feedback to adapt a statistical machine translation system for multilingual database access to OSM. Our framework allows us to map fuzzy natural language expressions such as \"nearby\", \"north of\", or \"in walking distance\" to spatial polygons on an interactive map.
Furthermore, it combines syntactic complexity and compositionality with a reasonable lexical variability of queries, making it an interesting new publicly available dataset for research on semantic parsing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the OSM developers Roland Olbricht and Martin Raifer for their support and for contributing a dataset of shared user queries. The research reported in this paper was supported in part by DFG grant RI-2221\/2-1 \"Grounding Statistical Machine Translation in Perception and Action\". www.cl.uni-heidelberg.de\/nlmaps, nlmaps.cl.uni-heidelberg.de","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"zhang-johnson-2003-robust","url":"https:\/\/aclanthology.org\/W03-0434","title":"A Robust Risk Minimization based Named Entity Recognition System","abstract":"This paper describes a robust linear classification system for Named Entity Recognition. A similar system has been applied to the CoNLL text chunking shared task with state of the art performance. By using different linguistic features, we can easily adapt this system to other token-based linguistic tagging problems. The main focus of the current paper is to investigate the impact of various local linguistic features for named entity recognition on the CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) shared task data. We show that the system performance can be enhanced significantly with some relatively simple token-based features that are available for many languages. Although more sophisticated linguistic features will also be helpful, they provide much less improvement than might be expected.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Radu Florian for preparing the German data and for providing additional German dictionaries that helped to achieve the performance presented in the paper.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"penn-munteanu-2003-tabulation","url":"https:\/\/aclanthology.org\/P03-1026","title":"A Tabulation-Based Parsing Method that Reduces Copying","abstract":"This paper presents a new bottom-up chart parsing algorithm for Prolog along with a compilation procedure that reduces the amount of copying at run-time to a constant number (2) per edge. It has applications to unification-based grammars with very large partially ordered categories, in which copying is expensive, and can facilitate the use of more sophisticated indexing strategies for retrieving such categories that may otherwise be overwhelmed by the cost of such copying. It also provides a new perspective on \"quick-checking\" and related heuristics, which seems to confirm that forcing an early failure (as opposed to seeking an early guarantee of success) is in fact the best approach to use.
A preliminary empirical evaluation of its performance is also provided.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ataman-federico-2018-evaluation","url":"https:\/\/aclanthology.org\/W18-1810","title":"An Evaluation of Two Vocabulary Reduction Methods for Neural Machine Translation","abstract":"Neural machine translation (NMT) models are conventionally trained with fixed-size vocabularies to control the computational complexity and the quality of the learned word representations. This, however, limits the accuracy and the generalization capability of the models, especially for morphologically-rich languages, which usually have very sparse vocabularies containing rare inflected or derived word forms. Some studies tried to overcome this problem by segmenting words into subword level representations and modeling translation at this level. However, recent findings have shown that if these methods interrupt the word structure during segmentation, they might cause semantic or syntactic losses and lead to generating inaccurate translations. In order to investigate this phenomenon, we present an extensive evaluation of two unsupervised vocabulary reduction methods in NMT. The first is the well-known byte-pair-encoding (BPE), a statistical subword segmentation method, whereas the second is linguistically-motivated vocabulary reduction (LMVR), a segmentation method which also considers morphological properties of subwords. We compare both approaches on ten translation directions involving English and five other languages (Arabic, Czech, German, Italian and Turkish), each representing a distinct language family and morphological typology. LMVR obtains significantly better performance in most languages, showing gains proportional to the sparseness of the vocabulary and the morphological complexity of the tested language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the EC-funded H2020 project QT21 (grant no. 645452).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"webber-1988-foreword","url":"https:\/\/aclanthology.org\/J88-2001","title":"Foreword to Special Issue on Tense and Aspect","abstract":"The phenomena of tense and aspect have long been of interest to linguists and philosophers. Linguists have tried to describe their interesting morphological, syntactic, and semantic properties in the various languages of the world, while philosophers have tried to characterize formally their truth conditions. (For some recent collections of papers, the reader is referred to Tedeschi and Zaenen 1981; Hopper 1982; Dahl 1985; and LoCasio and Vet 1985.) Recently, computational linguists have joined in the act, their interest being sparked by a desire to characterize--at the level of processing--how we understand and describe complex events in a changing world. Here, two kinds of questions converge--one concerning the problem of encoding event descriptions, the other to do with manipulating references to events.
In approaching the first question, researchers of all linguistic stripes (computational linguists, philosophers of language, psycholinguists, and linguists of the \"unmarked case\") have begun to turn their attention from how languages convey information about individuals (or sets of individuals) and their properties to how they convey information about events and situations changing over time. In approaching the second question, computational linguists have become interested in developing systems that can converse with users about events and situations (e.g., for planning) or can process accounts of events and situations (e.g., for summarizing and\/or integrating messages). Last year, following the appearance of a number of papers on this topic at the 1987 Conference of the Association for Computational Linguistics at Stanford, it was suggested that a special issue of Computational Linguistics should be devoted to the topic of tense and aspect, in order to examine what appeared to be an emerging consensus on these questions within the computational-linguistics community. This issue is the result of that suggestion, and many of the papers collected below constitute extensions of the papers presented at the Stanford meeting.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-etal-2002-structure","url":"https:\/\/aclanthology.org\/C02-1010","title":"Structure Alignment Using Bilingual Chunking","abstract":"A new statistical method called \"bilingual chunking\" for structure alignment is proposed. Different from the existing approaches which align hierarchical structures like sub-trees, our method conducts alignment on chunks. The alignment is performed through a simultaneous bilingual chunking algorithm. Using the constraints of chunk correspondence between source language (SL) and target language (TL), our algorithm can dramatically reduce the search space, support a time-synchronous DP algorithm, and lead to highly consistent chunking. Furthermore, by unifying the POS tagging and chunking in the search process, our algorithm effectively alleviates the influence of POS tagging deficiencies on the chunking result. The experimental results with English-Chinese structure alignment show that our model can produce 90% in precision for chunking, and 87% in precision for chunk alignment. (This work was done while the author was visiting Microsoft Research Asia.) In this paper, we take English-Chinese parallel text as an example; it is relatively easy, however, to extend the method to other language pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"li-etal-2006-mining","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/297_pdf.pdf","title":"Mining Implicit Entities in Queries","abstract":"Entities are pivotal in describing events and objects, and also very important in Document Summarization. In general only explicit entities which can be extracted by a Named Entity Recognizer are used in real applications. However, implicit entities hidden behind the phrases or words, e.g.
the entity referred to by the phrase \"cross border\", have proved to be helpful in Document Summarization.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"vasilescu-etal-2004-evaluating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/219.pdf","title":"Evaluating Variants of the Lesk Approach for Disambiguating Words","abstract":"This paper presents a detailed analysis of the factors determining the performance of Lesk-based word sense disambiguation methods. We conducted a series of experiments on the original Lesk algorithm, adapted to WORDNET, and on some variants. These methods were evaluated on the test corpus from SENSEVAL2, English All Words, and on excerpts from SEMCOR. We designed a fine-grained analysis of the answers provided by each variant in order to understand the algorithms better than mere precision and recall figures would allow.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"pighin-moschitti-2009-efficient","url":"https:\/\/aclanthology.org\/W09-1106","title":"Efficient Linearization of Tree Kernel Functions","abstract":"The combination of Support Vector Machines with very high dimensional kernels, such as string or tree kernels, suffers from two major drawbacks: first, the implicit representation of feature spaces does not allow us to understand which features actually triggered the generalization; second, the resulting computational burden may in some cases render it unfeasible to use large data sets for training. We propose an approach based on feature space reverse engineering to tackle both problems. Our experiments with Tree Kernels on a Semantic Role Labeling data set show that the proposed approach can drastically reduce the computational footprint while yielding almost unaffected accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"finzel-etal-2021-conversational","url":"https:\/\/aclanthology.org\/2021.eacl-demos.38","title":"Conversational Agent for Daily Living Assessment Coaching Demo","abstract":"Conversational Agent for Daily Living Assessment Coaching (CADLAC) is a multi-modal conversational agent system designed to impersonate \"individuals\" with various levels of ability in activities of daily living (ADLs: e.g., dressing, bathing, mobility, etc.) for use in training professional assessors how to conduct interviews to determine one's level of functioning. The system is implemented on the MindMeld platform for conversational AI and features a Bidirectional Long Short-Term Memory topic tracker that allows the agent to navigate conversations spanning 18 different ADL domains, a dialogue manager that interfaces with a database of over 10,000 historical ADL assessments, a rule-based Natural Language Generation (NLG) module, and a pre-trained open-domain conversational sub-agent (based on GPT-2) for handling conversation turns outside of the 18 ADL domains.
CADLAC is delivered via state-of-the-art web frameworks to handle multiple conversations and users simultaneously and is enabled with a voice interface. The paper includes a description of the system design and evaluation of individual components, followed by a brief discussion of current limitations and next steps.","label_nlp4sg":1,"task":["Daily Living Assessment Coaching"],"method":["Conversational Agent","Bidirectional Long Short - Term Memory","GPT - 2"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"piperidis-etal-2014-meta","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/786_Paper.pdf","title":"META-SHARE: One year after","abstract":"This paper presents META-SHARE (www.meta-share.eu), an open language resource infrastructure, and its usage since its Europe-wide deployment in early 2013. META-SHARE is a network of repositories that store language resources (data, tools and processing services) documented with high-quality metadata, aggregated in central inventories allowing for uniform search and access. META-SHARE was developed by META-NET (www.meta-net.eu) and aims to serve as an important component of a language technology marketplace for researchers, developers, professionals and industrial players, catering for the full development cycle of language technology, from research through to innovative products and services. The observed usage in its initial steps, the steadily increasing number of network nodes, resources, users, queries, views and downloads are all encouraging and considered as supportive of the choices made so far. In tandem, take-up activities like direct linking and processing of datasets by language processing services as well as metadata transformation to RDF are expected to open new avenues for data and resources linking and boost the organic growth of the infrastructure while facilitating language technology deployment by much wider research communities and industrial sectors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to all members of META-NET for their feedback and support. We are especially thankful to the META-SHARE working groups, the implementation team, and the representatives of the service providing and depositing organisations: Tamas ","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hong-etal-2010-using","url":"https:\/\/aclanthology.org\/Y10-1045","title":"Using Corpus-based Linguistic Approaches in Sense Prediction Study","abstract":"In this study, we propose to use two corpus-based linguistic approaches for a sense prediction study. We will concentrate on the character similarity clustering approach and the concept similarity clustering approach to predict the senses of non-assigned words by using corpora and tools such as the Chinese Gigaword Corpus and HowNet. We would then like to evaluate their predictions via the sense divisions of Chinese Wordnet and Xiandai Hanyu Cidian.
Using these corpora, we will determine the clusters of our four target words: chi1 \"eat\", wan2 \"play\", huan4 \"change\" and shao1 \"burn\", in order to predict all their possible senses and evaluate them. This will demonstrate the viability of the corpus-based approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"seneff-polifroni-2001-hypothesis","url":"https:\/\/aclanthology.org\/H01-1032","title":"Hypothesis Selection and Resolution in the Mercury Flight Reservation System","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"ahiladas-etal-2015-ruchi","url":"https:\/\/aclanthology.org\/W15-5932","title":"Ruchi: Rating Individual Food Items in Restaurant Reviews","abstract":"Restaurant recommendation systems are capable of recommending restaurants based on various aspects such as location, facilities and price range. There exists some research that implements restaurant recommendation systems, as well as some famous online recommendation systems such as Yelp. However, automatically rating individual food items of a restaurant based on online customer reviews is an area that has not received much attention. This paper presents Ruchi, a system capable of rating individual food items in restaurants. Ruchi makes use of Named Entity Recognition (NER) techniques to identify food names in restaurant reviews. A typed dependency technique is used to identify opinions associated with different food names in a single sentence; it was thus possible to carry out entity-level sentiment analysis to rate individual food items instead of the sentence-level sentiment analysis done by previous research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"tinlap-1975-theoretical-issues","url":"https:\/\/aclanthology.org\/T75-2000","title":"Theoretical Issues in Natural Language Processing","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1975,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"peris-etal-2012-empirical","url":"https:\/\/aclanthology.org\/J12-4005","title":"Empirical Methods for the Study of Denotation in Nominalizations in Spanish","abstract":"This article deals with deverbal nominalizations in Spanish; concretely, we focus on the denotative distinction between event and result nominalizations. The goal of this work is twofold: first, to detect the most relevant features for this denotative distinction; and, second, to build an automatic classification system of deverbal nominalizations according to their denotation.
We have based our study on theoretical hypotheses dealing with this semantic distinction and we have analyzed them empirically by means of Machine Learning techniques which are the basis of the ADN-Classifier. This is the first tool that aims to automatically classify deverbal nominalizations into event, result, or underspecified denotation types in Spanish. The ADN-Classifier has helped us to quantitatively evaluate the validity of our claims regarding deverbal nominalizations. We set up a series of experiments in order to test the ADN-Classifier with different models and in different realistic scenarios depending on the knowledge resources and natural language processors available. The ADN-Classifier achieved good results (87.20% accuracy).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Maria Ant\u00f2nia Mart\u00ed and Marta Recasens for their helpful advice and to David Bridgewater for the proofreading of English. We would also like to express our gratitude to the three anonymous reviewers for their comments and suggestions to improve this article. This work was partly supported by the projects Araknion (FFI2010-114774-E), Know2 (TIN2009-14715-C04-04), and TEXT-MESS 2.0 (TIN2009-13391-C04-04) from the Spanish Ministry of Science and Innovation, and by a FPU grant (AP2007-01028) from the Spanish Ministry of Education.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"al-sallab-etal-2014-automatic","url":"https:\/\/aclanthology.org\/W14-3608","title":"Automatic Arabic diacritics restoration based on deep nets","abstract":"In this paper, the Arabic diacritics restoration problem is tackled under the deep learning framework, presenting the Confused Subset Resolution (CSR) method to improve the classification accuracy, in addition to an Arabic Part-of-Speech (PoS) tagging framework using deep neural nets. Special focus is given to syntactic diacritization, which still suffers from low accuracy as indicated by related work. Evaluation is done versus state-of-the-art systems reported in the literature, with quite challenging datasets collected from different domains. Standard datasets like the LDC Arabic Tree Bank are used, in addition to custom ones available online for results replication. Results show significant improvement of the proposed techniques over other approaches, reducing the syntactic classification error to 9.9% and the morphological classification error to 3%, compared to 12.7% and 3.8% for the best results reported in the literature, an improvement of 22% over the best reported systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"branavan-etal-2009-reinforcement","url":"https:\/\/aclanthology.org\/P09-1010","title":"Reinforcement Learning for Mapping Instructions to Actions","abstract":"In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward.
We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains: Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, grant IIS-0835652, and a Graduate Research Fellowship) and the ONR. Thanks to Michael Collins, Amir Globerson, Tommi Jaakkola, Leslie Pack Kaelbling, Dina Katabi, Martin Rinard, and members of the MIT NLP group for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shapira-etal-2019-crowdsourcing","url":"https:\/\/aclanthology.org\/N19-1072","title":"Crowdsourcing Lightweight Pyramids for Manual Summary Evaluation","abstract":"Conducting a manual evaluation is considered an essential part of summary evaluation methodology. Traditionally, the Pyramid protocol, which exhaustively compares system summaries to references, has been perceived as very reliable, providing objective scores. Yet, due to the high cost of the Pyramid method and the required expertise, researchers resorted to cheaper and less thorough manual evaluation methods, such as Responsiveness and pairwise comparison, attainable via crowdsourcing. We revisit the Pyramid approach, proposing a lightweight sampling-based version that is crowdsourcable. We analyze the performance of our method in comparison to original expert-based Pyramid evaluations, showing higher correlation relative to the common Responsiveness method. We release our crowdsourced Summary-Content-Units, along with all crowdsourcing scripts, for future evaluations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their constructive comments, as well as Ani Nenkova for her helpful remarks.
This work was supported in part by the Bloomberg Data Science Research Grant Program; by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grants DA 1600\/1-1 and GU 798\/17-1); by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office; by the Israel Science Foundation (grants 1157\/16 and 1951\/17); by DARPA Young Faculty Award YFA17-D17AP00022; and by the ArguAna Project GU 798\/20-1 (DFG).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"micklesen-smith-1963-algorithm","url":"https:\/\/aclanthology.org\/1963.earlymt-1.25","title":"An algorithm for the translation of Russian inorganic-chemistry terms","abstract":null,"label_nlp4sg":1,"task":[],"method":[],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":1963,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"medina-maza-etal-2020-event","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.344","title":"Event-Related Bias Removal for Real-time Disaster Events","abstract":"Social media has become an important tool to share information about crisis events such as natural disasters and mass attacks. Detecting actionable posts that contain useful information requires rapid analysis of huge volumes of data in real time. This poses a complex problem due to the large number of posts that do not contain any actionable information. Furthermore, the classification of information in real-time systems requires training on out-of-domain data, as we do not have any data from a new emerging crisis. Prior work focuses on models pre-trained on similar event types. However, those models capture unnecessary event-specific biases, like the location of the event, which affect the generalizability and performance of the classifiers on new unseen data from an emerging new event. In our work, we train an adversarial neural model to remove latent event-specific biases and improve the performance on tweet importance classification.","label_nlp4sg":1,"task":["Event - Related Bias Removal"],"method":["adversarial neural model"],"goal1":"Sustainable Cities and Communities","goal2":null,"goal3":null,"acknowledgments":"This research was partially supported by DARPA grant no HR001117S0017-World-Mod-FP-036 funded under the World Modelers program, as well as by the financial assistance award 60NANB17D156 from U.S. Department of Commerce, National Institute of Standards and Technology (NIST). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation herein. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NIST, DOI\/IBC, or the U.S.
Government.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":1,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"abend-rappoport-2017-state","url":"https:\/\/aclanthology.org\/P17-1008","title":"The State of the Art in Semantic Representation","abstract":"Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements. We thank Nathan Schneider for his helpful comments. The work was support by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"cote-1998-system","url":"https:\/\/link.springer.com\/chapter\/10.1007\/3-540-49478-2_44","title":"System description\/demo of Alis Translation Solutions: overview","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-etal-2021-monotonic","url":"https:\/\/aclanthology.org\/2021.naloma-1.5","title":"Monotonic Inference for Underspecified Episodic Logic","abstract":"We present a method of making natural logic inferences from Unscoped Logical Form of Episodic Logic. We establish a correspondence between inference rules of scoperesolved Episodic Logic and the natural logic treatment by S\u00e1nchez Valencia (1991a), and hence demonstrate the ability to handle foundational natural logic inferences from prior literature as well as more general nested monotonicity inferences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by NSF EAGER grant NSF IIS-1908595, DARPA CwC subcontract W911NF-15-1-0542, and a Sproull Graduate Fellowship from the University of Rochester. We are grateful to Hannah An, Sapphire Becker and the anonymous reviewers for their helpful feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"navarretta-2004-algorithm","url":"https:\/\/aclanthology.org\/W04-0713","title":"An Algorithm for Resolving Individual and Abstract Anaphora in Danish Texts and Dialogues","abstract":"This paper describes the dar-algorithm for resolving intersentential pronominal anaphors referring to individual and abstract entities in Danish texts and dialogues. Individual entities are resolved combining models which identify high degree of salience with high degree of givenness (topicality) of entities in the hearer's cognitive model, e.g. 
(Grosz et al., 1995), with Haji\u010dov\u00e1 et al.'s (1990) salience account, which assigns the highest degree of salience to entities in the focal part of an utterance in Information Structure terms. These focal entities often introduce new information in discourse. Anaphors referring to abstract entities are resolved with an extension of the algorithm presented by Eckert and Strube (2000). Manual tests of the dar-algorithm and other well-known resolution algorithms on the same data show that dar performs significantly better on most types of anaphor.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hotate-etal-2019-controlling","url":"https:\/\/aclanthology.org\/P19-2020","title":"Controlling Grammatical Error Correction Using Word Edit Rate","abstract":"When professional English teachers correct grammatically erroneous sentences written by English learners, they use various methods. The correction method depends on how many corrections a learner requires. In this paper, we propose a method for neural grammatical error correction (GEC) that can control the degree of correction. We show that it is possible to actually control the degree of GEC by using new training data annotated with word edit rate. Thereby, diverse corrected sentences are obtained from a single erroneous sentence. Moreover, compared to a GEC model that does not use information on the degree of correction, the proposed method improves correction accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Yangyang Xi of Lang-8, Inc. for kindly allowing us to use the Lang-8 learner corpus.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bajaj-etal-2022-evaluating","url":"https:\/\/aclanthology.org\/2022.insights-1.11","title":"Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the UMLS Metathesaurus Using Siamese Networks","abstract":"Recent work uses a Siamese Network, initialized with BioWordVec embeddings (distributed word embeddings), for predicting synonymy among biomedical terms to automate a part of the UMLS (Unified Medical Language System) Metathesaurus construction process. We evaluate the use of contextualized word embeddings extracted from nine different biomedical BERT-based models for synonymy prediction in the UMLS by replacing BioWordVec embeddings with embeddings extracted from each biomedical BERT model using different feature extraction methods. Surprisingly, we find that Siamese Networks initialized with BioWordVec embeddings still outperform the Siamese Networks initialized with embeddings extracted from the biomedical BERT models.","label_nlp4sg":1,"task":["Vocabulary Alignment"],"method":["Word Embeddings","BERT - based models","Siamese Networks"],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors thank Liu et al.
for providing the additional pretrained SapBERT models (Liu et al., 2021) and a cooperative AI Institute grant (AI-EDGE) from the National Science Foundation under CNS-2112471.","year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"popel-etal-2011-influence","url":"https:\/\/aclanthology.org\/W11-2153","title":"Influence of Parser Choice on Dependency-Based MT","abstract":"Accuracy of dependency parsers is one of the key factors limiting the quality of dependency-based machine translation. This paper deals with the influence of various dependency parsing approaches (and also different training data size) on the overall performance of an English-to-Czech dependency-based statistical translation system implemented in the Treex framework. We also study the relationship between parsing accuracy in terms of unlabeled attachment score and machine translation quality in terms of BLEU.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2010-using","url":"https:\/\/aclanthology.org\/W10-2416","title":"Using Deep Belief Nets for Chinese Named Entity Categorization","abstract":"Identifying named entities is essential in understanding plain texts. Moreover, the categories of the named entities are indicative of their roles in the texts. In this paper, we propose a novel approach, Deep Belief Nets (DBN), for the Chinese entity mention categorization problem. DBN has very strong representation power and it is able to elaborately self-train for discovering complicated feature combinations. The experiments conducted on the Automatic Context Extraction (ACE) 2004 data set demonstrate the effectiveness of DBN. It outperforms the state-of-the-art learning models such as SVM or BP neural network.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"peng-etal-2021-cross","url":"https:\/\/aclanthology.org\/2021.naacl-main.214","title":"Cross-Lingual Word Embedding Refinement by $\\ell_1$ Norm Optimisation","abstract":"Cross-Lingual Word Embeddings (CLWEs) encode words from two or more languages in a shared high-dimensional space in which vectors representing words with similar meaning (regardless of language) are closely located. Existing methods for building high-quality CLWEs learn mappings that minimise the $\\ell_2$ norm loss function. However, this optimisation objective has been demonstrated to be sensitive to outliers. Based on the more robust Manhattan norm (aka. $\\ell_1$ norm) goodness-of-fit criterion, this paper proposes a simple post-processing step to improve CLWEs. An advantage of this approach is that it is fully agnostic to the training process of the original CLWEs and can therefore be applied widely. Extensive experiments are performed involving ten diverse languages and embeddings trained on different corpora.
Evaluation results based on bilingual lexicon induction and cross-lingual transfer for natural language inference tasks show that the $\\ell_1$ refinement substantially outperforms four state-of-the-art baselines in both supervised and unsupervised settings. It is therefore recommended that this strategy be adopted as a standard for CLWE methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP\/P011829\/1) and Baidu, Inc. We would also like to express our sincerest gratitude to Guanyi Chen, Ruizhe Li, Xiao Li, Shun Wang, and the anonymous reviewers for their insightful and helpful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"shardlow-2013-comparison","url":"https:\/\/aclanthology.org\/P13-3015","title":"A Comparison of Techniques to Automatically Identify Complex Words.","abstract":"Identifying complex words (CWs) is an important, yet often overlooked, task within lexical simplification (the process of automatically replacing CWs with simpler alternatives). If too many words are identified then substitutions may be made erroneously, leading to a loss of meaning. If too few words are identified then those which impede a user's understanding may be missed, resulting in a complex final text. This paper addresses the task of evaluating different methods for CW identification. A corpus of sentences with annotated CWs is mined from Simple Wikipedia edit histories, which is then used as the basis for several experiments. Firstly, the corpus design is explained and the results of the validation experiments using human judges are reported. Experiments are carried out into the CW identification techniques of: simplifying everything, frequency thresholding and training a support vector machine. These are based upon previous approaches to the task and show that thresholding does not perform significantly differently to the more na\u00efve technique of simplifying everything. The support vector machine achieves a slight increase in precision over the other two methods, but at the cost of a dramatic trade-off in recall.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by EPSRC grant EP\/I028099\/1. Thanks go to the anonymous reviewers for their helpful suggestions.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bohus-etal-2007-olympus","url":"https:\/\/aclanthology.org\/W07-0305","title":"Olympus: an open-source framework for conversational spoken language interface research","abstract":"We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus' open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces.
In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank all those who have brought contributions to the components underlying the Olympus dialog system framework. Neither Olympus nor the dialog systems discussed in this paper would have been possible without their help. We particularly wish to thank Alan W Black for his continued support and advice. Work on Olympus components and systems was supported in part by DARPA, under contract NBCH-D-03-0010, Boeing, under contract CMU-BA-GTA-1, and the US National Science Foundation under grant number 0208835. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"beckley-roark-2013-pair","url":"https:\/\/aclanthology.org\/D13-1165","title":"Pair Language Models for Deriving Alternative Pronunciations and Spellings from Pronunciation Dictionaries","abstract":"Pronunciation dictionaries provide a readily available parallel corpus for learning to transduce between character strings and phoneme strings or vice versa. Translation models can be used to derive character-level paraphrases on either side of this transduction, allowing for the automatic derivation of alternative pronunciations or spellings. We examine finite-state and SMT-based methods for these related tasks, and demonstrate that the tasks have different characteristics: finding alternative spellings is harder than alternative pronunciations and benefits from round-trip algorithms when the other does not. We also show that we can increase accuracy by modeling syllable stress.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by NSF grant #BCS-1049308. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-di-eugenio-2010-lucene","url":"https:\/\/aclanthology.org\/W10-3016","title":"A Lucene and Maximum Entropy Model Based Hedge Detection System","abstract":"This paper describes the approach to hedge detection we developed, in order to participate in the shared task at CoNLL-2010. A supervised learning approach is employed in our implementation. Hedge cue annotations in the training data are used as the seed to build a reliable hedge cue set. A Maximum Entropy (MaxEnt) model is used as the learning technique to determine uncertainty. By making use of Apache Lucene, we are able to do fuzzy string matching to extract hedge cues, and to incorporate part-of-speech (POS) tags in hedge cues. Not only can our system determine the certainty of the sentence, but it is also able to find all the contained hedges. Our system was ranked third on the Wikipedia dataset.
In later experiments with different parameters, we further improved our results, with a 0.612 F-score on the Wikipedia dataset, and a 0.802 F-score on the biological dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by award IIS-0905593 from the National Science Foundation.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"rytting-etal-2014-arcade","url":"https:\/\/aclanthology.org\/W14-1813","title":"ArCADE: An Arabic Corpus of Auditory Dictation Errors","abstract":"We present a new corpus of word-level listening errors collected from 62 native English speakers learning Arabic, designed to inform models of spell checking for this learner population. While we use the corpus to assist in automated detection and correction of auditory errors in electronic dictionary lookup, the corpus can also be used as a phonological error layer, to be combined with a composition error layer in a more complex spell-checking system for non-native speakers. The corpus may be useful to instructors of Arabic as a second language, and researchers who study second language phonology and listening perception.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based on work supported, in whole or in part, with funding from the United States Government. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the University of Maryland, College Park and\/or any agency or entity of the United States Government.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2020-didis","url":"https:\/\/aclanthology.org\/2020.wmt-1.7","title":"DiDi's Machine Translation System for WMT2020","abstract":"This paper describes DiDi AI Labs' submission to the WMT2020 news translation shared task. We participate in the translation direction of Chinese\u2192English. In this direction, we use the Transformer as our baseline model, and integrate several techniques for model enhancement, including data filtering, data selection, back-translation, fine-tuning, model ensembling, and re-ranking. As a result, our submission achieves a BLEU score of 36.6 in Chinese\u2192English.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hu-etal-2021-ranknas","url":"https:\/\/aclanthology.org\/2021.emnlp-main.191","title":"RankNAS: Efficient Neural Architecture Search by Pairwise Ranking","abstract":"This paper addresses the efficiency challenge of Neural Architecture Search (NAS) by formulating the task as a ranking problem. Previous methods require numerous training examples to estimate the accurate performance of architectures, although the actual goal is to find the distinction between \"good\" and \"bad\" candidates. Here we do not resort to performance predictors. Instead, we propose a performance ranking method (RankNAS) via pairwise ranking.
It enables efficient architecture search using far fewer training examples. Moreover, we develop an architecture selection method to prune the search space and concentrate on more promising candidates. Extensive experiments on machine translation and language modeling tasks show that RankNAS can design high-performance architectures while being orders of magnitude faster than state-of-the-art NAS systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No.2019QY1801), and the Ministry of Science and Technology of the PRC (Nos. 2019YFF0303002 and 2020AAA0107900). The authors would like to thank the anonymous reviewers for their comments and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"reynaert-2016-ocr","url":"https:\/\/aclanthology.org\/L16-1154","title":"OCR Post-Correction Evaluation of Early Dutch Books Online - Revisited","abstract":"We present further work on evaluation of the fully automatic post-correction of Early Dutch Books Online, a collection of 10,333 18th century books. In prior work we evaluated the new implementation of Text-Induced Corpus Clean-up (TICCL) on the basis of a single book Gold Standard derived from this collection. In the current paper we revisit the same collection on the basis of a sizeable 1020 item random sample of OCR post-corrected strings from the full collection. Both evaluations have their own stories to tell and lessons to teach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kinyon-prolo-2002-classification","url":"https:\/\/aclanthology.org\/W02-1507","title":"A Classification of Grammar Development Strategies","abstract":"In this paper, we propose a classification of grammar development strategies according to two criteria: handwritten versus automatically acquired grammars, and grammars based on a low versus high level of syntactic abstraction. Our classification yields four types of grammars. For each type, we discuss implementation and evaluation issues.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"kim-park-1996-english","url":"https:\/\/aclanthology.org\/Y96-1004","title":"English Free Relative Clause Constructions : From a Constraint-Based Perspective","abstract":"Like other Indo-European languages, English also employs a particular type of relative clause construction, the so-called free-relative construction, exemplified by the phrase like what Kim ate. This paper provides a constraint-based approach to these constructions. The paper begins by surveying the properties of the construction. We will discuss two types of free relatives, their lexical restrictions, nominal properties, and their behavior with respect to extraposition, pied piping, and finiteness.
Following this, we sketch the basic theory of the constraint-based grammar Head-driven Phrase Structure Grammar (HPSG), which is of relevance in this paper. As the main part of this paper, we then present our constraint-based analysis couched in this framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"hwang-etal-2014-criteria","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/624_Paper.pdf","title":"Criteria for Identifying and Annotating Caused Motion Constructions in Corpus Data","abstract":"While natural language processing performance has been improved through the recognition that there is a relationship between the semantics of the verb and the syntactic context in which the verb is realized, sentences where the verb does not conform to the expected syntax-semantic patterning behavior remain problematic. For example, in the sentence \"The crowd laughed the clown off the stage\", a verb of non-verbal communication, laugh, is used in a caused motion construction and gains a motion entailment that is atypical given its inherent lexical semantics. This paper focuses on our efforts at defining the semantic types and varieties of caused motion constructions (CMCs) through an iterative annotation process and establishing annotation guidelines based on these criteria to aid in the production of a consistent and reliable annotation. The annotation will serve as training and test data for classifiers for CMCs, and the CMC definitions developed throughout this study will be used in extending VerbNet to handle representations of sentences in which a verb is used in a syntactic context that is atypical for its lexical semantics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge the support of the National Science Foundation Grant NSF-IIS-1116782, A Bayesian Approach to Dynamic Lexical Resources for Flexible Language Processing and DARPA FA8750-09-C-0179 (via BBN) Machine Reading: Ontology Induction. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or DARPA.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"candito-etal-2014-deep","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/494_Paper.pdf","title":"Deep Syntax Annotation of the Sequoia French Treebank","abstract":"We define a deep syntactic representation scheme for French, which abstracts away from surface syntactic variation and diathesis alternations, and describe the annotation of deep syntactic representations on top of the surface dependency trees of the Sequoia corpus. The resulting deep-annotated corpus, named DEEP-SEQUOIA, is freely available, and hopefully useful for corpus linguistics studies and for training deep analyzers to prepare semantic analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by the French Investissements d'Avenir -Labex EFL program (ANR-10-LABX-0083).
We are grateful to Sylvain Kahane for useful discussions and to Alexandra Kinyon for proofreading this paper. All remaining errors are ours.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"chen-etal-2016-learning","url":"https:\/\/aclanthology.org\/C16-1035","title":"Learning to Distill: The Essence Vector Modeling Framework","abstract":"In the context of natural language processing, representation learning has emerged as a newly active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable\/reasonable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is relatively little work focusing on the development of unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently may mislead the embedding learning process to produce a misty paragraph representation. Motivated by these observations, our major contributions in this paper are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative low-dimensional vector representation for the paragraph. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition. The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the practical merits of the proposed embedding method in relation to several well-practiced and state-of-the-art summarization methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"van-zaanen-2000-abl","url":"https:\/\/aclanthology.org\/C00-2139","title":"ABL: Alignment-Based Learning","abstract":"This paper introduces a new type of grammar learning algorithm, inspired by string edit distance (Wagner and Fischer, 1974). The algorithm takes a corpus of flat sentences as input and returns a corpus of labelled, bracketed sentences. The method works on pairs of unstructured sentences that have one or more words in common. When two sentences are divided into parts that are the same in both sentences and parts that are different, this information is used to find parts that are interchangeable. These parts are taken as possible constituents of the same type.
After this alignment learning step, the selection learning step selects the most probable constituents from all possible constituents. This method was used to bootstrap structure on the ATIS corpus (Marcus et al., 1993) and on the OVIS corpus (Bonnema et al., 1997). While the results are encouraging (we obtained up to 89.25% non-crossing brackets precision), this paper will point out some of the shortcomings of our approach and will suggest possible solutions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wang-htun-2020-gokus","url":"https:\/\/aclanthology.org\/2020.wat-1.16","title":"Goku's Participation in WAT 2020","abstract":"This paper introduces our neural machine translation systems' participation in the WAT 2020 (team ID: goku20). We participated in the (i) Patent, (ii) Business Scene Dialogue (BSD) document-level translation, and (iii) Mixed-domain tasks. Despite their simplicity, standard Transformer models have been proven to be very effective in many machine translation systems. Recently, some advanced pretraining generative models have been proposed on the basis of the encoder-decoder framework. The main focus of this work is to explore how robustly Transformer models perform in translation from sentence-level to document-level, and from resource-rich to low-resource languages. Additionally, we also investigated the improvement that fine-tuning on top of pre-trained transformer-based models can achieve on various tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"bosco-etal-2012-parallel","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/209_Paper.pdf","title":"The Parallel-TUT: a multilingual and multiformat treebank","abstract":"The paper introduces an ongoing project for the development of a parallel treebank for Italian, English and French, i.e. Parallel-TUT, or simply ParTUT. For the development of this resource, both the dependency and constituency-based formats of the Italian Turin University Treebank (TUT) have been applied to a preliminary dataset, which includes the whole text of the Universal Declaration of Human Rights, sentences from the JRC-Acquis Multilingual Parallel Corpus and the Creative Commons licence. The focus of the project is mainly on the quality of the annotation and the investigation of some issues related to the alignment of data that can be allowed by the TUT formats, also taking into account the availability of conversion tools for displaying data in standard ways, such as Tiger-XML and CoNLL formats.
It is, in fact, our belief that increasing the portability of our treebank could give us the opportunity to access resources and tools provided by other research groups, especially at this stage of the project, where no particular tool, compatible with the TUT format, is available to tackle the alignment problems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"soldini-2018-thinking","url":"https:\/\/aclanthology.org\/W18-1909","title":"Thinking of Going Neural? Factors Honda R\\&D Americas is Considering before Making the Switch","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"maxwell-david-2008-joint","url":"https:\/\/aclanthology.org\/I08-3007","title":"Joint Grammar Development by Linguists and Computer Scientists","abstract":"For languages with inflectional morphology, development of a morphological parser can be a bottleneck to further development. We focus on two difficulties: first, finding people with expertise in both computer programming and the linguistics of a particular language, and second, the short lifetime of software such as parsers. We describe a methodology to split parser building into two tasks: descriptive grammar development, and formal grammar development. The two grammars are combined into a single document using Literate Programming. The formal grammar is designed to be independent of a particular parsing engine's programming language, so that it can be readily ported to a new parsing engine, thus helping solve the software lifetime problem.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wibberley-etal-2014-method51","url":"https:\/\/aclanthology.org\/C14-2025","title":"Method51 for Mining Insight from Social Media Datasets","abstract":"We present Method51, a social media analysis software platform with a set of accompanying methodologies. We discuss a series of case studies illustrating the platform's application, and motivating our methodological proposals.","label_nlp4sg":1,"task":["Mining Insight from Social Media Datasets"],"method":["software platform"],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the ESRC National Centre for Research Methods grant DU\/512589110. We are grateful to our collaborators at the Centre for the Analysis of social media, Jamie Bartlett and Carl Miller for valuable contributions to this work. We thank the anonymous reviewers for their helpful comments.
This work was partially supported by the Open Society Foundation.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} {"ID":"agic-schluter-2018-baselines","url":"https:\/\/aclanthology.org\/L18-1614","title":"Baselines and Test Data for Cross-Lingual Inference","abstract":"The recent years have seen a revival of interest in textual entailment, sparked by i) the emergence of powerful deep neural network learners for natural language processing and ii) the timely development of large-scale evaluation datasets such as SNLI. Recast as natural language inference, the problem now amounts to detecting the relation between pairs of statements: they either contradict or entail one another, or they are mutually neutral. Current research in natural language inference is effectively exclusive to English. In this paper, we propose to advance the research in SNLI-style natural language inference toward multilingual evaluation. To that end, we provide test data for four major languages: Arabic, French, Spanish, and Russian. We experiment with a set of baselines. Our systems are based on cross-lingual word embeddings and machine translation. While our best system scores an average accuracy of just over 75%, we focus largely on enabling further research in multilingual inference.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"wu-1993-closing","url":"https:\/\/aclanthology.org\/W93-0240","title":"Closing the Gap Between Discourse Structure and Communicative Intention","abstract":"In the past, quite a few phenomena related to discourse structures have been studied, such as lexical cohesion (Halliday and Hasan, 1976), coherence relations (Hobbs, 1985; Wu and Lytinen, 1989), and rhetorical relations (Mann and Thompson, 1988). On the other hand, there also exists abundant research on communicative intention, such as speech act theory (Searle, 1968) and the relevance maxim (Wilson and Sperber, 1986). One logical question to ask is then:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} {"ID":"krey-novak-1990-textplanning","url":"https:\/\/aclanthology.org\/C90-3095","title":"The Textplanning Component PIT of the LILOG System","abstract":"In this paper we describe the construction and implementation of PIT (Presenting Information by Textplanning), a subsystem of the LILOG text-understanding system. PIT is used for planning answers of paragraph length to questions of the kind \"What do you know about X?\". We concentrated on a simple, easy-to-implement mechanism that can be further extended. Experiences with this planning component, especially concerning the integration of new plans and further extensions, are discussed