ID | url | title | abstract | label_nlp4sg | task | method | goal1 | goal2 | goal3 | acknowledgments | year | sdg1 | sdg2 | sdg3 | sdg4 | sdg5 | sdg6 | sdg7 | sdg8 | sdg9 | sdg10 | sdg11 | sdg12 | sdg13 | sdg14 | sdg15 | sdg16 | sdg17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
li-etal-2020-shallow | https://aclanthology.org/2020.emnlp-main.72 | Shallow-to-Deep Training for Neural Machine Translation | Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time-consuming. Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT'16 English-German and WMT'14 English-French translation tasks show that it is 1.4× faster than training from scratch, and achieves BLEU scores of 30.33 and 43.29 on the two tasks. The code is publicly available at https://github.com/libeineu/SDT-Training. | false | [] | [] | null | null | null | This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005) and the National Key R&D Program of China (No. 2019QY1801). The authors would like to thank the anonymous reviewers for their valuable comments, and Qiang Wang for helpful advice on improving the paper. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
arthur-etal-2021-multilingual | https://aclanthology.org/2021.findings-acl.420 | Multilingual Simultaneous Neural Machine Translation | Simultaneous machine translation (SIMT) involves translating source utterances to the target language in real-time before the speaker utterance completes. This paper proposes the multilingual approach to SIMT, where a single model simultaneously translates between multiple language-pairs. This not only results in more efficiency in terms of the number of models and parameters (hence simpler deployment), but may also lead to higher performing models by capturing commonalities among the languages. We further explore simple and effective multilingual architectures based on two strong recently proposed SIMT models. Our results on translating from two Germanic languages (German, Dutch) and three Romance languages (French, Italian, Romanian) into English show (i) the single multilingual model is on-par or better than individual models, and (ii) multilingual SIMT models trained based on language families are on-par or better than the universal model trained for all languages. 1 | false | [] | [] | null | null | null | null | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gundapu-mamidi-2020-gundapusunil-semeval | https://aclanthology.org/2020.semeval-1.166 | Gundapusunil at SemEval-2020 Task 9: Syntactic Semantic LSTM Architecture for SENTIment Analysis of Code-MIXed Data | The phenomenon of mixing the vocabulary and syntax of multiple languages within the same utterance is called Code-Mixing. This is more evident in multilingual societies. In this paper, we describe a system we developed for SemEval 2020: Task 9 on Sentiment Analysis for Code-Mixed Social Media Text. Our system first generates two types of embeddings for the social media text: character-level embeddings, which encode character-level information and handle out-of-vocabulary entries, and FastText word embeddings, which capture morphology and semantics. These two embeddings were passed to an LSTM network, and the system outperformed the baseline model. | false | [] | [] | null | null | null | null | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
polak-polakova-1982-operation | https://aclanthology.org/C82-2058 | Operation Logic - A Database Management Operation System of Human-Like Information Processing | The paper contains the description of a database management computer operation system called operation logic. This system is a formal logic with well-defined formulas as semantic language clauses and with reasoning by means of modus ponens rules. There are four frames: CLAUSE, QUESTION, | false | [] | [] | null | null | null | null | 1982 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
jin-etal-2021-neural | https://aclanthology.org/2021.emnlp-main.80 | Neural Attention-Aware Hierarchical Topic Model | Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for the training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences and words are not exploited for the training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. Our model also features hierarchical KL divergence to leverage embeddings of each document to regularize those of their sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in 1) lowering the reconstruction errors at both the sentence and document levels, and 2) discovering more coherent topics from real-world datasets. | false | [] | [] | null | null | null | Yuan Jin and Wray Buntine were supported by the Australian Research Council under awards DE170100037. Wray Buntine was also sponsored by DARPA under agreement number FA8750-19-2-0501. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gupta-etal-2021-sumpubmed | https://aclanthology.org/2021.acl-srw.30 | SumPubMed: Summarization Dataset of PubMed Scientific Articles | Most earlier work on text summarization is carried out on news article datasets. The summary in these datasets is naturally located at the beginning of the text. Hence, a model can spuriously utilize this correlation for summary generation instead of truly learning to summarize. To address this issue, we constructed a new dataset, SUMPUBMED, using scientific articles from the PubMed archive. We conducted a human analysis of summary coverage, redundancy, readability, coherence, and informativeness on SUMPUBMED. SUMPUBMED is challenging because (a) the summary is distributed throughout the text (not-localized on top), and (b) it contains rare domain-specific scientific terms. We observe that seq2seq models that adequately summarize news articles struggle to summarize SUMPUBMED. Thus, SUMPUBMED opens new avenues for the future improvement of models as well as the development of new evaluation metrics. | true | [] | [] | Good Health and Well-Being | Industry, Innovation and Infrastructure | null | We would like to thank the ACL SRW anonymous reviewers for their useful feedback, comments, and suggestions. | 2021 | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
schiffman-mckeown-2005-context | https://aclanthology.org/H05-1090 | Context and Learning in Novelty Detection | We demonstrate the value of using context in a new-information detection system that achieved the highest precision scores at the Text Retrieval Conference's Novelty Track in 2004. In order to determine whether information within a sentence has been seen in material read previously, our system integrates information about the context of the sentence with novel words and named entities within the sentence, and uses a specialized learning algorithm to tune the system parameters. | true | [] | [] | Industry, Innovation and Infrastructure | null | null | null | 2005 | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
pinnis-etal-2014-real | https://aclanthology.org/2014.amta-users.7 | Real-world challenges in application of MT for localization: the Baltic case | null | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chung-etal-2021-splat | https://aclanthology.org/2021.naacl-main.152 | SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding | Spoken language understanding (SLU) requires a model to analyze input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%. * Equal contribution. The work was done when Yu-An Chung was interning at Microsoft. | false | [] | [] | null | null | null | null | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ruseti-etal-2016-using | https://aclanthology.org/W16-1623 | Using Embedding Masks for Word Categorization | Word embeddings are widely used nowadays for many NLP tasks. They reduce the dimensionality of the vocabulary space, but most importantly they should capture (part of) the meaning of words. The new vector space used by the embeddings allows computation of semantic distances between words, while some word embeddings also permit simple vector operations (e.g. summation, difference) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed at capturing categorical information by first learning and then applying an embedding mask for each analyzed category. Thus, we conducted a series of experiments related to categorization of words based on their embeddings. Several classical approaches were compared together with the one introduced in the paper, which uses different embedding masks learnt for each category. | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ma-etal-2010-multimodal | https://aclanthology.org/W10-1308 | A Multimodal Vocabulary for Augmentative and Alternative Communication from Sound/Image Label Datasets | Existing Augmentative and Alternative Communication vocabularies assign multimodal stimuli to words with multiple meanings. The ambiguity hampers the vocabulary effectiveness when used by people with language disabilities. For example, the noun "a missing letter" may refer to a character or a written message, and each corresponds to a different picture. A vocabulary with images and sounds unambiguously linked to words can better eliminate misunderstanding and assist communication for people with language disorders. We explore a new approach of creating such a vocabulary via automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm, which combines different semantic relatedness measures. Our voting algorithm achieved over 80% accuracy with a sound label dataset, which significantly outperforms WSD with individual measures. We also explore the use of human judgments of evocation between members of concept pairs, in the label disambiguation task. Results show that evocation achieves similar performance to most of the existing relatedness measures. | false | [] | [] | null | null | null | Figure 6. Percentage of WSD results overlap between evocation and various relatedness measures. We thank the Kimberley and Frank H. Moss '71 Princeton SEAS Research Fund for supporting our project. | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
selfridge-etal-2012-integrating | https://aclanthology.org/W12-1638 | Integrating Incremental Speech Recognition and POMDP-Based Dialogue Systems | The goal of this paper is to present a first step toward integrating Incremental Speech Recognition (ISR) and Partially-Observable Markov Decision Process (POMDP) based dialogue systems. The former provides support for advanced turn-taking behavior while the other increases the semantic accuracy of speech recognition results. We present an Incremental Interaction Manager that supports the use of ISR with strictly turn-based dialogue managers. We then show that using a POMDP-based dialogue manager with ISR substantially improves the semantic accuracy of the incremental results. | false | [] | [] | null | null | null | Thanks to Vincent Goffin for help with this work, and to the anonymous reviewers for their comments and critique. We acknowledge funding from the NSF under grant IIS-0713698. | 2012 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ravenscroft-etal-2018-harrigt | https://aclanthology.org/P18-4004 | HarriGT: A Tool for Linking News to Science | Being able to reliably link scientific works to the newspaper articles that discuss them could provide a breakthrough in the way we rationalise and measure the impact of science on our society. Linking these articles is challenging because the language used in the two domains is very different, and the gathering of online resources to align the two is a substantial information retrieval endeavour. We present HarriGT, a semi-automated tool for building corpora of news articles linked to the scientific papers that they discuss. Our aim is to facilitate future development of information-retrieval tools for newspaper/scientific work citation linking. HarriGT retrieves newspaper articles from an archive containing 17 years of UK web content. It also integrates with 3 large external citation networks, leveraging named entity extraction, and document classification to surface relevant examples of scientific literature to the user. We also provide a tuned candidate ranking algorithm to highlight potential links between scientific papers and newspaper articles to the user, in order of likelihood. HarriGT is provided as an open source tool (http://harrigt.xyz). | true | [] | [] | Industry, Innovation and Infrastructure | Peace, Justice and Strong Institutions | null | We thank the EPSRC (grant EP/L016400/1) for funding us through the University of Warwick's CDT in Urban Science, the Alan Turing Institute and British Library for providing resources. | 2018 | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false |
nayek-etal-2015-catalog | https://aclanthology.org/W15-5206 | CATaLog: New Approaches to TM and Post Editing Interfaces | This paper explores a new TM-based CAT tool entitled CATaLog. New features have been integrated into the tool which aim to improve post-editing both in terms of performance and productivity. One of the new features of CATaLog is a color coding scheme that is based on the similarity between a particular input sentence and the segments retrieved from the TM. This color coding scheme will help translators to identify which part of the sentence is most likely to require post-editing thus demanding minimal effort and increasing productivity. We demonstrate the tool's functionalities using an English-Bengali dataset. | false | [] | [] | null | null | null | We would like to thank the anonymous NLP4TM reviewers who provided us valuable feedback to improve this paper as well as new ideas for future work.English to Indian language Machine Translation (EILMT) is a project funded by the Department of Information and Technology (DIT), Government of India.Santanu Pal is supported by the People Programme (Marie Curie Actions) of the European Union's Framework Programme (FP7/2007-2013) under REA grant agreement no 317471. | 2015 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
arnold-etal-2016-tasty | https://aclanthology.org/C16-2024 | TASTY: Interactive Entity Linking As-You-Type | We introduce TASTY (Tag-as-you-type), a novel text editor for interactive entity linking as part of the writing process. Tasty supports the author of a text with complementary information about the mentioned entities shown in a 'live' exploration view. The system is automatically triggered by keystrokes, recognizes mention boundaries and disambiguates the mentioned entities to Wikipedia articles. The author can use seven operators to interact with the editor and refine the results according to his specific intention while writing. Our implementation captures syntactic and semantic context using a robust end-to-end LSTM sequence learner and word embeddings. We demonstrate the applicability of our system in English and German language for encyclopedic or medical text. Tasty is currently being tested in interactive applications for text production, such as scientific research, news editorial, medical anamnesis, help desks and product reviews. | false | [] | [] | null | null | null | Our work is funded by the Federal Ministry of Economic Affairs and Energy (BMWi) under grant agreement 01MD15010B (Project: Smart Data Web). | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kim-sohn-2020-positive | https://aclanthology.org/2020.coling-main.191 | How Positive Are You: Text Style Transfer using Adaptive Style Embedding | The prevalent approach for unsupervised text style transfer is disentanglement between content and style. However, it is difficult to completely separate style information from the content. Other approaches allow the latent text representation to contain style and the target style to affect the generated output more than the latent representation does. In both approaches, however, it is impossible to adjust the strength of the style in the generated output. Moreover, those previous approaches typically perform both the sentence reconstruction and style control tasks in a single model, which complicates the overall architecture. In this paper, we address these issues by separating the model into a sentence reconstruction module and a style module. We use the Transformer-based autoencoder model for sentence reconstruction and the adaptive style embedding is learned directly in the style module. Because of this separation, each module can better focus on its own task. Moreover, we can vary the style strength of the generated sentence by changing the style of the embedding expression. Therefore, our approach not only controls the strength of the style, but also simplifies the model architecture. Experimental results show that our approach achieves better style transfer performance and content preservation than previous approaches. 1 | false | [] | [] | null | null | null | This research was supported by the National Research Foundation of Korea grant funded by the Korea government (MSIT) (No. NRF-2019R1A2C1006608). | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
dagan-etal-2021-co | https://aclanthology.org/2021.eacl-main.260 | Co-evolution of language and agents in referential games | Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate. However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners. Cogswell et al. (2019) introduced cultural transmission within referential games through a changing population of agents to constrain the emerging language to be learnable. However, the resulting languages remain inherently biased by the agents' underlying capabilities. In this work, we introduce Language Transmission Simulator to model both cultural and architectural evolution in a population of agents. As our core contribution, we empirically show that the optimal situation is to take into account also the learning biases of the language learners and thus let language and agents coevolve. When we allow the agent population to evolve through architectural evolution, we achieve across the board improvements on all considered metrics and surpass the gains made with cultural transmission. These results stress the importance of studying the underlying agent architecture and pave the way to investigate the co-evolution of language and agent in language emergence studies. | false | [] | [] | null | null | null | We would like to thank Angeliki Lazaridou for her helpful discussions and feedback on previous iterations of this work. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schneider-1987-metal | https://aclanthology.org/1987.mtsummit-1.7 | The METAL System. Status 1987 | 1. History 2. Hardware 3. Grammar 4. Lexicon 5. Development Tools 6. Current Applications and Quality 7. Research, Future Applications. In the late seventies, when there was a noticeable shortage of qualified technical translators versus the volume of required in-house translations, Siemens began to look for an operative machine translation system. It was intended to increase the productivity of the translators available, and to reduce the time required for the translation process. This is extremely critical if voluminous product documentation needs to be delivered on time. | false | [] | [] | null | null | null | null | 1987 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kuzman-etal-2019-neural | https://aclanthology.org/W19-7301 | Neural Machine Translation of Literary Texts from English to Slovene | Neural Machine Translation has shown promising performance in literary texts. Since literary machine translation has not yet been researched for the English-to-Slovene translation direction, this paper aims to fulfill this gap by presenting a comparison among bespoke NMT models, tailored to novels, and Google Neural Machine Translation. The translation models were evaluated by the BLEU and METEOR metrics, assessment of fluency and adequacy, and measurement of the post-editing effort. The findings show that all evaluated approaches resulted in an increase in translation productivity. The translation model tailored to a specific author outperformed the model trained on a more diverse literary corpus, based on all metrics except the scores for fluency. However, the translation model by Google still outperforms all bespoke models. The evaluation reveals a very low inter-rater agreement on fluency and adequacy, based on the kappa coefficient values, and significant discrepancies between post-editors. This suggests that these methods might not be reliable, which should be addressed in future studies. | false | [] | [] | null | null | null | This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (Insight), co-funded by the European Regional Development Fund. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vasconcellos-1989-place | https://aclanthology.org/1989.mtsummit-1.9 | The place of MT in an in-house translation service | At the Pan American Health Organization (PAHO), MT service is approaching its tenth anniversary. A special combination of characteristics has placed this operation in a class by itself. One of these characteristics is that the MT software (SPANAM and ENGSPAN and supporting programs) has been developed in-house by an international organization. PAHO was motivated by the dual need to: (1) meet the translation needs of its secretariat, and (2) disseminate information in its member countries. Thus MT at PAHO was conceived from the start as a public service. | false | [] | [] | null | null | null | null | 1989 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vanzo-etal-2014-context | https://aclanthology.org/C14-1221 | A context-based model for Sentiment Analysis in Twitter | Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM hmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources. | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gupta-lehal-2011-punjabi | https://aclanthology.org/W11-3006 | Punjabi Language Stemmer for nouns and proper names | This paper concentrates on Punjabi language noun and proper name stemming. The purpose of stemming is to obtain the stem or radix of those words which are not found in dictionary. If stemmed word is present in dictionary, then that is a genuine word, otherwise it may be proper name or some invalid word. In Punjabi language stemming for nouns and proper names, an attempt is made to obtain stem or radix of a Punjabi word and then stem or radix is checked against Punjabi noun and proper name dictionary. An in depth analysis of Punjabi news corpus was made and various possible noun suffixes were identified like ੀ ਆਂ īāṃ, ਿੀਆਂ iāṃ, ੀ ਆਂ ūāṃ, ੀ ੀਂ āṃ, ੀ ਏ īē etc. and the various rules for noun and proper name stemming have been generated. Punjabi language stemmer for nouns and proper names is applied for Punjabi Text Summarization. The efficiency of Punjabi language noun and Proper name stemmer is 87.37%. | false | [] | [] | null | null | null | null | 2011 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhao-kawahara-2018-unified | https://aclanthology.org/W18-5021 | A Unified Neural Architecture for Joint Dialog Act Segmentation and Recognition in Spoken Dialog System | In spoken dialog systems (SDSs), dialog act (DA) segmentation and recognition provide essential information for response generation. A majority of previous works assumed ground-truth segmentation of DA units, which is not available from automatic speech recognition (ASR) in SDS. We propose a unified architecture based on neural networks, which consists of a sequence tagger for segmentation and a classifier for recognition. The DA recognition model is based on hierarchical neural networks to incorporate the context of preceding sentences. We investigate sharing some layers of the two components so that they can be trained jointly and learn generalized features from both tasks. An evaluation on the Switchboard Dialog Act (SwDA) corpus shows that the jointly-trained models outperform independently-trained models, single-step models, and other reported results in DA segmentation, recognition, and joint tasks. | false | [] | [] | null | null | null | This work was supported by JST ERATO Ishiguro Symbiotic Human-Robot Interaction program (Grant Number JPMJER1401), Japan. | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
crego-etal-2005-ngram | https://aclanthology.org/2005.iwslt-1.23 | Ngram-based versus Phrase-based Statistical Machine Translation | This work summarizes a comparison between two approaches to Statistical Machine Translation (SMT), namely Ngram-based and Phrase-based SMT. In both approaches, the translation process is based on bilingual units related by word-to-word alignments (pairs of source and target words), while the main differences are based on the extraction process of these units and the statistical modeling of the translation context. The study has been carried out on two different translation tasks (in terms of translation difficulty and amount of available training data), and allowing for distortion (reordering) in the decoding process. Thus it extends a previous work where both approaches were compared under monotone conditions. We finally report comparative results in terms of translation accuracy, computation time and memory size. Results show how the ngram-based approach outperforms the phrase-based approach by achieving similar accuracy scores in less computational time and with less memory needs. | false | [] | [] | null | null | null | null | 2005 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
luo-etal-2018-auto | https://aclanthology.org/D18-1075 | An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation | Generating semantically coherent responses is still a major challenge in dialogue generation. Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models. 1 | false | [] | [] | null | null | null | This work was supported in part by National Natural Science Foundation of China (No. 61673028). We thank all reviewers for providing the construc-tive suggestions. Xu Sun is the corresponding author of this paper. | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
okumura-etal-2003-text | https://aclanthology.org/W03-0507 | Text Summarization Challenge 2 - Text summarization evaluation at NTCIR Workshop 3 | We describe the outline of Text Summarization Challenge 2 (TSC2 hereafter), a sequel text summarization evaluation conducted as one of the tasks at the NTCIR Workshop 3. First, we describe briefly the previous evaluation, Text Summarization Challenge (TSC1) as introduction to TSC2. Then we explain TSC2 including the participants, the two tasks in TSC2, data used, evaluation methods for each task, and brief report on the results. | false | [] | [] | null | null | null | null | 2003 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
soboroff-harman-2005-novelty | https://aclanthology.org/H05-1014 | Novelty Detection: The TREC Experience | A challenge for search systems is to detect not only when an item is relevant to the user's information need, but also when it contains something new which the user has not seen before. In the TREC novelty track, the task was to highlight sentences containing relevant and new information in a short, topical document stream. This is analogous to highlighting key parts of a document for another person to read, and this kind of output can be useful as input to a summarization system. Search topics involved both news events and reported opinions on hot-button subjects. When people performed this task, they tended to select small blocks of consecutive sentences, whereas current systems identified many relevant and novel passages. We also found that opinions are much harder to track than events. | true | [] | [] | Industry, Innovation and Infrastructure | null | null | null | 2005 | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
eshghi-etal-2013-probabilistic | https://aclanthology.org/W13-0110 | Probabilistic induction for an incremental semantic grammar | We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion. | false | [] | [] | null | null | null | null | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2021-universal | https://aclanthology.org/2021.cl-2.15 | Universal Discourse Representation Structure Parsing | We consider the task of crosslingual semantic parsing in the style of Discourse Representation Theory (DRT) where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. We introduce Universal Discourse Representation Theory (UDRT), a variant of DRT that explicitly anchors semantic representations to tokens in the linguistic input. We develop a semantic parsing framework based on the Transformer architecture and utilize it to obtain semantic resources in multiple languages following two learning schemes. The many-to-one approach translates non-English text to English, and then runs a relatively accurate English parser on the translated text, while the one-to-many approach translates gold standard English to non-English text and trains multiple parsers (one per language) on the translations. Experimental results on the Parallel Meaning Bank show that our proposal outperforms strong baselines by a wide margin and can be used to construct (silver-standard) meaning banks for 99 languages. | false | [] | [] | null | null | null | We thank the anonymous reviewers for their feedback. We thank Alex Lascarides for her comments. We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu). This work was partly funded by the NWO-VICI grant "Lost in Translation -Found in Meaning" (288-89-003). | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ahlberg-etal-2015-paradigm | https://aclanthology.org/N15-1107 | Paradigm classification in supervised learning of morphology | Supervised morphological paradigm learning by identifying and aligning the longest common subsequence found in inflection tables has recently been proposed as a simple yet competitive way to induce morphological patterns. We combine this non-probabilistic strategy of inflection table generalization with a discriminative classifier to permit the reconstruction of complete inflection tables of unseen words. Our system learns morphological paradigms from labeled examples of inflection patterns (inflection tables) and then produces inflection tables from unseen lemmas or base forms. We evaluate the approach on datasets covering 11 different languages and show that this approach results in consistently higher accuracies vis-à-vis other methods on the same task, thus indicating that the general method is a viable approach to quickly creating highaccuracy morphological resources. | false | [] | [] | null | null | null | null | 2015 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wu-etal-2020-improving-knowledge | https://aclanthology.org/2020.findings-emnlp.126 | Improving Knowledge-Aware Dialogue Response Generation by Using Human-Written Prototype Dialogues | Incorporating commonsense knowledge can alleviate the issue of generating generic responses in open-domain generative dialogue systems. However, selecting knowledge facts for the dialogue context is still a challenge. The widely used approach Entity Name Matching always retrieves irrelevant facts from the view of local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues that are relevant to the query. We find knowledge facts used in prototype dialogues usually are highly relevant to the current query; thus, Prototype-KR ranks such knowledge facts based on the semantic similarity and then selects the most appropriate facts. Subsequently, Prototype-KRG can generate an informative response using the selected knowledge facts. Experiments demonstrate that our approach has achieved notable improvements on the most metrics, compared to generative baselines. Meanwhile, compared to IR(Retrieval)-based baselines, responses generated by our approach are more relevant to the context and have comparable informativeness. | false | [] | [] | null | null | null | This work is supported by the National Key R&D Program of China (Grant No. 2017YFB1002000). | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
feng-etal-2013-connotation | https://aclanthology.org/P13-1174 | Connotation Lexicon: A Dash of Sentiment Beneath the Surface Meaning | Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text, as seemingly objective statements often allude nuanced sentiment of the writer, and even purposefully conjure emotion from the readers' minds. The focus of this paper is drawing nuanced, connotative sentiments from even those words that are objective on the surface, such as "intelligence", "human", and "cheesecake". We propose induction algorithms encoding a diverse set of linguistic insights (semantic prosody, distributional similarity, semantic parallelism of coordination) and prior knowledge drawn from lexical resources, resulting in the first broad-coverage connotation lexicon. | false | [] | [] | null | null | null | This research was supported in part by the Stony Brook University Office of the Vice President for Research. We thank reviewers for many insightful comments and suggestions, and for providing us with several very inspiring examples to work with. | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
cooper-stickland-etal-2021-recipes | https://aclanthology.org/2021.eacl-main.301 | Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation | There has been recent success in pre-training on monolingual data and fine-tuning on Machine Translation (MT), but it remains unclear how to best leverage a pre-trained model for a given MT task. This paper investigates the benefits and drawbacks of freezing parameters, and adding new ones, when fine-tuning a pre-trained model on MT. We focus on 1) Fine-tuning a model trained only on English monolingual data, BART. 2) Fine-tuning a model trained on monolingual data from 25 languages, mBART. For BART we get the best performance by freezing most of the model parameters, and adding extra positional embeddings. For mBART we match or outperform the performance of naive fine-tuning for most language pairs with the encoder, and most of the decoder, frozen. The encoder-decoder attention parameters are most important to finetune. When constraining ourselves to an outof-domain training set for Vietnamese to English we see the largest improvements over the fine-tuning baseline. | false | [] | [] | null | null | null | We'd like to thank James Cross, Mike Lewis, Naman Goyal, Jiatao Gu, Iain Murray, Yuqing Tang and Luke Zettlemoyer for useful discussion. We also thank our colleagues at FAIR and FAIAR for valuable feedback. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hasler-2004-ignore | http://www.lrec-conf.org/proceedings/lrec2004/pdf/338.pdf | ``Why do you Ignore me?'' - Proof that not all Direct Speech is Bad | In the automatic summarisation of written texts, direct speech is usually deemed unsuitable for inclusion in important sentences. This is due to the fact that humans do not usually include such quotations when they create summaries. In this paper, we argue that despite generally negative attitudes, direct speech can be useful for summarisation and ignoring it can result in the omission of important and relevant information. We present an analysis of a corpus of annotated newswire texts in which a substantial amount of speech is marked by different annotators, and describe when and why direct speech can be included in summaries. In an attempt to make direct speech more appropriate for summaries, we also describe rules currently being developed to transform it into a more summary-acceptable format. | false | [] | [] | null | null | null | null | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
patwardhan-riloff-2009-unified | https://aclanthology.org/D09-1016 | A Unified Model of Phrasal and Sentential Evidence for Information Extraction | Information Extraction (IE) systems that extract role fillers for events typically look at the local context surrounding a phrase when deciding whether to extract it. Often, however, role fillers occur in clauses that are not directly linked to an event word. We present a new model for event extraction that jointly considers both the local context around a phrase along with the wider sentential context in a probabilistic framework. Our approach uses a sentential event recognizer and a plausible role-filler recognizer that is conditioned on event sentences. We evaluate our system on two IE data sets and show that our model performs well in comparison to existing IE systems that rely on local phrasal context. | false | [] | [] | null | null | null | This work has been supported in part by the Department of Homeland Security Grant N0014-07-1-0152. We are grateful to Nathan Gilbert and Adam Teichert for their help with the annotation of event sentences. | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mcconachy-etal-1998-bayesian | https://aclanthology.org/W98-1212 | A Bayesian Approach to Automating Argumentation | Our argumentation system NAG uses Bayesian networks in a user model and in a normative model to assemble and assess nice arguments, that is arguments which balance persuasiveness with normative correctness. Attentional focus is simulated in both models to select relevant subnetworks for Bayesian propagation. Bayesian propagation in the user model is modified to represent some human cognitive weaknesses. The subnetworks are expanded in an iterative abductive process until argumentative goals are achieved in both models, when the argument is presented to the user. | false | [] | [] | null | null | null | This work was supported in part by Australian Research Council grant A49531227. | 1998 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rommel-1984-language | https://aclanthology.org/1984.bcs-1.36 | Language or information: a new role for the translator | After three days of hearing about machine translation, machine-aided translation, terminology, lexicography, in fact about the methodology and techniques that are making it possible to escalate the information flow to unprecedented proportions, I am astounded at my temerity in agreeing to speak about so pedestrian a subject as the role of the human translator. In vindication, may I say that I have spent a considerable number of years in training future generations of translators, so that this is perhaps an act of self-justification. I might add that what I have heard this week has not led me to believe that the translator's skills, unlike the compositor's, have become obsolete. I am convinced, however, that these skills must be adapted and expanded so that the translator can continue to play his vital role in the dissemination of information and as a "keeper of the language". | false | [] | [] | null | null | null | null | 1984 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
min-etal-2000-typographical | http://www.lrec-conf.org/proceedings/lrec2000/pdf/221.pdf | Typographical and Orthographical Spelling Error Correction | This paper focuses on selection techniques for best correction of misspelt words at the lexical level. Spelling errors are introduced by either cognitive or typographical mistakes. A robust spelling correction algorithm is needed to cover both cognitive and typographical errors. For the most effective spelling correction system, various strategies are considered in this paper: ranking heuristics, correction algorithms, and correction priority strategies for the best selection. The strategies also take account of error types, syntactic information, word frequency statistics, and character distance. The findings show that it is very hard to generalise the spelling correction strategy for various types of data sets such as typographical, orthographical, and scanning errors. | false | [] | [] | null | null | null | null | 2000 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schluter-2018-word | https://aclanthology.org/N18-2039 | The Word Analogy Testing Caveat | There are some important problems in the evaluation of word embeddings using standard word analogy tests. In particular, in virtue of the assumptions made by systems generating the embeddings, these remain tests over randomness. We show that even supposing there were such word analogy regularities that should be detected in the word embeddings obtained via unsupervised means, standard word analogy test implementation practices provide distorted or contrived results. We raise concerns regarding the use of Principal Component Analysis to 2 or 3 dimensions as a provision of visual evidence for the existence of word analogy relations in embeddings. Finally, we propose some solutions to these problems. | false | [] | [] | null | null | null | null | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
keller-1995-towards | https://aclanthology.org/E95-1045 | Towards an Account of Extraposition in HPSG | This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using a nonlocal dependency and lexical rules. The condition for binding the dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our account allows to explains the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition. | false | [] | [] | null | null | null | null | 1995 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kiefer-etal-2002-novel | https://aclanthology.org/C02-1075 | A Novel Disambiguation Method for Unification-Based Grammars Using Probabilistic Context-Free Approximations | We present a novel disambiguation method for unification-based grammars (UBGs). In contrast to other methods, our approach obviates the need for probability models on the UBG side in that it shifts the responsibility to simpler context-free models, indirectly obtained from the UBG. Our approach has three advantages: (i) training can be effectively done in practice, (ii) parsing and disambiguation of context-free readings requires only cubic time, and (iii) involved probability distributions are mathematically clean. In an experiment for a mid-size UBG, we show that our novel approach is feasible. Using unsupervised training, we achieve 88% accuracy on an exact-match task. | false | [] | [] | null | null | null | This research was supported by the German Federal Ministry for Education, Science, Research, and Technology under grant no. 01 IW 002 and EU grant no. IST-1999-11438. | 2002 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
porzel-baudis-2004-tao | https://aclanthology.org/N04-1027 | The Tao of CHI: Towards Effective Human-Computer Interaction | End-to-end evaluations of conversational dialogue systems with naive users are currently uncovering severe usability problems that result in low task completion rates. Preliminary analyses suggest that these problems are related to the system's dialogue management and turntaking behavior. We present the results of experiments designed to take a detailed look at the effects of that behavior. Based on the resulting findings, we spell out a set of criteria which lie orthogonal to dialogue quality, but nevertheless constitute an integral part of a more comprehensive view on dialogue felicity as a function of dialogue quality and efficiency. | false | [] | [] | null | null | null | This work has been partially funded by the German Federal Ministry of Research and Technology (BMBF) and by the Klaus Tschira Foundation as part of the SMARTKOM, SMARTWEB, and EDU projects. We would like to thank the International Computer Science Institute in Berkeley for their help in collecting the data especially, Lila Finhill, Thilo Pfau, Adam Janin and Fey Parrill. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ashok-etal-2014-dialogue | https://aclanthology.org/W14-4317 | Dialogue Act Modeling for Non-Visual Web Access | Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers-the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented. | false | [] | [] | null | null | null | Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under award number 1R43EY21962-1A1. We would like to thank Lighthouse Guild International and Dr. William Seiple in particular for helping conduct user studies. | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kuroda-2010-arguments | https://aclanthology.org/Y10-1052 | Arguments for Parallel Distributed Parsing: Toward the Integration of Lexical and Sublexical (Semantic) Parsings | This paper illustrates the idea of parallel distributed parsing (PDP), which allows us to integrate lexical and sublexical analyses. PDP is proposed for providing a new model of efficient, information-rich parses that can remedy the data sparseness problem. 1) The example and explanation were taken from http://nlp.stanford.edu/projects/shallow-parsing.shtml. | false | [] | [] | null | null | null | null | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2021-morphological | https://aclanthology.org/2021.americasnlp-1.10 | Morphological Segmentation for Seneca | This study takes up the task of low-resource morphological segmentation for Seneca, a critically endangered and morphologically complex Native American language primarily spoken in what is now New York State and Ontario. The labeled data in our experiments comes from two sources: one digitized from a publicly available grammar book and the other collected from informal sources. We treat these two sources as distinct domains and investigate different evaluation designs for model selection. The first design abides by standard practices and evaluates models with the in-domain development set, while the second one carries out evaluation using a development domain, or the out-of-domain development set. Across a series of monolingual and cross-linguistic training settings, our results demonstrate the utility of neural encoderdecoder architecture when coupled with multitask learning. | false | [] | [] | null | null | null | We are grateful for the cooperation and support of the Seneca Nation of Indians. This material is based upon work supported by the National Science Foundation under Grant No. 1761562. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rei-etal-2021-mt | https://aclanthology.org/2021.acl-demo.9 | MT-Telescope: An interactive platform for contrastive evaluation of MT systems | We present MT-TELESCOPE, a visualization platform designed to facilitate comparative analysis of the output quality of two Machine Translation (MT) systems. While automated MT evaluation metrics are commonly used to evaluate MT systems at a corpus-level, our platform supports fine-grained segment-level analysis and interactive visualisations that expose the fundamental differences in the performance of the compared systems. MT-TELESCOPE also supports dynamic corpus filtering to enable focused analysis on specific phenomena such as; translation of named entities, handling of terminology, and the impact of input segment length on translation quality. Furthermore, the platform provides a bootstrapped t-test for statistical significance as a means of evaluating the rigor of the resulting system ranking. MT-TELESCOPE is open source 1 , written in Python, and is built around a user friendly and dynamic web interface. Complementing other existing tools, our platform is designed to facilitate and promote the broader adoption of more rigorous analysis practices in the evaluation of MT quality. | false | [] | [] | null | null | null | We are grateful to the Unbabel MT team, specially Austin Matthews and João Alves, for their valuable feedback. This work was supported in part by the P2020 Program through projects MAIA and Unbabel4EU, supervised by ANI under contract numbers 045909 and 042671, respectively. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bartsch-2004-annotating | http://www.lrec-conf.org/proceedings/lrec2004/pdf/361.pdf | Annotating a Corpus for Building a Domain-specific Knowledge Base | The project described in this paper seeks to develop a knowledge base for the domain of data processing in construction-a sub-domain of mechanical engineering-based on a corpus of authentic natural language text. Central in this undertaking is the annotation of the relevant linguistic and conceptual units and structures which are to form the basis of the knowledge base. This paper describes the levels of annotation and the ontology on which the knowledge base is going to be modelled and sketches some of the linguistic relations which are used in building the knowledge base. | false | [] | [] | null | null | null | null | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
martins-etal-2012-structured | https://aclanthology.org/N12-4002 | Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications | This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in statistics, machine learning, and signal processing. The goal of sparsity can be seen in terms of earlier goals of feature selection and therefore model selection (Della | false | [] | [] | null | null | null | This tutorial was enabled by support from the following organizations: | 2012 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
malmasi-dras-2014-chinese | https://aclanthology.org/E14-4019 | Chinese Native Language Identification | We present the first application of Native Language Identification (NLI) to non-English data. Motivated by theories of language transfer, NLI is the task of identifying a writer's native language (L1) based on their writings in a second language (the L2). An NLI system was applied to Chinese learner texts using topicindependent syntactic models to assess their accuracy. We find that models using part-of-speech tags, context-free grammar production rules and function words are highly effective, achieving a maximum accuracy of 71%. Interestingly, we also find that when applied to equivalent English data, the model performance is almost identical. This finding suggests a systematic pattern of cross-linguistic transfer may exist, where the degree of transfer is independent of the L1 and L2. | false | [] | [] | null | null | null | We wish to thank Associate Professor Maolin Wang for providing access to the CLC corpus, and Zhendong Zhao for his assistance. We also thank the reviewers for their constructive feedback. | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
obeidat-etal-2019-description | https://aclanthology.org/N19-1087 | Description-Based Zero-shot Fine-Grained Entity Typing | Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity tying dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data. | false | [] | [] | null | null | null | We thank Jordan University of Science and Technology for Ph.D. fellowship (to R. O.). | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
shirai-etal-1980-trial | https://aclanthology.org/C80-1070 | A Trial of Japanese Text Input System Using Speech Recognition | Since written Japanese texts are expressed by many kinds of characters, input technique is most difficult when Japanese information is processed by computers. Therefore a task-independent Japanese text input system which has a speech analyzer as a main input device and a keyboard as an auxiliary device was designed and it has been implemented. The outline and experience of this system is described in this paper. The system consists of the phoneme discrimination part and the word discrimination part. | false | [] | [] | null | null | null | The authors wish to thank J.Kubota, T. Kobayashi and M.Ohashi for their contributions to designing and developing this system. | 1980 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
cubel-etal-2003-adapting | https://aclanthology.org/2003.eamt-1.6 | Adapting finite-state translation to the TransType2 project | Machine translation can play an important role nowadays, helping communication between people. One of the projects in this field is TransType2. Its purpose is to develop an innovative, interactive machine translation system. TransType2 aims at facilitating the task of producing high-quality translations, and making the translation task more cost-effective for human translators. To achieve this goal, stochastic finite-state transducers are being used. Stochastic finite-state transducers are generated by means of hybrid finite-state and statistical alignment techniques. The Viterbi parsing procedure with stochastic finite-state transducers has been adapted to take into account the source sentence to be translated and the target prefix given by the human translator. Experiments have been carried out with a corpus of printer manuals. The first results showed that with this preliminary prototype, users need to type only 15% of the words instead of the whole translated text. | false | [] | [] | null | null | null | The authors would like to thank the researchers involved in the TT2 project who have developed the methodologies that are presented in this paper. This work has been supported by the European Union under the IST Programme (IST-2001-32091). | 2003 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
libovicky-pecina-2014-tolerant | https://aclanthology.org/W14-3353 | Tolerant BLEU: a Submission to the WMT14 Metrics Task | This paper describes a machine translation metric submitted to the WMT14 Metrics Task. It is a simple modification of the standard BLEU metric using a monolingual alignment of reference and test sentences. The alignment is computed as a minimum weighted maximum bipartite matching of the translated and the reference sentence words with respect to the relative edit distance of the word prefixes and suffixes. The aligned words are included in the n-gram precision computation with a penalty proportional to the matching distance. The proposed tBLEU metric is designed to be more tolerant to errors in inflection, which usually do not affect the understandability of a sentence, and therefore be more suitable for measuring quality of translation into morphologically richer languages. | false | [] | [] | null | null | null | This research has been funded by the Czech Science Foundation (grant n. P103/12/G084) and the EU FP7 project Khresmoi (contract no. 257528). | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
mao-etal-2008-chinese | https://aclanthology.org/I08-4013 | Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields | Chinese word segmentation (CWS), named entity recognition (NER) and part-of-speech tagging are the lexical processing tasks for the Chinese language. This paper describes the work on these tasks done by the France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER's relatively low recall, we exploit non-local features and alleviate the class-imbalanced distribution on the NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing. | false | [] | [] | null | null | null | null | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
egan-2010-cross | https://aclanthology.org/2010.amta-government.5 | Cross Lingual Arabic Blog Alerting (COLABA) | null | false | [] | [] | null | null | null | null | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kaplan-1997-lexical | https://aclanthology.org/W97-1508 | Lexical Resource Reconciliation in the Xerox Linguistic Environment | This paper motivates and describes those aspects of the Xerox Linguistic Environment (XLE) that facilitate the construction of broad-coverage Lexical Functional grammars by incorporating morphological and lexical material from external resources. Because that material can be incorrect, incomplete, or otherwise incompatible with the grammar, mechanisms are provided to correct and augment the external material to suit the needs of the grammar developer. This can be accomplished without direct modification of the incorporated material, which is often infeasible or undesirable. Externally-developed finite-state morphological analyzers are reconciled with grammar requirements by run-time simulation of finite-state calculus operations for combining transducers. Lexical entries derived by automatic extraction from on-line dictionaries or via corpus-analysis tools are incorporated and reconciled by extending the LFG lexicon formalism to allow fine-tuned integration of information from different sources. | false | [] | [] | null | null | null | We would like to thank the participants of the Pargram Parallel Grammar project for raising the issues motivating the work described in this paper, in particular Miriam Butt and Christian Rohrer for identifying the lexicon-related problems, and Tracy Holloway King and María-Eugenia Niño for bringing morphological problems to our attention. We also thank John Maxwell for his contribution towards formulating one of the approaches described, and Max Copperman for his help in implementing the facilities. And we thank Max Copperman, Mary Dalrymple, and John Maxwell for their editorial assistance. | 1997 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
tiedemann-nygaard-2004-opus | http://www.lrec-conf.org/proceedings/lrec2004/pdf/320.pdf | The OPUS Corpus - Parallel and Free: http://logos.uio.no/opus | The OPUS corpus is a growing collection of translated documents collected from the internet. The current version contains about 30 million words in 60 languages. The entire corpus is sentence aligned and it also contains linguistic markup for certain languages. | false | [] | [] | null | null | null | null | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
junczys-dowmunt-grundkiewicz-2016-log | https://aclanthology.org/W16-2378 | Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing | This paper describes the submission of the AMU (Adam Mickiewicz University) team to the Automatic Post-Editing (APE) task of WMT 2016. We explore the application of neural translation models to the APE problem and achieve good results by treating different models as components in a log-linear model, allowing for multiple inputs (the MT-output and the source) that are decoded to the same target language (post-edited translations). A simple string-matching penalty integrated within the log-linear model is used to control for higher faithfulness with regard to the raw machine translation output. To overcome the problem of too little training data, we generate large amounts of artificial data. Our submission improves over the uncorrected baseline on the unseen test set by -3.2% TER and +5.5% BLEU and outperforms any other system submitted to the shared-task by a large margin. | false | [] | [] | null | null | null | This work is partially funded by the National Science Centre, Poland (Grant No. 2014/15/N/ST6/02330). | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
scicluna-strapparava-2020-vroav | https://aclanthology.org/2020.lrec-1.742 | VROAV: Using Iconicity to Visually Represent Abstract Verbs | Abstractness is a feature of semantics that limits our ability to visualise every conceivable concept represented by a word. By tapping into the visual representation of words, we explore the common semantic elements that link words to each other. Visual languages like sign languages have been found to reveal enlightening patterns across signs of similar meanings, pointing towards the possibility of identifying clusters of iconic meanings in words. Thanks to this insight, along with an understanding of verb predicates achieved from VerbNet, this study produced VROAV (Visual Representation of Abstract Verbs): a novel verb classification system based on the shape and movement of verbs. The outcome includes 20 classes of abstract verbs and their visual representations, which were tested for validity in an online survey. Considerable agreement between participants, who judged graphic animations based on representativeness, suggests a positive way forward for this proposal, which may be developed as a language learning aid in educational contexts or as a multimodal language comprehension tool for digital text. | false | [] | [] | null | null | null | null | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
beck-etal-2013-shef | https://aclanthology.org/W13-2241 | SHEF-Lite: When Less is More for Translation Quality Estimation | We describe the results of our submissions to the WMT13 Shared Task on Quality Estimation (subtasks 1.1 and 1.3). Our submissions use the framework of Gaussian Processes to investigate lightweight approaches for this problem. We focus on two approaches, one based on feature selection and another based on active learning. Using only 25 (out of 160) features, our model resulting from feature selection ranked 1st place in the scoring variant of subtask 1.1 and 3rd place in the ranking variant of the subtask, while the active learning model reached 2nd place in the scoring variant using only ∼25% of the available instances for training. These results give evidence that Gaussian Processes achieve the state of the art performance as a modelling approach for translation quality estimation, and that carefully selecting features and instances for the problem can further improve or at least maintain the same performance levels while making the problem less resource-intensive. | false | [] | [] | null | null | null | This work was supported by funding from CNPq/Brazil (No. 237999/2012-9, Daniel Beck) and from the EU FP7- ICT QTLaunchPad project (No. 296347, Kashif Shah and Lucia Specia). | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
aarts-1992-uniform | https://aclanthology.org/C92-4183 | Uniform Recognition for Acyclic Context-Sensitive Grammars is NP-complete | Context-sensitive grammars in which each rule is of the form αZβ → αγβ are acyclic if the associated context-free grammar with the rules Z → γ is acyclic. The problem whether an input string is in the language generated by an acyclic context-sensitive grammar is NP-complete. | false | [] | [] | null | null | null | Acknowledgements I want to thank Peter van Emde Boas, Reinhard Muskens, Mart Trautwein and Theo Jansen for their comments on earlier versions of this paper. | 1992 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
oraby-etal-2017-serious | https://aclanthology.org/W17-5537 | Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog | Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for SARCASTIC and 0.77 F1 for OTHER in forums, and 0.83 F1 for both SARCASTIC and OTHER in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs. 1 Subjects could provide multiple discourse functions for RQs, thus the frequencies do not add to 1. | false | [] | [] | null | null | null | This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program. | 2017 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
puduppully-etal-2019-data | https://aclanthology.org/P19-1195 | Data-to-text Generation with Entity Modeling | Recent approaches to data-to-text generation have shown great promise thanks to the use of large-scale datasets and the application of neural network architectures which are trained end-to-end. These models rely on representation learning to select content appropriately, structure it coherently, and verbalize it grammatically, treating entities as nothing more than vocabulary tokens. In this work we propose an entity-centric neural architecture for data-to-text generation. Our model creates entity-specific representations which are dynamically updated. Text is generated conditioned on the data input and entity memory representations using hierarchical attention at each time step. We present experiments on the ROTOWIRE benchmark and a (five times larger) new dataset on the baseball domain which we create. Our results show that the proposed model outperforms competitive baselines in automatic and human evaluation. 1 | false | [] | [] | null | null | null | We would like to thank Adam Lopez for helpful discussions. We acknowledge the financial support of the European Research Council (Lapata; award number 681760). | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
p-r-etal-2017-hitachi | https://aclanthology.org/S17-2176 | Hitachi at SemEval-2017 Task 12: System for temporal information extraction from clinical notes | This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of the 2017 Clinical TempEval challenge. Clinical TempEval 2017 addressed the problem of temporal reasoning in the clinical domain by providing annotated clinical notes, pathology and radiology reports in line with Clinical TempEval challenges 2015/16, across two different evaluation phases focusing on cross domain adaptation. Our team focused on subtasks involving extraction of temporal spans and relations, for which the developed systems showed average F-scores of 0.45 and 0.47 across the two phases of evaluations. | true | [] | [] | Good Health and Well-Being | null | null | We thank Mayo clinic and Clinical TempEval organizers for providing access to THYME corpus and other helps provided for our participation in the competition. | 2017 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false
slocum-1988-morphological | https://aclanthology.org/A88-1031 | Morphological Processing in the Nabu System | Processing system under development in the Human Interface Laboratory at MCC, for shareholder companies. Its morphological component is designed to perform a number of different functions. This has been used to produce a complete analyzer for Arabic; very substantial analyzers for English, French, German, and Spanish; and small collections of rules for Russian and Japanese. In addition, other functions have been implemented for several of these languages. In this paper we discuss our philosophy, which constrained our design decisions; elaborate some specific functions a morphological component should support; survey some competing approaches; describe our technique, which provides the necessary functionality while meeting the other design constraints; and support our approach by characterizing our success in developing/testing processors for various combinations of language and function. | false | [] | [] | null | null | null | null | 1988 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
more-tsarfaty-2016-data | https://aclanthology.org/C16-1033 | Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and Universal Dependencies | Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks. | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
moss-1990-growing | https://aclanthology.org/1990.tc-1.3 | The growing range of document preparation systems | What I intend to do today is to look at the complete document preparation scene from the simplest systems to the most complex. I will examine how each level is relevant to the translator and how each level relates to each other. I will finish by examining why there have been major leaps forward in document preparation systems for translators in the last couple of years and the likely way forward. | false | [] | [] | null | null | null | null | 1990 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
gu-cercone-2006-segment | https://aclanthology.org/P06-1061 | Segment-Based Hidden Markov Models for Information Extraction | Hidden Markov models (HMMs) are powerful statistical models that have found successful applications in Information Extraction (IE). In current approaches to applying HMMs to IE, an HMM is used to model text at the document level. This modelling might cause undesired redundancy in extraction in the sense that more than one filler is identified and extracted. We propose to use HMMs to model text at the segment level, in which the extraction process consists of two steps: a segment retrieval step followed by an extraction step. In order to retrieve extractionrelevant segments from documents, we introduce a method to use HMMs to model and retrieve segments. Our experimental results show that the resulting segment HMM IE system not only achieves near zero extraction redundancy, but also has better overall extraction performance than traditional document HMM IE systems. | false | [] | [] | null | null | null | null | 2006 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schmidtke-groves-2019-automatic | https://aclanthology.org/W19-6729 | Automatic Translation for Software with Safe Velocity | We report on a model for machine translation (MT) of software, without review, for the Microsoft Office product range. We have deployed an automated localisation workflow, known as Automated Translation (AT) for software, which identifies resource strings as suitable and safe for MT without post-editing. The model makes use of string profiling, user impact assessment, MT quality estimation, and customer feedback mechanisms. This allows us to introduce automatic translation at a safe velocity, with a minimal risk to customer satisfaction. Quality constraints limit the volume of MT in relation to human translation, with published low-quality MT limited to not exceed 10% of total word count. The AT for software model has been deployed into production for most of the Office product range, for 37 languages. It allows us to MT and publish without review over 20% of the word count for some languages and products. To date, we have processed more than 1 million words with this model, and so far have not seen any measurable negative impact on customer satisfaction. | false | [] | [] | null | null | null | The AT for software model was developed by the Office GSX (Global Service Experience) team in the Microsoft European Development Centre, from 2017 to 2018. The following people were involved; Siobhan Ashton, Antonio Benítez Lopez, Brian Comerford, Gemma Devine, Vincent Gadani, Craig Jeffares, Sankar Kumar Indraganti, Anton Masalovich, David Moran, Glen Poor and Simone Van Bruggen, in addition to the authors. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chu-qian-2001-locating | https://aclanthology.org/O01-2003 | Locating Boundaries for Prosodic Constituents in Unrestricted Mandarin Texts | This paper proposes a three-tier prosodic hierarchy, including prosodic word, intermediate phrase and intonational phrase tiers, for Mandarin that emphasizes the use of the prosodic word instead of the lexical word as the basic prosodic unit. Both the surface difference and perceptual difference show that this is helpful for achieving high naturalness in text-to-speech conversion. Three approaches, the basic CART approach, the bottom-up hierarchical approach and the modified hierarchical approach, are presented for locating the boundaries of three prosodic constituents in unrestricted Mandarin texts. Two sets of features are used in the basic CART method: one contains syntactic phrasal information and the other does not. The one with syntactic phrasal information results in about a 1% increase in accuracy and an 11% decrease in error-cost. The performance of the modified hierarchical method produces the highest accuracy, 83%, and lowest error cost when no syntactic phrasal information is provided. It shows advantages in detecting the boundaries of intonational phrases at locations without breaking punctuation. 71.1% precision and 52.4% recall are achieved. Experiments on acceptability reveal that only 26% of the mis-assigned break indices are real infelicitous errors, and that the perceptual difference between the automatically assigned break indices and the manually annotated break indices are small. | false | [] | [] | null | null | null | The authors thank Dr. Ming Zhou for providing the block-based robust dependency parser as a toolkit for use in this study. Thanks go to everybody who took part in the perceptual test. The authors are especially grateful to all the reviewers for their valuable remarks and suggestions. | 2001 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
chakrabarty-etal-2020-r | https://aclanthology.org/2020.acl-main.711 | R\^3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge | We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria. Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time. | false | [] | [] | null | null | null | This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032 and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank Christopher Hidey, John Kropf, Anusha Bala and Christopher Robert Kedzie for useful discussions. The authors also thank members of PLUSLab at the University Of Southern California and the anonymous reviewers for helpful comments. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
wan-xiao-2008-collabrank | https://aclanthology.org/C08-1122 | CollabRank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction | Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters. | false | [] | [] | null | null | null | This work was supported by the National Science Foundation of China (No.60703064), the Research Fund for the Doctoral Program of Higher Education of China (No.20070001059) | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
stallard-1989-unification | https://aclanthology.org/H89-2006 | Unification-Based Semantic Interpretation in the BBN Spoken Language System | This paper describes the current state of work on unification-based semantic interpretation in HARC (for Hear and Recognize Continuous speech), the BBN Spoken Language System. It presents the implementation of an integrated syntax/semantics grammar written in a unification formalism similar to Definite Clause Grammar. This formalism is described, and its use in solving a number of semantic interpretation problems is shown. These include, among others, the encoding of semantic selectional restrictions and the representation of relational nouns and their modifiers. | false | [] | [] | null | null | null | The work reported here was supported by the Advanced Research Projects Agency and was monitored by the Office of Naval Research under Contract No. 00014-89-C-0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government. The author would like to thank Andy Haas, who was the original impetus behind the change to a unification- | 1989 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
frank-petty-2020-sequence | https://aclanthology.org/2020.crac-1.16 | Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora | Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase? | false | [] | [] | null | null | null | For helpful comments and discussion of this work, we are grateful to Shayna Sragovicz, Noah Amsel, Tal Linzen and the members of the Computational Linguistics at Yale (CLAY) and the JHU Computation and Psycholinguistics labs. This work has been supported in part by NSF grant BCS-1919321 and a Yale College Summer Experience Award. Code for replicating these experiments can be found on the Computational Linguistics at the CLAY Lab GitHub transductions and logos repositories. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hoang-koehn-2009-improving | https://aclanthology.org/E09-1043 | Improving Mid-Range Re-Ordering Using Templates of Factors | We extend the factored translation model (Koehn and Hoang, 2007) to allow translations of longer phrases composed of factors such as POS and morphological tags to act as templates for the selection and reordering of surface phrase translation. We also reintroduce the use of alignment information within the decoder, which forms an integral part of decoding in the Alignment Template System (Och, 2002), into phrase-based decoding. Results show an increase in translation performance of up to 1.0% BLEU for out-of-domain French-English translation. We also show how this method compares and relates to lexicalized reordering. | false | [] | [] | null | null | null | This work was supported by the EuroMatrix project funded by the European Commission (6th Framework Programme) and made use of the resources provided by the Edinburgh Compute and Data Facility (http://www.ecdf.ed.ac.uk/). The ECDF is partially supported by the eDIKT initiative (http://www.edikt.org.uk/). | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
volodina-etal-2021-dalaj | https://aclanthology.org/2021.nlp4call-1.3 | DaLAJ -- a dataset for linguistic acceptability judgments for Swedish | We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0, four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing. | false | [] | [] | null | null | null | This work has been supported by Nationella Språkbanken, jointly funded by its 10 partner institutions and the Swedish Research Council (dnr 2017-00626), as well as partly supported by a grant from the Swedish Riksbankens Jubileumsfond (SweLL, research infrastructure for Swedish as a second language, dnr IN16-0464:1). | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
hassan-etal-2011-identifying | https://aclanthology.org/P11-2104 | Identifying the Semantic Orientation of Foreign Words | We present a method for identifying the positive or negative semantic orientation of foreign words. Identifying the semantic orientation of words has numerous applications in the areas of text classification, analysis of product review, analysis of responses to surveys, and mining online discussions. Identifying the semantic orientation of English words has been extensively studied in literature. Most of this work assumes the existence of resources (e.g. Wordnet, seeds, etc) that do not exist in foreign languages. In this work, we describe a method based on constructing a multilingual network connecting English and foreign words. We use this network to identify the semantic orientation of foreign words based on connection between words in the same language as well as multilingual connections. The method is experimentally tested using a manually labeled set of positive and negative words and has shown very promising results. | false | [] | [] | null | null | null | This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the U.S. Army Research Lab. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government. | 2011 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
zhang-chai-2021-hierarchical | https://aclanthology.org/2021.findings-acl.368 | Hierarchical Task Learning from Language Instructions with Unified Transformers and Self-Monitoring | Despite recent progress, learning new tasks through language instructions remains an extremely challenging problem. On the ALFRED benchmark for task learning, the published state-of-the-art system only achieves a task success rate of less than 10% in an unseen environment, compared to the human performance of over 90%. To address this issue, this paper takes a closer look at task learning. In a departure from a widely applied end-to-end architecture, we decomposed task learning into three sub-problems: sub-goal planning, scene navigation, and object manipulation; and developed a model HiTUT (stands for Hierarchical Tasks via Unified Transformers) that addresses each sub-problem in a unified manner to learn a hierarchical task structure. On the ALFRED benchmark, HiTUT has achieved the best performance with a remarkably higher generalization ability. In the unseen environment, HiTUT achieves over 160% performance gain in success rate compared to the previous state of the art. The explicit representation of task structures also enables an in-depth understanding of the nature of the problem and the ability of the agent, which provides insight for future benchmark development and evaluation. | false | [] | [] | null | null | null | This work is supported by the National Science Foundation (IIS-1949634). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
tambouratzis-etal-2012-evaluating | https://aclanthology.org/C12-1157 | Evaluating the Translation Accuracy of a Novel Language-Independent MT Methodology | The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures. | false | [] | [] | null | null | null | null | 2012 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rohrer-1986-linguistic | https://aclanthology.org/C86-1084 | Linguistic Bases For Machine Translation | My aim in organizing this panel is to stimulate the discussion between researchers working on MT and linguists interested in formal syntax and semantics. I am convinced that a closer cooperation will be fruitful for both sides. I will be talking about experimental MT or MT as a research project and not as a development project.[1] A. The relation between MT and theoretical linguistics Researchers in MT do not work with linguistic theories which are 'en vogue' today. The two special issues on MT of the journal Computational Linguistics (CL 1985) contain eight contributions of the leading teams. In the bibliography of these articles you don't find names like Chomsky, Montague, Bresnan, Gazdar, Kamp, Barwise, Perry etc.[2] Syntactic theories like GB, GPSG, LFG are not mentioned (with one exception: R. Johnson et al. (1985, p.165) praise LFG for its 'perspicuous notation', but do not (or not yet) incorporate ideas from LFG into their theory of MT). There are no references whatsoever to recent semantic theories. | false | [] | [] | null | null | null | null | 1986 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
gabbard-kulick-2008-construct | https://aclanthology.org/P08-2053 | Construct State Modification in the Arabic Treebank | Earlier work in parsing Arabic has speculated that attachment to construct state constructions decreases parsing performance. We make this speculation precise and define the problem of attachment to construct state constructions in the Arabic Treebank. We present the first statistics that quantify the problem. We provide a baseline and the results from a first attempt at a discriminative learning procedure for this task, achieving 80% accuracy. | false | [] | [] | null | null | null | We thank Mitch Marcus, Ann Bies, Mohamed Maamouri, and the members of the Arabic Treebank project for helpful discussions. This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract Nos. HR0011-06-C-0022 and HR0011-06-1-0003. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hedeland-etal-2018-introducing | https://aclanthology.org/L18-1370 | Introducing the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation | The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so-called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre, the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD), and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields. | false | [] | [] | null | null | null | null | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
wong-etal-2021-cross | https://aclanthology.org/2021.acl-long.548 | Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability | When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012). Metrics such as Krippendorff's alpha or Cohen's kappa are typically required to be above a threshold of 0.6 (Landis and Koch, 1977). These absolute thresholds are unreasonable for crowdsourced data from annotators with high cultural and training variances, especially on subjective topics. We present a new alternative to interpreting IRR that is more empirical and contextualized. It is based upon benchmarking IRR against baseline measures in a replication, one of which is a novel cross-replication reliability (xRR) measure based on Cohen's (1960) kappa. We call this approach the xRR framework. We opensource a replication dataset of 4 million human judgements of facial expressions and analyze it with the proposed framework. We argue this framework can be used to measure the quality of crowdsourced datasets. | false | [] | [] | null | null | null | We like to thank Gautam Prasad and Alan Cowen for their work on collecting and sharing the IRep dataset and opensourcing it. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gandhe-traum-2010-ive | https://aclanthology.org/W10-4345 | I've said it before, and I'll say it again: An empirical investigation of the upper bound of the selection approach to dialogue | We perform a study of existing dialogue corpora to establish the theoretical maximum performance of the selection approach to simulating human dialogue behavior in unseen dialogues. This maximum is the proportion of test utterances for which an exact or approximate match exists in the corresponding training corpus. The results indicate that some domains seem quite suitable for a corpus-based selection approach, with over half of the test utterances having been seen before in the corpus, while other domains show much more novelty compared to previous dialogues. | false | [] | [] | null | null | null | This work has been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. We would like to thank Ron Artstein and others at ICT for compiling the ICT Corpora used in this study. | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
celano-2020-gradient | https://aclanthology.org/2020.lt4hala-1.19 | A Gradient Boosting-Seq2Seq System for Latin POS Tagging and Lemmatization | The paper presents the system used in the EvaLatin shared task to POS tag and lemmatize Latin. It consists of two components. A gradient boosting machine (LightGBM) is used for POS tagging, mainly fed with pre-computed word embeddings of a window of seven contiguous tokens-the token at hand plus the three preceding and following ones-per target feature value. Word embeddings are trained on the texts of the Perseus Digital Library, Patrologia Latina, and Biblioteca Digitale di Testi Tardo Antichi, which together comprise a high number of texts of different genres from the Classical Age to Late Antiquity. Word forms plus the outputted POS labels are used to feed a Seq2Seq algorithm implemented in Keras to predict lemmas. The final shared-task accuracies measured for Classical Latin texts are in line with state-of-the-art POS taggers (∼96%) and lemmatizers (∼95%). | false | [] | [] | null | null | null | null | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
pluss-piwek-2016-measuring | https://aclanthology.org/C16-1181 | Measuring Non-cooperation in Dialogue | This paper introduces a novel method for measuring non-cooperation in dialogue. The key idea is that linguistic non-cooperation can be measured in terms of the extent to which dialogue participants deviate from conventions regarding the proper introduction and discharging of conversational obligations (e.g., the obligation to respond to a question). Previous work on non-cooperation has focused mainly on non-linguistic task-related non-cooperation or modelled non-cooperation in terms of special rules describing non-cooperative behaviours. In contrast, we start from rules for normal/correct dialogue behaviour-i.e., a dialogue game-which in principle can be derived from a corpus of cooperative dialogues, and provide a quantitative measure for the degree to which participants comply with these rules. We evaluated the model on a corpus of political interviews, with encouraging results. The model predicts accurately the degree of cooperation for one of the two dialogue game roles (interviewer) and also the relative cooperation for both roles (i.e., which interlocutor in the conversation was most cooperative). Being able to measure cooperation has applications in many areas from the analysis-manual, semi and fully automatic-of natural language interactions to human-like virtual personal assistants, tutoring agents, sophisticated dialogue systems, and role-playing virtual humans. | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sun-etal-2020-colake | https://aclanthology.org/2020.coling-main.327 | CoLAKE: Contextualized Language and Knowledge Embedding | With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation. (* Work done during internship at Amazon Shanghai AI Lab.) | false | [] | [] | null | null | null | null | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
harkema-etal-2004-large-scale | https://aclanthology.org/W04-3110 | A Large Scale Terminology Resource for Biomedical Text Processing | In this paper we discuss the design, implementation, and use of Termino, a large scale terminological resource for text processing. Dealing with terminology is a difficult but unavoidable task for language processing applications, such as Information Extraction in technical domains. Complex, heterogeneous information must be stored about large numbers of terms. At the same time term recognition must be performed in realistic times. Termino attempts to reconcile this tension by maintaining a flexible, extensible relational database for storing terminological information and compiling finite state machines from this database to do term lookup. While Termino has been developed for biomedical applications, its general design allows it to be used for term processing in any domain. | true | [] | [] | Good Health and Well-Being | null | null | null | 2004 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
abercrombie-batista-navarro-2020-parlvote | https://aclanthology.org/2020.lrec-1.624 | ParlVote: A Corpus for Sentiment Analysis of Political Debates | Debate transcripts from the UK Parliament contain information about the positions taken by politicians towards important topics, but are difficult for people to process manually. While sentiment analysis of debate speeches could facilitate understanding of the speakers' stated opinions, datasets currently available for this task are small when compared to the benchmark corpora in other domains. We present ParlVote, a new, larger corpus of parliamentary debate speeches for use in the evaluation of sentiment analysis systems for the political domain. We also perform a number of initial experiments on this dataset, testing a variety of approaches to the classification of sentiment polarity in debate speeches. These include a linear classifier as well as a neural network trained using a transformer word embedding model (BERT), and fine-tuned on the parliamentary speeches. We find that in many scenarios, a linear classifier trained on a bag-of-words text representation achieves the best results. However, with the largest dataset, the transformer-based model combined with a neural classifier provides the best performance. We suggest that further experimentation with classification models and observations of the debate content and structure are required, and that there remains much room for improvement in parliamentary sentiment analysis. | true | [] | [] | Peace, Justice and Strong Institutions | null | null | The authors would like to thank the anonymous reviewers for their helpful comments. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
kibble-van-deemter-2000-coreference | http://www.lrec-conf.org/proceedings/lrec2000/pdf/100.pdf | Coreference Annotation: Whither? | The terms coreference and anaphora tend to be used inconsistently and interchangeably in much empirically-oriented work in NLP, and this threatens to lead to incoherent analyses of texts and arbitrary loss of information. This paper discusses the role of coreference annotation in Information Extraction, focussing on the coreference scheme defined for the MUC-7 evaluation exercise. We point out deficiencies in that scheme and make some suggestions towards a new annotation philosophy. | false | [] | [] | null | null | null | We are grateful to Lynette Hirschman and Breck Baldwin for their very constructive responses to a presentation on the topic of this paper (van Deemter and Kibble, 1999). Rodger Kibble's participation in this research was funded by the UK EPSRC as part of the GNOME (GR/L51126) and RAGS (GR/L77102) projects. | 2000 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sikdar-gamback-2016-feature | https://aclanthology.org/W16-3922 | Feature-Rich Twitter Named Entity Recognition and Classification | Twitter named entity recognition is the process of identifying proper names and classifying them into some predefined labels/categories. The paper introduces a Twitter named entity system using a supervised machine learning approach, namely Conditional Random Fields. A large set of different features was developed and the system was trained using these. The Twitter named entity task can be divided into two parts: i) Named entity extraction from tweets and ii) Twitter name classification into ten different types. For Twitter named entity recognition on unseen test data, our system obtained the second highest F1 score in the shared task: 63.22%. The system performance on the classification task was worse, with an F1 measure of 40.06% on unseen test data, which was the fourth best of the ten systems participating in the shared task. | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
barthelemy-2009-karamel | https://aclanthology.org/W09-0802 | The Karamel System and Semitic Languages: Structured Multi-Tiered Morphology | Karamel is a system for finite-state morphology which is multi-tape and uses a typed Cartesian product to relate tapes in a structured way. It implements statically compiled feature structures. Its language allows the use of regular expressions and Generalized Restriction rules to define multi-tape transducers. Both simultaneous and successive application of local constraints are possible. This system is interesting for describing rich and structured morphologies such as the morphology of Semitic languages. | false | [] | [] | null | null | null | null | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sanchez-badeka-2014-linguistic | https://aclanthology.org/2014.amta-users.1 | Linguistic QA for MT of user-generated content at eBay | null | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
skadina-pinnis-2017-nmt | https://aclanthology.org/I17-1038 | NMT or SMT: Case Study of a Narrow-domain English-Latvian Post-editing Project | The recent technological shift in machine translation from statistical machine translation (SMT) to neural machine translation (NMT) raises the question of the strengths and weaknesses of NMT. In this paper, we present an analysis of NMT and SMT systems' outputs from narrow domain English-Latvian MT systems that were trained on a rather small amount of data. We analyze post-edits produced by professional translators and manually annotated errors in these outputs. Analysis of post-edits allowed us to conclude that both approaches are comparably successful, allowing for an increase in translators' productivity, with the NMT system showing slightly worse results. Through the analysis of annotated errors, we found that NMT translations are more fluent than SMT translations. However, errors related to accuracy, especially, mistranslation and omission errors, occur more often in NMT outputs. The word form errors, that characterize the morphological richness of Latvian, are frequent for both systems, but slightly fewer in NMT outputs. | false | [] | [] | null | null | null | We would like to thank Tilde's Localization Department for the hard work they did to prepare material for the analysis presented in this paper. The work within the QT21 project has received funding from the European Union under grant agreement n° 645452. The research has been supported by the ICT Competence Centre (www.itkc.lv) within the project "2.2. Prototype of a Software and Hardware Platform for Integration of Machine Translation in Corporate Infrastructure" of EU Structural funds, ID n° 1.2.1.1/16/A/007. | 2017 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
blache-2014-challenging | https://aclanthology.org/W14-0501 | Challenging incrementality in human language processing: two operations for a cognitive architecture | The description of language complexity and the cognitive load related to the different linguistic phenomena is a key issue for the understanding of language processing. Many studies have focused on the identification of specific parameters that can lead to a simplification or on the contrary to a complexification of the processing (e.g. the different difficulty models proposed in (Gibson, 2000), (Warren and Gibson, 2002), (Hawkins, 2001)). Similarly, different simplification factors can be identified, such as the notion of activation, relying on syntactic priming effects making it possible to predict (or activate) a word (Vasishth, 2003). Several studies have shown that complexity factors are cumulative (Keller, 2005), but can be offset by simplification (Blache et al., 2006). It is therefore necessary to adopt a global point of view of language processing, explaining the interplay between positive and negative cumulativity, in other words compensation effects. From the computational point of view, some models can account more or less explicitly for these phenomena. This is the case of the Surprisal index (Hale, 2001), offering for each word an assessment of its integration costs into the syntactic structure. This evaluation is done starting from the probability of the possible solutions. On their side, symbolic approaches also provide an estimation of the activation degree, depending on the number and weight of syntactic relations to the current word (Blache et al., 2006); (Blache, 2013). | false | [] | [] | null | null | null | This work, carried out within the Labex BLRI (ANR-11-LABX-0036), has benefited from support from the French government, managed by the French National Agency for Research (ANR), under the project title Investments of the Future A*MIDEX (ANR-11-IDEX-0001-02). | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false
trojahn-etal-2008-framework | http://www.lrec-conf.org/proceedings/lrec2008/pdf/270_paper.pdf | A Framework for Multilingual Ontology Mapping | In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology. | false | [] | [] | null | null | null | null | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
litman-1986-linguistic | https://aclanthology.org/P86-1033 | Linguistic Coherence: A Plan-Based Alternative | To fully understand a sequence of utterances, one must be able to infer implicit relationships between the utterances. Although the identification of sets of utterance relationships forms the basis for many theories of discourse, the formalization and recognition of such relationships has proven to be an extremely difficult computational task. This paper presents a plan-based approach to the representation and recognition of implicit relationships between utterances. Relationships are formulated as discourse plans, which allows their representation in terms of planning operators and their computation via a plan recognition process. By incorporating complex inferential processes relating utterances into a plan-based framework, a formalization and computability not available in the earlier works is provided. | false | [] | [] | null | null | null | I would like to thank Julia Hirschberg, Marcia Derr, Mark Jones, Mark Kahrs, and Henry Kautz for their helpful comments on drafts of this paper. | 1986 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
dragoni-2018-neurosent | https://aclanthology.org/S18-1013 | NEUROSENT-PDI at SemEval-2018 Task 1: Leveraging a Multi-Domain Sentiment Model for Inferring Polarity in Micro-blog Text | This paper describes the NeuroSent system that participated in SemEval 2018 Task 1. Our system takes a supervised approach that builds on neural networks and word embeddings. Word embeddings were built by starting from a repository of user generated reviews. Thus, they are specific to sentiment analysis tasks. Then, tweets are converted into the corresponding vector representation and given as input to the neural network with the aim of learning the different semantics contained in each emotion taken into account by the SemEval task. The output layer has been adapted based on the characteristics of each subtask. Preliminary results obtained on the provided training set are encouraging for pursuing this direction of investigation. | true | [] | [] | Peace, Justice and Strong Institutions | null | null | null | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
brixey-etal-2017-shihbot | https://aclanthology.org/W17-5544 | SHIHbot: A Facebook chatbot for Sexual Health Information on HIV/AIDS | We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network. | true | [] | [] | Good Health and Well-Being | null | null | Many thanks to Professors Milind Tambe and Eric Rice for helping to develop this work and for promoting artificial intelligence for social good. The first, seventh and eighth authors were supported in part by the U.S. Army; statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. | 2017 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
do-carmo-2019-edit | https://aclanthology.org/W19-7001 | Edit distances do not describe editing, but they can be useful for translation process research | Translation process research (TPR) aims at describing what translators do, and one of the technical dimensions of translators' work is editing (applying detailed changes to text). In this presentation, we will analyze how different methods for process data collection describe editing. We will review keyloggers used in typical TPR applications, track changes used by word processors, and edit rates based on estimation of edit distances. The purpose of this presentation is to discuss the limitations of these methods when describing editing behavior, and to incentivize researchers to look for ways to present process data in simplified formats, closer to those that describe product data. | false | [] | [] | null | null | null | This Project has received funding from the European Union's Horizon 2020 research and innovation programme under the EDGE COFUND Marie Skłodowska-Curie Grant Agreement no. 713567. This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number 13/RC/2077. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |