Dataset schema (field, type, observed length range or number of classes):

Field            Type      Details
ID               string    length 11 to 54
url              string    length 33 to 64
title            string    length 11 to 184
abstract         string    length 17 to 3.87k
label_nlp4sg     bool      2 classes
task             sequence
method           sequence
goal1            string    9 classes
goal2            string    9 classes
goal3            string    1 class
acknowledgments  string    length 28 to 1.28k
year             string    length 4
sdg1             bool      1 class
sdg2             bool      1 class
sdg3             bool      2 classes
sdg4             bool      2 classes
sdg5             bool      2 classes
sdg6             bool      1 class
sdg7             bool      1 class
sdg8             bool      2 classes
sdg9             bool      2 classes
sdg10            bool      2 classes
sdg11            bool      2 classes
sdg12            bool      1 class
sdg13            bool      2 classes
sdg14            bool      1 class
sdg15            bool      1 class
sdg16            bool      2 classes
sdg17            bool      2 classes
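Records with this schema can be consumed programmatically. Below is a minimal sketch in plain Python (no external libraries); the two inline records are abridged copies of rows from this file, with field names taken from the schema above, and are illustrative rather than an official loader:

```python
# Filter records with this schema for NLP4SG-labeled papers and
# list which SDG flags are set. The records are abridged examples
# mirroring two rows of this dataset.
records = [
    {
        "ID": "surana-chinagundi-2022-ginius",
        "label_nlp4sg": True,
        "goal1": "Good Health and Well-Being",
        "year": "2022",
        **{f"sdg{i}": (i == 3) for i in range(1, 18)},
    },
    {
        "ID": "candito-constant-2014-strategies",
        "label_nlp4sg": False,
        "goal1": None,
        "year": "2014",
        **{f"sdg{i}": False for i in range(1, 18)},
    },
]

# Keep only positively labeled papers and collect their active SDGs.
positives = [r for r in records if r["label_nlp4sg"]]
for r in positives:
    active_sdgs = [i for i in range(1, 18) if r[f"sdg{i}"]]
    print(r["ID"], r["goal1"], active_sdgs)
```

The same filter applies unchanged if the rows are first parsed from JSON lines or loaded through a dataframe library.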
ID: candito-constant-2014-strategies
url: https://aclanthology.org/P14-1070
title: Strategies for Contiguous Multiword Expression Analysis and Dependency Parsing
abstract: In this paper, we investigate various strategies to predict both syntactic dependency parsing and contiguous multiword expression (MWE) recognition, testing them on the dependency version of French Treebank (Abeillé and Barrier, 2004), as instantiated in the SPMRL Shared Task (Seddah et al., 2013). Our work focuses on using an alternative representation of syntactically regular MWEs, which captures their syntactic internal structure. We obtain a system with comparable performance to that of previous works on this dataset, but which predicts both syntactic dependencies and the internal structure of MWEs. This can be useful for capturing the various degrees of semantic compositionality of MWEs.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2014
sdg1-sdg17: all false
ID: ws-1998-treatment
url: https://aclanthology.org/W98-0600
title: The Computational Treatment of Nominals
abstract: iii Toni Badia and Roser Sauri The Representation of Syntactically Unexpressed Complements to Nouns .........
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 1998
sdg1-sdg17: all false
ID: pavalanathan-eisenstein-2015-confounds
url: https://aclanthology.org/D15-1256
title: Confounds and Consequences in Geotagged Twitter Data
abstract: Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: Thanks to the anonymous reviewers for their useful and constructive feedback on our submission. The following members of the Georgia Tech Computational Linguistics Laboratory offered feedback throughout the research process: Naman Goyal, Yangfeng Ji, Vinodh Krishan, Ana Smith, Yijie Wang, and Yi Yang. This research was supported by the National Science Foundation under awards IIS-1111142 and RI-1452443, by the National Institutes of Health under award number R01GM112697-01, and by the Air Force Office of Scientific Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of these sponsors.
year: 2015
sdg1-sdg17: all false
ID: chen-bunescu-2017-exploration
url: https://aclanthology.org/I17-2075
title: An Exploration of Data Augmentation and RNN Architectures for Question Ranking in Community Question Answering
abstract: The automation of tasks in community question answering (cQA) is dominated by machine learning approaches, whose performance is often limited by the number of training examples. Starting from a neural sequence learning approach with attention, we explore the impact of two data augmentation techniques on question ranking performance: a method that swaps reference questions with their paraphrases, and training on examples automatically selected from external datasets. Both methods are shown to lead to substantial gains in accuracy over a strong baseline. Further improvements are obtained by changing the model architecture to mirror the structure seen in the data.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank the anonymous reviewers for their helpful comments. This work was supported by an allocation of computing time from the Ohio Supercomputer Center.
year: 2017
sdg1-sdg17: all false
ID: surana-chinagundi-2022-ginius
url: https://aclanthology.org/2022.ltedi-1.43
title: giniUs @LT-EDI-ACL2022: Aasha: Transformers based Hope-EDI
abstract: This paper describes team giniUs' submission to the Hope Speech Detection for Equality, Diversity and Inclusion Shared Task organised by LT-EDI ACL 2022. We have fine-tuned the RoBERTa-large pre-trained model and extracted the last four decoder layers to build a binary classifier. Our best result on the leaderboard achieves a weighted F1 score of 0.86 and a Macro F1 score of 0.51 for English. We rank fourth in the English task. We have open-sourced our code implementations on GitHub to facilitate easy reproducibility by the scientific community.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: null
goal3: null
acknowledgments: null
year: 2022
sdg flags: sdg3 true; all others false
ID: okabe-etal-2005-query
url: https://aclanthology.org/H05-1121
title: Query Expansion with the Minimum User Feedback by Transductive Learning
abstract: Query expansion techniques generally select new query terms from a set of top ranked documents. Although a user's manual judgment of those documents would much help to select good expansion terms, it is difficult to get enough feedback from users in practical situations. In this paper we propose a query expansion technique which performs well even if a user notifies just a relevant document and a non-relevant document. In order to tackle this specific condition, we introduce two refinements to a well-known query expansion technique. One is application of a transductive learning technique in order to increase relevant documents. The other is a modified parameter estimation method which laps the predictions by multiple learning trials and tries to differentiate the importance of candidate terms for expansion in relevant documents. Experimental results show that our technique outperforms some traditional query expansion methods in several evaluation measures.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2005
sdg1-sdg17: all false
ID: wang-etal-2012-exploiting
url: https://aclanthology.org/C12-2128
title: Exploiting Discourse Relations for Sentiment Analysis
abstract: The overall sentiment of a text is critically affected by its discourse structure. By splitting a text into text spans with different discourse relations, we automatically train the weights of different relations in accordance with their importance, and then make use of discourse structure knowledge to improve sentiment classification. In this paper, we utilize explicit connectives to predict discourse relations, and then propose several methods to incorporate discourse relation knowledge into the task of sentiment analysis. All our methods integrating discourse relations perform better than the baseline methods, validating the effectiveness of using discourse relations in Chinese sentiment analysis. We also automatically find out the most influential discourse relations and connectives in sentiment analysis.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2012
sdg1-sdg17: all false
ID: nimb-2004-corpus
url: http://www.lrec-conf.org/proceedings/lrec2004/pdf/284.pdf
title: A Corpus-based Syntactic Lexicon for Adverbs
abstract: A word class often neglected in the field of NLP resources, namely adverbs, has lately been described in a computational lexicon produced at CST as one of the results of a Ph.D.-project. The adverb lexicon, which is integrated in the Danish STO lexicon, gives detailed syntactic information on the type of modification and position, as well as on other syntactic properties of approx. 800 Danish adverbs. One of the aims of the lexicon has been to establish a clear distinction between syntactic and semantic information: where other lexicons often generalize over the syntactic behavior of semantic classes of adverbs, every adverb is described with respect to its proper syntactic behavior in a text corpus, revealing very individual syntactic properties. Syntactic information on adverbs is needed in NLP systems generating text to ensure correct placing in the phrase they modify. Also in systems analyzing text, this information is needed in order to attach the adverbs to the right node in the syntactic parse trees. Within the field of linguistic research, several results can be deduced from the lexicon, e.g. knowledge of syntactic classes of Danish adverbs.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2004
sdg1-sdg17: all false
ID: muis-etal-2018-low
url: https://aclanthology.org/C18-1007
title: Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort
abstract: The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in the event-type detection task (the task of classifying [parts of] documents into a fixed set of labels), they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We acknowledge NIST for coordinating the SF type evaluation and providing the test data. NIST serves to coordinate the evaluations in order to support research and to help advance the state-of-the-art. NIST evaluations are not viewed as a competition, and such results reported by NIST are not to be construed, or represented, as endorsements of any participant's system, or as official findings on the part of NIST or the U.S. Government. We thank Lori Levin for the inputs for an earlier version of this paper. This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114.
year: 2018
sdg1-sdg17: all false
ID: zhong-etal-2019-closer
url: https://aclanthology.org/D19-5410
title: A Closer Look at Data Bias in Neural Extractive Summarization Models
abstract: In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models. Specifically, we first propose several properties of datasets, which matter for the generalization of summarization models. Then we build the connection between priors residing in datasets and model designs, analyzing how different properties of datasets influence the choices of model structure design and training methods. Finally, by taking a typical dataset as an example, we rethink the process of the model design based on the experience of the above analysis. We demonstrate that when we have a deep understanding of the characteristics of datasets, a simple approach can bring significant improvements to the existing state-of-the-art model.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2019
sdg1-sdg17: all false
ID: karamanolakis-etal-2020-txtract
url: https://aclanthology.org/2020.acl-main.751
title: TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product Categories
abstract: Extracting structured knowledge from product profiles is crucial for various applications in e-Commerce. State-of-the-art approaches for knowledge extraction were each designed for a single category of product, and thus do not apply to real-life e-Commerce scenarios, which often contain thousands of diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge extraction model that applies to thousands of product categories organized in a hierarchical taxonomy. Through category conditional self-attention and multi-task learning, our approach is both scalable, as it trains a single model for thousands of categories, and effective, as it extracts category-specific attribute values. Experiments on products from a taxonomy with 4,000 categories show that TXtract outperforms state-of-the-art approaches by up to 10% in F1 and 15% in coverage across all categories.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: The authors would like to sincerely thank Ron Benson, Christos Faloutsos, Andrey Kan, Yan Liang, Yaqing Wang, and Tong Zhao for their insightful comments on the paper, and Gabriel Blanco, Alexandre Manduca, Saurabh Deshpande, Jay Ren, and Johanna Umana for their constructive feedback on data integration for the experiments.
year: 2020
sdg1-sdg17: all false
ID: leung-etal-2016-developing
url: https://aclanthology.org/W16-5403
title: Developing Universal Dependencies for Mandarin Chinese
abstract: This article proposes a Universal Dependency Annotation Scheme for Mandarin Chinese, including POS tags and dependency analysis. We identify cases of idiosyncrasy of Mandarin Chinese that are difficult to fit into the current schema which has mainly been based on the descriptions of various Indo-European languages. We discuss differences between our scheme and those of the Stanford Chinese Dependencies and the Chinese Dependency Treebank.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This work was supported by a grant from the PROCORE-France/Hong Kong Joint Research Scheme sponsored by the Research Grants Council and the Consulate General of France in Hong Kong (Reference No.: F-CityU107/15 and N° 35322RG); and by a Strategic Research Grant (Project No. 7004494) from City University of Hong Kong.
year: 2016
sdg1-sdg17: all false
ID: mondal-etal-2021-classification
url: https://aclanthology.org/2021.smm4h-1.29
title: Classification of COVID19 tweets using Machine Learning Approaches
abstract: The reported work is a description of our participation in the "Classification of COVID19 tweets containing symptoms" shared task, organized by the "Social Media Mining for Health Applications (SMM4H)" workshop. The literature describes two machine learning approaches that were used to build a three-class classification system, that categorizes tweets related to COVID19, into three classes, viz., self-reports, non-personal reports, and literature/news mentions. The steps for preprocessing tweets, feature extraction, and the development of the machine learning models, are described extensively in the documentation. Both the developed learning models, when evaluated by the organizers, garnered F1 scores of 0.93 and 0.92 respectively.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: null
goal3: null
acknowledgments: null
year: 2021
sdg flags: sdg3 true; all others false
ID: gero-etal-2022-sparks
url: https://aclanthology.org/2022.in2writing-1.12
title: Sparks: Inspiration for Science Writing using Language Models
abstract: Large-scale language models are rapidly improving, performing well on a variety of tasks with little to no customization. In this work we investigate how language models can support science writing, a challenging writing task that is both open-ended and highly constrained. We present a system for generating "sparks", sentences related to a scientific concept intended to inspire writers. We run a user study with 13 STEM graduate students and find three main use cases of sparks (inspiration, translation, and perspective), each of which correlates with a unique interaction pattern. We also find that while participants were more likely to select higher quality sparks, the overall quality of sparks seen by a given participant did not correlate with their satisfaction with the tool.
label_nlp4sg: true
task: []
method: []
goal1: Industry, Innovation and Infrastructure
goal2: null
goal3: null
acknowledgments: null
year: 2022
sdg flags: sdg9 true; all others false
ID: zhang-etal-2019-paws
url: https://aclanthology.org/N19-1131
title: PAWS: Paraphrase Adversaries from Word Scrambling
abstract: Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank our anonymous reviewers and the Google AI Language team, especially Emily Pitler, for the insightful comments that contributed to this paper. Many thanks also to the Data Compute team, especially Ashwin Kakarla and Henry Jicha, for their help with the annotations.
year: 2019
sdg1-sdg17: all false
ID: sassano-kurohashi-2010-using
url: https://aclanthology.org/P10-1037
title: Using Smaller Constituents Rather Than Sentences in Active Learning for Japanese Dependency Parsing
abstract: We investigate active learning methods for Japanese dependency parsing. We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. Furthermore, we utilize syntactic constraints of Japanese to obtain more labeled examples from precious labeled ones that annotators give. Experimental results show that our proposed methods improve considerably the learning curve of Japanese dependency parsing. In order to achieve an accuracy of over 88.3%, one of our methods requires only 34.4% of labeled examples as compared to passive learning.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: We would like to thank the anonymous reviewers and Tomohide Shibata for their valuable comments.
year: 2010
sdg1-sdg17: all false
ID: shimorina-belz-2022-human
url: https://aclanthology.org/2022.humeval-1.6
title: The Human Evaluation Datasheet: A Template for Recording Details of Human Evaluation Experiments in NLP
abstract: This paper presents the Human Evaluation Datasheet (HEDS), a template for recording the details of individual human evaluation experiments in Natural Language Processing (NLP), and reports on first experience of researchers using HEDS sheets in practice. Originally taking inspiration from seminal papers by Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020), HEDS facilitates the recording of properties of human evaluations in sufficient detail, and with sufficient standardisation, to support comparability, meta-evaluation, and reproducibility assessments for human evaluations. These are crucial for scientifically principled evaluation, but the overhead of completing a detailed datasheet is substantial, and we discuss possible ways of addressing this and other issues observed in practice.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2022
sdg1-sdg17: all false
ID: hayes-2004-publisher
url: https://aclanthology.org/W04-3109
title: Publisher Perspective on Broad Full-text Literature Access for Text Mining in Academic and Corporate Endeavors
abstract: There is a great deal of interest in obtaining access to the vast stores of full-text literature held by the various publishers. The need to balance a reduction of the restrictions on access with the protection of the revenue streams of the publishers is critical. Without the publishers, the content would not be available and support for several scientific societies would also disappear. On the other hand, the value of the literature holdings, while it appears to be quite high, is not currently adequately exploited. Text mining and more effective information retrieval is necessary to take full advantage of the information captured by the millions of electronic journal articles currently available.
label_nlp4sg: true
task: []
method: []
goal1: Industry, Innovation and Infrastructure
goal2: null
goal3: null
acknowledgments: null
year: 2004
sdg flags: sdg9 true; all others false
ID: schloder-fernandez-2014-role
url: https://aclanthology.org/W14-4321
title: The Role of Polarity in Inferring Acceptance and Rejection in Dialogue
abstract: We study the role that logical polarity plays in determining the rejection or acceptance function of an utterance in dialogue. We develop a model inspired by recent work on the semantics of negation and polarity particles and test it on annotated data from two spoken dialogue corpora: the Switchboard Corpus and the AMI Meeting Corpus. Our experiments show that taking into account the relative polarity of a proposal under discussion and of its response greatly helps to distinguish rejections from acceptances in both corpora.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2014
sdg1-sdg17: all false
ID: liu-etal-2007-forest
url: https://aclanthology.org/P07-1089
title: Forest-to-String Statistical Translation Rules
abstract: In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forest-to-string rule is capable of capturing non-syntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows tree-to-string rules only.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This work was supported by National Natural Science Foundation of China, Contract No. 60603095 and 60573188.
year: 2007
sdg1-sdg17: all false
ID: junczys-dowmunt-grundkiewicz-2014-amu
url: https://aclanthology.org/W14-1703
title: The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation
abstract: Statistical machine translation toolkits like Moses have not been designed with grammatical error correction in mind. In order to achieve competitive results in this area, it is not enough to simply add more data. Optimization procedures need to be customized, and task-specific features should be introduced. Only then can the decoder take advantage of relevant data. We demonstrate the validity of the above claims by combining web-scale language models and large-scale error-corrected texts with parameter tuning according to the task metric and correction-specific features. Our system achieves a result of 35.0% F0.5 on the blind CoNLL-2014 test set, ranking in third place. A similar system, equipped with identical models but without tuned parameters and specialized features, stagnates at 25.4%.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2014
sdg1-sdg17: all false
ID: wu-etal-2011-answering
url: https://aclanthology.org/I11-1107
title: Answering Complex Questions via Exploiting Social Q&A Collection
abstract: This paper regards social Q&A collections, such as Yahoo! Answer, as a knowledge repository and investigates techniques to mine knowledge from them for improving a sentence-based complex question answering (QA) system. In particular, we present a question-type-specific method (QTSM) that aims at extracting question-type-dependent cue expressions from the social Q&A pairs in which question types are the same as the submitted question. The QTSM is also compared with question-specific and monolingual translation-based methods presented in previous work. The question-specific method (QSM) aims at extracting question-dependent answer words from social Q&A pairs in which questions are similar to the submitted question. The monolingual translation-based method (MTM) learns word-to-word translation probabilities from all social Q&A pairs without consideration of question and question type. Experiments on extension of the NTCIR 2008 Chinese test data set verify the performance ranking of these methods as: QTSM > QSM, MTM. The largest F3 improvements of the proposed QTSM over the QSM and MTM reach 6.0% and 5.8%, respectively.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2011
sdg1-sdg17: all false
ID: swanson-yamangil-2012-correction
url: https://aclanthology.org/N12-1037
title: Correction Detection and Error Type Selection as an ESL Educational Aid
abstract: We present a classifier that discriminates between types of corrections made by teachers of English in student essays. We define a set of linguistically motivated feature templates for a log-linear classification model, train this classifier on sentence pairs extracted from the Cambridge Learner Corpus, and achieve 89% accuracy improving upon a 33% baseline. Furthermore, we incorporate our classifier into a novel application that takes as input a set of corrected essays that have been sentence aligned with their originals and outputs the individual corrections classified by error type. We report the F-Score of our implementation on this task.
label_nlp4sg: true
task: []
method: []
goal1: Quality Education
goal2: null
goal3: null
acknowledgments: null
year: 2012
sdg flags: sdg4 true; all others false
ID: federico-etal-2011-overview
url: https://aclanthology.org/2011.iwslt-evaluation.1
title: Overview of the IWSLT 2011 evaluation campaign
abstract: We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: null
year: 2011
sdg1-sdg17: all false
ID: preotiuc-pietro-ungar-2018-user
url: https://aclanthology.org/C18-1130
title: User-Level Race and Ethnicity Predictors from Twitter Text
abstract: User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict the membership of a user to the four largest racial and ethnic groups with up to .884 AUC and make these available to the research community.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: The authors acknowledge the support of the Templeton Religion Trust, grant TRT-0048.
year: 2018
sdg1-sdg17: all false
ID: li-etal-2014-annotating
url: http://www.lrec-conf.org/proceedings/lrec2014/pdf/250_Paper.pdf
title: Annotating Relation Mentions in Tabloid Press
abstract: This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent-child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.
label_nlp4sg: false
task: []
method: []
goal1: null
goal2: null
goal3: null
acknowledgments: This research was partially supported by the German Federal Ministry of Education and Research (BMBF) through the project Deependance (contract 01IW11003) and by Google through a Focused Research Award for the project LUcKY granted in July 2013.
year: 2014
sdg1-sdg17: all false
ID: resnik-etal-2013-using
url: https://aclanthology.org/D13-1133
title: Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students
abstract: We investigate the value-add of topic modeling in text analysis for depression, and for neuroticism as a strongly associated personality measure. Using Pennebaker's Linguistic Inquiry and Word Count (LIWC) lexicon to provide baseline features, we show that straightforward topic modeling using Latent Dirichlet Allocation (LDA) yields interpretable, psychologically relevant "themes" that add value in prediction of clinical assessments.
label_nlp4sg: true
task: []
method: []
goal1: Good Health and Well-Being
goal2: Quality Education
goal3: null
acknowledgments: We are grateful to Jamie Pennebaker for the LIWC lexicon and for allowing us to use data from Pennebaker and King (1999) and Rude et al. (2004), to the three psychologists who kindly took the time to provide human ratings, and to our reviewers for helpful comments. This work has been supported in part by NSF grant IIS-1211153.
year: 2013
sdg flags: sdg3 and sdg4 true; all others false
sowa-1979-semantics
https://aclanthology.org/P79-1010
Semantics of Conceptual Graphs
Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.
false
[]
[]
null
null
null
null
1979
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zelasko-2018-expanding
https://aclanthology.org/L18-1295
Expanding Abbreviations in a Strongly Inflected Language: Are Morphosyntactic Tags Sufficient?
In this paper, the problem of recovery of morphological information lost in abbreviated forms is addressed with a focus on highly inflected languages. Evidence is presented that the correct inflected form of an expanded abbreviation can in many cases be deduced solely from the morphosyntactic tags of the context. The prediction model is a deep bidirectional LSTM network with tag embedding. The training and evaluation data are gathered by finding the words which could have been abbreviated and using their corresponding morphosyntactic tags as the labels, while the tags of the context words are used as the input features for classification. The network is trained on over 10 million words from the Polish Sejm Corpus and achieves 74.2% prediction accuracy on a smaller, but more general National Corpus of Polish. The analysis of errors suggests that performance in this task may improve if some prior knowledge about the abbreviated word is incorporated into the model.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mihalcea-moldovan-1999-method
https://aclanthology.org/P99-1020
A Method for Word Sense Disambiguation of Unrestricted Text
Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word cooccurrences and (2)WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ben-abacha-zweigenbaum-2011-medical
https://aclanthology.org/W11-0207
Medical Entity Recognition: A Comparaison of Semantic and Statistical Methods
Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.
true
[]
[]
Good Health and Well-Being
null
null
This work has been partially supported by OSEO under the Quaero program.
2011
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vincze-almasi-2014-non
https://aclanthology.org/W14-0116
Non-Lexicalized Concepts in Wordnets: A Case Study of English and Hungarian
Here, we investigate non-lexicalized synsets found in the Hungarian wordnet, and compare them to the English one, in the context of wordnet building principles. We propose some strategies that may be used to overcome difficulties concerning non-lexicalized synsets in wordnets constructed using the expand method. It is shown that the merge model could also have been applied to Hungarian, and with the help of the above-mentioned strategies, a wordnet based on the expand model can be transformed into a wordnet similar to that constructed with the merge model.
false
[]
[]
null
null
null
This work was in part supported by the European Union and co-funded by the European Social Fund through the project Telemedicine-focused research activities in the fields of mathematics, informatics and medical sciences (grant no.: TÁMOP-4.2.2.A-11/1/KONV-2012-0073).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dillinger-seligman-2006-conversertm
https://aclanthology.org/W06-3706
Converser(TM): Highly Interactive Speech-to-Speech Translation for Healthcare
We describe a highly interactive system for bidirectional, broad-coverage spoken language communication in the healthcare area. The paper briefly reviews the system's interactive foundations, and then goes on to discuss in greater depth our Translation Shortcuts facility, which minimizes the need for interactive verification of sentences after they have been vetted. This facility also considerably speeds throughput while maintaining accuracy, and allows use by minimally literate patients for whom any mode of text entry might be difficult.
true
[]
[]
Good Health and Well-Being
null
null
null
2006
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dadu-pant-2020-sarcasm
https://aclanthology.org/2020.figlang-1.6
Sarcasm Detection using Context Separators in Online Discourse
Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa-large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs-Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jardine-teufel-2014-topical
https://aclanthology.org/E14-1053
Topical PageRank: A Model of Scientific Expertise for Bibliographic Search
We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. ThemedPageRank, our combination of LDA-derived topics with PageRank differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats state-of-the-art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
merlo-1997-attaching
https://aclanthology.org/W97-0317
Attaching Multiple Prepositional Phrases: Backed-off Estimation Generalized
There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. The multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.
false
[]
[]
null
null
null
We gratefully acknowledge the support of the British Council and the Swiss National Science Foundation on grant 83BC044708 to the first two authors, and on grant 12-43283.95 and fellowship 8210-46569 from the Swiss NSF to the first author. We thank the audiences at Edinburgh and Pennsylvania for their useful comments. All errors remain our responsibility.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
louis-newman-2012-summarization
https://aclanthology.org/C12-2075
Summarization of Business-Related Tweets: A Concept-Based Approach
We present a method for summarizing the collection of tweets related to a business. Our procedure aggregates tweets into subtopic clusters which are then ranked and summarized by a few representative tweets from each cluster. Central to our approach is the ability to group diverse tweets into clusters. The broad clustering is induced by first learning a small set of business-related concepts automatically from free text and then subdividing the tweets into these concepts. Cluster ranking is performed using an importance score which combines topic coherence and sentiment value of the tweets. We also discuss alternative methods to summarize these tweets and evaluate the approaches using a small user study. Results show that the concept-based summaries are ranked favourably by the users.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
koufakou-scott-2020-lexicon
https://aclanthology.org/2020.trac-1.24
Lexicon-Enhancement of Embedding-based Approaches Towards the Detection of Abusive Language
Detecting abusive language is a significant research topic, which has received a lot of attention recently. Our work focuses on detecting personal attacks in online conversations. As previous research on this task has largely used deep learning based on embeddings, we explore the use of lexicons to enhance embedding-based methods in an effort to see how these methods apply in the particular task of detecting personal attacks. The methods implemented and experimented with in this paper are quite different from each other, not only in the type of lexicons they use (sentiment or semantic), but also in the way they use the knowledge from the lexicons, in order to construct or to change embeddings that are ultimately fed into the learning model. The sentiment lexicon approaches focus on integrating sentiment information (in the form of sentiment embeddings) into the learning model. The semantic lexicon approaches focus on transforming the original word embeddings so that they better represent relationships extracted from a semantic lexicon. Based on our experimental results, semantic lexicon methods are superior to the rest of the methods in this paper, with at least 4% macro-averaged F1 improvement over the baseline.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We gratefully acknowledge the Google Cloud Platform (GCP) research credits program and the TensorFlow Research Cloud (TFRC) program.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
nguyen-etal-2010-nonparametric
https://aclanthology.org/C10-1092
Nonparametric Word Segmentation for Machine Translation
We present an unsupervised word segmentation model for machine translation. The model uses existing monolingual segmentation techniques and models the joint distribution over source sentence segmentations and alignments to the target sentence. During inference, the monolingual segmentation model and the bilingual word alignment model are coupled so that the alignments to the target sentence guide the segmentation of the source sentence. The experiments show improvements on Arabic-English and Chinese-English translation tasks.
false
[]
[]
null
null
null
We thank Kevin Gimpel for interesting discussions and technical advice. We also thank the anonymous reviewers for useful feedback. This work was supported by DARPA Gale project, NSF grants 0844507 and 0915187.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lazaridou-etal-2016-red
https://aclanthology.org/P16-2035
The red one!: On learning to refer to things based on discriminative properties
As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.
false
[]
[]
null
null
null
This work was supported by ERC 2011 Starting Independent Research Grant n. 283554 (COM-POSES). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
torabi-asr-demberg-2012-implicitness
https://aclanthology.org/C12-1163
Implicitness of Discourse Relations
The annotations of explicit and implicit discourse connectives in the Penn Discourse Treebank make it possible to investigate on a large scale how different types of discourse relations are expressed. Assuming an account of the Uniform Information Density hypothesis, we expect that discourse relations should be expressed explicitly with a discourse connector when they are unexpected, but may be implicit when the discourse relation can be anticipated. We investigate whether discourse relations which have been argued to be expected by the comprehender exhibit a higher ratio of implicit connectors. We find support for two hypotheses put forth in previous research which suggest that continuous and causal relations are presupposed by language users when processing consecutive sentences in a text. We then proceed to analyze the effect of Implicit Causality (IC) verbs (which have been argued to raise an expectation for an explanation) as a local cue for an upcoming causal relation.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
batista-navarro-ananiadou-2011-building
https://aclanthology.org/W11-0210
Building a Coreference-Annotated Corpus from the Domain of Biochemistry
One of the reasons for which the resolution of coreferences has remained a challenging information extraction task, especially in the biomedical domain, is the lack of training data in the form of annotated corpora. In order to address this issue, we developed the HANAPIN corpus. It consists of full-text articles from biochemistry literature, covering entities of several semantic types: chemical compounds, drug targets (e.g., proteins, enzymes, cell lines, pathogens), diseases, organisms and drug effects. All of the coreferring expressions pertaining to these semantic types were annotated based on the annotation scheme that we developed. We observed four general types of coreferences in the corpus: sortal, pronominal, abbreviation and numerical. Using the MASI distance metric, we obtained 84% in computing the inter-annotator agreement in terms of Krippendorff's alpha. Consisting of 20 full-text, open-access articles, the corpus will enable other researchers to use it as a resource for their own coreference resolution methodologies.
true
[]
[]
Good Health and Well-Being
Industry, Innovation and Infrastructure
null
The UK National Centre for Text Mining is funded by the UK Joint Information Systems Committee (JISC). The authors would also like to acknowledge the Office of the Chancellor, in collaboration with the Office of the Vice-Chancellor for Research and Development, of the University of the Philippines Diliman for funding support through the Outright Research Grant.The authors also thank Paul Thompson for his feedback on the annotation guidelines, and the anonymous reviewers for their helpful comments.
2011
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
mogele-etal-2006-smartweb
http://www.lrec-conf.org/proceedings/lrec2006/pdf/277_pdf.pdf
SmartWeb UMTS Speech Data Collection: The SmartWeb Handheld Corpus
In this paper we outline the German speech data collection for the SmartWeb project, which is funded by the German Ministry of Science and Education. We focus on the SmartWeb Handheld Corpus (SHC), which has been collected by the Bavarian Archive for Speech Signals (BAS) at the Phonetic Institute (IPSK) of Munich University. Signals of SHC are being recorded in real-life environments (indoor and outdoor) with real background noise as well as real transmission line errors. We developed a new elicitation method and recording technique, called situational prompting, which facilitates collecting realistic dialogue speech data in a cost efficient way. We can show that almost realistic speech queries to a dialogue system issued over a mobile PDA or smart phone can be collected very efficiently using an automatic speech server. We describe the technical and linguistic features of the resulting speech corpus, which will be publicly available at BAS or ELDA.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mao-etal-2021-lightweight
https://aclanthology.org/2021.acl-long.226
Lightweight Cross-Lingual Sentence Representation Learning
Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks. However, further increases and modifications based on such large-scale models are usually impractical due to memory limitations. In this work, we introduce a lightweight dual-transformer architecture with just 2 layers for generating memory-efficient cross-lingual sentence representations. We explore different training tasks and observe that current cross-lingual training tasks leave a lot to be desired for this shallow architecture. To ameliorate this, we propose a novel cross-lingual language model, which combines the existing single-word masked language model with the newly proposed cross-lingual token-level reconstruction task. We further augment the training task by the introduction of two computationally-lite sentence-level contrastive learning tasks to enhance the alignment of cross-lingual sentence representation space, which compensates for the learning bottleneck of the lightweight transformer for generative tasks. Our comparisons with competing models on cross-lingual sentence retrieval and multilingual document classification confirm the effectiveness of the newly proposed training tasks for a shallow model. 1
false
[]
[]
null
null
null
We would like to thank all the reviewers for their valuable comments and suggestions to improve this paper. This work was partially supported by Grantin-Aid for Young Scientists #19K20343, JSPS.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boyd-graber-etal-2012-besting
https://aclanthology.org/D12-1118
Besting the Quiz Master: Crowdsourcing Incremental Classification Games
Cost-sensitive classification, where the features used in machine learning tasks have a cost, has been explored as a means of balancing knowledge against the expense of incrementally obtaining new features. We introduce a setting where humans engage in classification with incrementally revealed features: the collegiate trivia circuit. By providing the community with a web-based system to practice, we collected tens of thousands of implicit word-byword ratings of how useful features are for eliciting correct answers. Observing humans' classification process, we improve the performance of a state-of-the art classifier. We also use the dataset to evaluate a system to compete in the incremental classification task through a reduction of reinforcement learning to classification. Our system learns when to answer a question, performing better than baselines and most human players.
false
[]
[]
null
null
null
We thank the many players who played our online quiz bowl to provide our data (and hopefully had fun doing so) and Carlo Angiuli, Arnav Moudgil, and Jerry Vinokurov for providing access to quiz bowl questions. This research was supported by NSF grant #1018625. Jordan Boyd-Graber is also supported by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kovatchev-etal-2021-vectors
https://aclanthology.org/2021.acl-long.96
Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability
In this paper we implement and compare 7 different data augmentation strategies for the task of automatic scoring of children's ability to understand others' thoughts, feelings, and desires (or "mindreading"). We recruit in-domain experts to re-annotate augmented samples and determine to what extent each strategy preserves the original rating. We also carry out multiple experiments to measure how much each augmentation strategy improves the performance of automatic scoring systems. To determine the capabilities of automatic systems to generalize to unseen data, we create UK-MIND-20-a new corpus of children's performance on tests of mindreading, consisting of 10,320 question-answer pairs. We obtain a new state-of-the-art performance on the MIND-CA corpus, improving macro-F1-score by 6 points. Results indicate that both the number of training examples and the quality of the augmentation strategies affect the performance of the systems. The task-specific augmentations generally outperform task-agnostic augmentations. Automatic augmentations based on vectors (GloVe, FastText) perform the worst. We find that systems trained on MIND-CA generalize well to UK-MIND-20. We demonstrate that data augmentation strategies also improve the performance on unseen data.
true
[]
[]
Quality Education
null
null
We would like to thank Imogen Grumley Traynor and Irene Luque Aguilera for the annotation and the creation of the lists of synonyms and phrases. We also want to thank the anonymous reviewers for their feedback and suggestions. This project was funded by a grant from Wellcome to R. T. Devine.
2021
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
qin-etal-2021-dont
https://aclanthology.org/2021.emnlp-main.182
Don't be Contradicted with Anything! CI-ToD: Towards Benchmarking Consistency for Task-oriented Dialogue System
Consistency Identification has obtained remarkable success on open-domain dialogue, which can be used for preventing inconsistent response generation. However, in contrast to the rapid development in open-domain dialogue, few efforts have been made to the task-oriented dialogue direction. In this paper, we argue that consistency problem is more urgent in task-oriented domain. To facilitate the research, we introduce CI-ToD, a novel dataset for Consistency Identification in Task-oriented Dialog system. In addition, we not only annotate the single label to enable the model to judge whether the system response is contradictory, but also provide more fine-grained labels (i.e., Dialogue History Inconsistency, User Query Inconsistency and Knowledge Base Inconsistency) to encourage model to know what inconsistent sources lead to it. Empirical results show that state-of-the-art methods only achieve 51.3%, which is far behind the human performance of 93.2%, indicating that there is ample room for improving consistency identification ability. Finally, we conduct exhaustive experiments and qualitative analysis to comprehend key challenges and provide guidance for future directions. All datasets and models are publicly available at https://github.com/yizhen20133868/CI-ToD.
false
[]
[]
null
null
null
This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153. This work was also supported by the Zhejiang Lab's International Talent Fund for Young Professionals.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
turian-etal-2010-word
https://aclanthology.org/P10-1040
Word Representations: A Simple and General Method for Semi-Supervised Learning
If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here:
false
[]
[]
null
null
null
Thank you to Magnus Sahlgren, Bob Carpenter, Percy Liang, Alexander Yates, and the anonymous reviewers for useful discussion. Thank you to Andriy Mnih for inducing his embeddings on RCV1 for us. Joseph Turian and Yoshua Bengio acknowledge the following agencies for research funding and computing support: NSERC, RQCHP, CIFAR. Lev Ratinov was supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yaseen-etal-2006-building
http://www.lrec-conf.org/proceedings/lrec2006/pdf/131_pdf.pdf
Building Annotated Written and Spoken Arabic LRs in NEMLAR Project
The NEMLAR project: Network for Euro-Mediterranean LAnguage Resource and human language technology development and support; (www.nemlar.org) is a project supported by the EC with partners from Europe and the Middle East; whose objective is to build a network of specialized partners to promote and support the development of Arabic Language Resources in the Mediterranean region. The project focused on identifying the state of the art of LRs in the region, assessing priority requirements through consultations with language industry and communication players, and establishing a protocol for developing and identifying a Basic Language Resource Kit (BLARK) for Arabic, and to assess first priority requirements. The BLARK is defined as the minimal set of language resources that is necessary to do any pre-competitive research and education, in addition to the development of crucial components for any future NLP industry. Following the identification of high priority resources the NEMLAR partners agreed to focus on, and produce three main resources, which are: 1) Annotated Arabic written corpus of about 500 K words, 2) Arabic speech corpus for TTS applications of 2x5 hours, and 3) Arabic broadcast news speech corpus of 40 hours Modern Standard Arabic. For each of the resources underlying linguistic models and assumptions of the corpus, technical specifications, methodologies for the collection and building of the resources, validation and verification mechanisms were put and applied for the three LRs.
false
[]
[]
null
null
null
The authors wish to thank the European Commission for the support granted through the INCO-MED programme. The INCO-MED programme has enhanced the development of the cultural dialogue and partnerships across the Mediterranean, as well as the advancement of science to the benefit of all involved parties. It was wise to select language technology as one of the areas to support.The authors also want to thank all of the project participants, cf. www.NEMLAR.org for their contributions.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chae-2004-analysis
https://aclanthology.org/Y04-1006
An Analysis of the Korean [manyak ... V-telato] Construction : An Indexed Phrase Structure Grammar Approach
Concord adverbial constructions in Korean show unbounded dependency relationships between two non-empty entities. There are two different types of unboundedness involved: one between a concord adverbial and a verbal ending and the other between the adverbial as a modifier and a predicate. In addition, these unboundedness relationships exhibit properties of "downward movement" phenomena. In this paper, we examine the Indexed Phrase Structure Grammar analysis of the constructions presented in Chae (2003, 2004), and propose to introduce a new feature to solve its conceptual problem. Then, we provide an analysis of conditional-concessive constructions, which is a subtype of concord adverbial constructions. These constructions are special in the sense that they contain a seemingly incompatible combination of a conditional adverbial and a concessive verbal ending. We argue that they are basically conditional constructions despite their concessive meaning.
false
[]
[]
null
null
null
An earlier version of this paper was presented at a monthly meeting of the Korean Society for Language and Information on April 24, 2004. I appreciate valuable comments and suggestions from Beom-mo Kang, Yong-Beom Kim, Seungho Nam, Jae-Hak Yoon and others.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saggion-2007-shef
https://aclanthology.org/S07-1063
SHEF: Semantic Tagging and Summarization Techniques Applied to Cross-document Coreference
We describe experiments for the cross-document coreference task in SemEval 2007. Our cross-document coreference system uses an in-house agglomerative clustering implementation to group documents referring to the same entity. Clustering uses vector representations created by summarization and semantic tagging analysis components. We present evaluation results for four system configurations demonstrating the potential of the applied techniques.
false
[]
[]
null
null
null
This work was partially supported by the EU-funded MUSING project (IST-2004-027097) and the EUfunded LIRICS project (eContent project 22236).
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-etal-2020-evidence
https://aclanthology.org/2020.acl-main.544
Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder
Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. Our approach provides state-of-the-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.
false
[]
[]
null
null
null
Daya Guo and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). Jian Yin is the corresponding author.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dong-1990-transtar
https://aclanthology.org/C90-3066
Transtar - A Commercial English-Chinese MT System
null
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
romero-etal-2021-task
https://aclanthology.org/2021.sigdial-1.46
A Task-Oriented Dialogue Architecture via Transformer Neural Language Models and Symbolic Injection
Recently, transformer language models have been applied to build both task- and non-task-oriented dialogue systems. Although transformers perform well on most of the NLP tasks, they perform poorly on context retrieval and symbolic reasoning. Our work aims to address this limitation by embedding the model in an operational loop that blends both natural language generation and symbolic injection. We evaluated our system on the multi-domain DSTC8 data set and reported joint goal accuracy of 75.8% (ranked among the first half positions), intent accuracy of 97.4% (which is higher than the reported literature), and a 15% improvement for success rate compared to a baseline with no symbolic injection. These promising results suggest that transformer language models can not only generate proper system responses but also symbolic representations that can further be used to enhance the overall quality of the dialogue management as well as serving as scaffolding for complex conversational reasoning.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
baker-sato-2003-framenet
https://aclanthology.org/P03-2030
The FrameNet Data and Software
The FrameNet project has developed a lexical knowledge base providing a unique level of detail as to the possible syntactic realizations of the specific semantic roles evoked by each predicator, for roughly 7,000 lexical units, on the basis of annotating more than 100,000 example sentences extracted from corpora. An interim version of the FrameNet data was released in October, 2002 and is being widely used. A new, more portable version of the FrameNet software is also being made available to researchers elsewhere, including the Spanish FrameNet project. This demo and poster will briefly explain the principles of Frame Semantics and demonstrate the new unified tools for lexicon building and annotation and also FrameSQL, a search tool for finding patterns in annotated sentences. We will discuss the content and format of the data releases and how the software and data can be used by other NLP researchers.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saravanan-etal-2008-automatic
https://aclanthology.org/I08-1063
Automatic Identification of Rhetorical Roles using Conditional Random Fields for Legal Document Summarization
In this paper, we propose a machine learning approach to rhetorical role identification from legal documents. In our approach, we annotate roles in sample documents with the help of legal experts and take them as training data. A conditional random field model has been trained with the data to perform rhetorical role identification with reinforcement of rich feature sets. The understanding of the structure of a legal document and the application of the mathematical model can bring out an effective summary in the final stage. Other important new findings in this work include that the training of a model for one sub-domain can be extended to other sub-domains with very limited augmentation of feature sets. Moreover, we can significantly improve extraction-based summarization results by modifying the ranking of sentences with the importance of specific roles.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
prevot-etal-2013-quantitative-comparative
https://aclanthology.org/Y13-1007
A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of French and Taiwan Mandarin
Studies of spontaneous conversational speech grounded on large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even more rarely found because of the extra need for comparability in terms of content, genre and speaking style. The present paper presents our efforts for establishing such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to be able to carry out quantitative comparative studies of the syntax-discourse-prosody interfaces. We introduce our work on the data creation itself as well as some preliminary results of the boundary alignment between prosodic and discourse units and how POS and chunks are distributed on these boundaries.
false
[]
[]
null
null
null
This work has been realized thanks to the support of the France-Taiwan ORCHID Program, under grant 100-2911-I-001-504 and the NSC project 100-2410-H-001-093 granted to the second author, as well as ANR OTIM BLAN08-2-349062 for initial work on the French data. We would also like to thank our colleagues for their help at various stages of the data preparation, in particular Roxane Bertrand, Yi-Fen Liu, Robert Espesser, Stéphane Rauzy, Brigitte Bigi, and Philippe Blache.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
basu-roy-chowdhury-etal-2019-instance
https://aclanthology.org/D19-6120
Instance-based Inductive Deep Transfer Learning by Cross-Dataset Querying with Locality Sensitive Hashing
Supervised learning models are typically trained on a single dataset and the performance of these models relies heavily on the size of the dataset, i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data that they are presented with during training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in the Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of source and target datasets are learned, retrieval of relevant source instances is performed using a soft-attention mechanism and locality sensitive hashing, and these instances are then augmented into the model during training on the target dataset. Therefore, while learning from training data, we also simultaneously exploit and infuse relevant local instance-level information from external data. Using this approach we have shown significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross-dataset learning procedure we show that one can achieve competitive/better performance than learning from a single dataset.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shaalan-etal-2009-syntactic
https://aclanthology.org/2009.mtsummit-caasl.9
Syntactic Generation of Arabic in Interlingua-based Machine Translation Framework
Arabic is a highly inflectional language, with a rich morphology, relatively free word order, and two types of sentences: nominal and verbal. Arabic natural language processing in general is still underdeveloped and Arabic natural language generation (NLG) is even less developed. In particular, Arabic natural language generation from Interlingua was only investigated using template-based approaches. Moreover, tools used for other languages are not easily adaptable to Arabic due to the Arabic language complexity at both the morphological and syntactic levels. In this paper, we report our attempt at developing a rule-based Arabic generator for task-oriented interlingua-based spoken dialogues. Examples of syntactic generation results from the Arabic generator will be given and will illustrate how the system works. Our proposed syntactic generator has been effectively evaluated using real test data and achieved satisfactory results.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cruz-etal-2017-annotating
https://aclanthology.org/W17-1808
Annotating Negation in Spanish Clinical Texts
In this paper we present ongoing work on annotating negation in Spanish clinical documents. A corpus of anamnesis and radiology reports has been annotated by two domain expert annotators with negation markers and negated events. The Dice coefficient for inter-annotator agreement is higher than 0.94 for negation markers and higher than 0.72 for negated events. The corpus will be publicly released when the annotation process is finished, constituting the first corpus annotated with negation for Spanish clinical reports available for the NLP community.
true
[]
[]
Good Health and Well-Being
null
null
This work has been partially funded by the Andalusian Regional Government (Bidamir Project TIC-07629) and the Spanish Government (IPHealth Project TIN2013-47153-C3-2-R). RM is supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza prize awarded to Piek Vossen (SPI 30-673, 2014-2019).
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-etal-2021-counterfactual
https://aclanthology.org/2021.naacl-main.156
Counterfactual Supporting Facts Extraction for Explainable Medical Record Based Diagnosis with Graph Network
Providing a reliable explanation for clinical diagnosis based on the Electronic Medical Record (EMR) is fundamental to the application of Artificial Intelligence in the medical field. Current methods mostly treat the EMR as a text sequence and provide explanations based on a precise medical knowledge base, which is disease-specific and difficult to obtain for experts in reality. Therefore, we propose a counterfactual multi-granularity graph supporting facts extraction (CMGE) method to extract supporting facts from the irregular EMR itself without external knowledge bases in this paper. Specifically, we first structure the sequence of the EMR into a hierarchical graph network and then obtain the causal relationship between multi-granularity features and diagnosis results through counterfactual intervention on the graph. Features having the strongest causal connection with the results provide interpretive support for the diagnosis. Experimental results on real Chinese EMRs of the lymphedema demonstrate that our method can diagnose four types of EMRs correctly, and can provide accurate supporting facts for the results. More importantly, the results on different diseases demonstrate the robustness of our approach, which represents the potential application in the medical field.
true
[]
[]
Good Health and Well-Being
null
null
This work is supported by the National Key Research and Development Program of China (No.2018YFB1005104) and the Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006).
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
esteve-etal-2010-epac
http://www.lrec-conf.org/proceedings/lrec2010/pdf/650_Paper.pdf
The EPAC Corpus: Manual and Automatic Annotations of Conversational Speech in French Broadcast News
This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate equal to 17.25%.
false
[]
[]
null
null
null
This research was supported by the ANR (Agence Nationale de la Recherche) under contract number ANR-06-MDCA-006.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sagot-martinez-alonso-2017-improving
https://aclanthology.org/W17-6304
Improving neural tagging with lexical information
Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
de-vriend-etal-2002-using
http://www.lrec-conf.org/proceedings/lrec2002/pdf/264.pdf
Using Grammatical Description as a Metalanguage Resource
The present paper is concerned with the advantages of a digitised descriptive grammar over its traditional print version. First we discuss the process of up-conversion of the ANS material and the main advantages the E-ANS has for the editorial staff. Then from the perspective of language resources, we discuss different applications of the grammatical descriptions for both human and machine users. The discussion is based on our experiences during the project 'Elektronisering van de ANS', a project in progress that is aimed at developing a digital version of the Dutch reference grammar Algemene Nederlandse Spraakkunst (ANS).
false
[]
[]
null
null
null
We would like to thank the members of the steering committee of this project for their comments and suggestions on the work presented in this paper: Gosse Bouma, Walter Daelemans, Carel Jansen, Gerard Kempen, Luuk Van Waes.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tomeh-etal-2009-complexity
https://aclanthology.org/2009.mtsummit-papers.17
Complexity-Based Phrase-Table Filtering for Statistical Machine Translation
We describe an approach for filtering phrase tables in a Statistical Machine Translation system, which relies on a statistical independence measure called Noise, first introduced in (Moore, 2004). While previous work by (Johnson et al., 2007) also addressed the question of phrase table filtering, it relied on a simpler independence measure, the p-value, which is theoretically less satisfying than the Noise in this context. In this paper, we use Noise as the filtering criterion, and show that when we partition the bi-phrase tables into several sub-classes according to their complexity, using Noise leads to improvements in BLEU score that are unreachable using p-value, while allowing a similar amount of pruning of the phrase tables.
false
[]
[]
null
null
null
This work was supported by the European Commission under the IST Project SMART (FP6-033917). Thanks to Eric Gaussier for his support at the be-ginning of this project, and to Sara Stymne and the anonymous reviewers for detailed and insightful comments.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kulkarni-boyer-2018-toward
https://aclanthology.org/W18-0532
Toward Data-Driven Tutorial Question Answering with Deep Learning Conversational Models
There has been an increase in popularity of data-driven question answering systems given their recent success. This paper explores the possibility of building a tutorial question answering system for Java programming from data sampled from a community-based question answering forum. This paper reports on the creation of a dataset that could support building such a tutorial question answering system and discusses the methodology used to create the dataset of 106,386 questions. We investigate how retrieval-based and generative models perform on the given dataset. The work also investigates the usefulness of using hybrid approaches such as combining retrieval-based and generative models. The results indicate that building data-driven tutorial systems using community-based question answering forums holds significant promise.
true
[]
[]
Quality Education
null
null
null
2018
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
felt-etal-2015-making
https://aclanthology.org/K15-1020
Making the Most of Crowdsourced Document Annotations: Confused Supervised LDA
Corpus labeling projects frequently use low-cost workers from microtask marketplaces; however, these workers are often inexperienced or have misaligned incentives. Crowdsourcing models must be robust to the resulting systematic and non-systematic inaccuracies. We introduce a novel crowdsourcing model that adapts the discrete supervised topic model sLDA to handle multiple corrupt, usually conflicting (hence "confused") supervision signals. Our model achieves significant gains over previous work in the accuracy of deduced ground truth.
false
[]
[]
null
null
null
This work was supported by the collaborative NSF Grant IIS-1409739 (BYU) and IIS-1409287 (UMD). Boyd-Graber is also supported by NSF grants IIS-1320538 and NCSE-1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fernando-2013-segmenting
https://aclanthology.org/W13-3004
Segmenting Temporal Intervals for Tense and Aspect
Timelines interpreting interval temporal logic formulas are segmented into strings which serve as semantic representations for tense and aspect. The strings have bounded but refinable granularity, suitable for analyzing (im)perfectivity, durativity, telicity, and various relations including branching.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bernth-1997-easyenglish
https://aclanthology.org/A97-1024
EasyEnglish: A Tool for Improving Document Quality
We describe the authoring tool, EasyEnglish, which is part of IBM's internal SGML editing environment, Information Development Workbench. EasyEnglish helps writers produce clearer and simpler English by pointing out ambiguity and complexity as well as performing some standard grammar checking. Where appropriate, EasyEnglish makes suggestions for rephrasings that may be substituted directly into the text by using the editor interface. EasyEnglish is based on a full parse by English Slot Grammar; this makes it possible to produce a higher degree of accuracy in error messages as well as handle a large variety of texts.
false
[]
[]
null
null
null
I would like to thank the following persons for contributions to EasyEnglish and to this paper: Michael McCord of IBM Research for use of his ESG grammar and parser, for contributing ideas to the design and implementation, for extensive work on the lexicons and lexical utilities, and for commenting on this paper; Andrew Tanabe of the IBM AS/400 Division for contributing ideas for some of the rules, for coordinating users and user input, for extensive testing, and for his role in incorporating EasyEnglish in IDWB; Sue Medeiros of IBM Research for reading and commenting on this paper.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
galvez-etal-2020-unifying
https://aclanthology.org/2020.sigdial-1.27
A unifying framework for modeling acoustic/prosodic entrainment: definition and evaluation on two large corpora
Acoustic/prosodic (a/p) entrainment has been associated with multiple positive social aspects of human-human conversations. However, research on its effects is still preliminary, first because how to model it is far from standardized, and second because most of the reported findings rely on small corpora or on corpora collected in experimental setups. The present article has a twofold purpose: 1) it proposes a unifying statistical framework for modeling a/p entrainment, and 2) it tests on two large corpora of spontaneous telephone interactions whether three metrics derived from this framework predict positive social aspects of the conversations. The corpora differ in their spoken language, domain, and positive social outcome attached. To our knowledge, this is the first article studying relations between a/p entrainment and positive social outcomes in such large corpora of spontaneous dialog. Our results suggest that our metrics effectively predict, up to some extent, positive social aspects of conversations, which not only validates the methodology, but also provides further insights into the elusive topic of entrainment in human-human conversation.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ustalov-etal-2018-unsupervised
https://aclanthology.org/L18-1164
An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages
In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zelenko-etal-2004-coreference
https://aclanthology.org/W04-0704
Coreference Resolution for Information Extraction
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-weld-2010-open
https://aclanthology.org/P10-1013
Open Information Extraction Using Wikipedia
Information-extraction (IE) systems seek to distill semantic relations from natural-language text, but most systems use supervised learning of relation-specific examples and are thus limited by the availability of training data. Open IE systems such as TextRunner, on the other hand, aim to handle the unbounded number of relations found on the Web. But how well can these open systems perform? This paper presents WOE, an open IE system which improves dramatically on TextRunner's precision and recall. The key to WOE's performance is a novel form of self-supervised learning for open extractors: using heuristic matches between Wikipedia infobox attribute values and corresponding sentences to construct training data. Like TextRunner, WOE's extractor eschews lexicalized features and handles an unbounded set of semantic relations. WOE can operate in two modes: when restricted to POS tag features, it runs as quickly as TextRunner, but when set to use dependency-parse features its precision and recall rise even higher.
false
[]
[]
null
null
null
We thank Oren Etzioni and Michele Banko from Turing Center at the University of Washington for providing the code of their software and useful discussions. We also thank Alan Ritter, Mausam, Peng Dai, Raphael Hoffmann, Xiao Ling, Stefan Schoenmackers, Andrey Kolobov and Daniel Suskin for valuable comments. This material is based upon work supported by the WRF / TJ Cable Professorship, a gift from Google and by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions,
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bangalore-etal-2001-impact
https://aclanthology.org/W01-0520
Impact of Quality and Quantity of Corpora on Stochastic Generation
null
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gurcke-etal-2021-assessing
https://aclanthology.org/2021.argmining-1.7
Assessing the Sufficiency of Arguments through Conclusion Generation
The premises of an argument give evidence or other reasons to support a conclusion. However, the amount of support required depends on the generality of a conclusion, the nature of the individual premises, and similar. An argument whose premises make its conclusion rationally worthy to be drawn is called sufficient in argument quality research. Previous work tackled sufficiency assessment as a standard text classification problem, not modeling the inherent relation of premises and conclusion. In this paper, we hypothesize that the conclusion of a sufficient argument can be generated from its premises. To study this hypothesis, we explore the potential of assessing sufficiency based on the output of large-scale pre-trained language models. Our best model variant achieves an F1-score of .885, outperforming the previous state-of-the-art and being on par with human experts. While manual evaluation reveals the quality of the generated conclusions, their impact remains low ultimately.
false
[]
[]
null
null
null
We thank Katharina Brennig, Simon Seidl, Abdullah Burak, Frederike Gurcke and Dr. Maurice Gurcke for their feedback. We gratefully acknowledge the computing time provided for the described experiments by the Paderborn Center for Parallel Computing (PC2). This project has been partially funded by the German Research Foundation (DFG) within the project OASiS, project number 455913891, as part of the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wedlake-1992-introduction
https://aclanthology.org/1992.tc-1.1
An Introduction to quality assurance and a guide to the implementation of BS5750
This paper introduces the philosophy of Quality Assurance and traces the development of the British Standard for Quality Systems, BS 5750. The key components of the Quality System are covered and there is a discussion on how to choose a Quality System which is most appropriate to the needs of the particular organisation. A comprehensive guide (including flowcharts) is also given which addresses the nature and scope of tasks which must be undertaken in implementing a Quality System commensurate with the requirements of a recognised international standard such as BS 5750. QUALITY ASSURANCE: AN INTRODUCTION. The concept of seeking a guarantee in return for goods or items exchanged is not new. In fact, the well-known phrase 'my word is my bond', still in use today, is a form of guarantee or assurance that an agreement reached or an obligation undertaken will be honoured. Guarantees in respect of items purchased (in exchange for money) are usually not verbal agreements but take the form of signed receipts, which imply that items bought will be fit for the purpose for which they were advertised or intended and that someone is accountable if they fail to live up to those expectations.
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
marivate-etal-2020-investigating
https://aclanthology.org/2020.rail-1.3
Investigating an Approach for Low Resource Language Dataset Creation, Curation and Classification: Setswana and Sepedi
The recent advances in Natural Language Processing have only been a boon for well represented languages, negating research in lesser known global languages. This is in part due to the availability of curated data and research resources. One of the current challenges concerning low-resourced languages is the lack of clear guidelines on the collection, curation and preparation of datasets for different use-cases. In this work, we take on the task of creating two datasets that are focused on news headlines (i.e short text) for Setswana and Sepedi and the creation of a news topic classification task from these datasets. In this study, we document our work, propose baselines for classification, and investigate an approach on data augmentation better suited to low-resourced languages in order to improve the performance of the classifiers.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
heck-etal-2015-naist
https://aclanthology.org/2015.iwslt-evaluation.17
The NAIST English speech recognition system for IWSLT 2015
null
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grishina-2017-combining
https://aclanthology.org/W17-4809
Combining the output of two coreference resolution systems for two source languages to improve annotation projection
Although parallel coreference corpora can to a high degree support the development of SMT systems, there are no large-scale parallel datasets available due to the complexity of the annotation task and the variability in annotation schemes. In this study, we exploit an annotation projection method to combine the output of two coreference resolution systems for two different source languages (English, German) in order to create an annotated corpus for a third language (Russian). We show that our technique is superior to projecting annotations from a single source language, and we provide an in-depth analysis of the projected annotations in order to assess the perspectives of our approach.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
duppada-etal-2018-seernet
https://aclanthology.org/S18-1002
SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets
The paper describes the best performing system for the SemEval-2018 Affect in Tweets (English) sub-tasks. The system focuses on the ordinal classification and regression sub-tasks for valence and emotion. For ordinal classification, valence is classified into 7 different classes ranging from -3 to 3, whereas emotion is classified into 4 different classes 0 to 3 separately for each emotion, namely anger, fear, joy and sadness. The regression sub-tasks estimate the intensity of valence and each emotion. The system performs domain adaptation of 4 different models and creates an ensemble to give the final prediction. The proposed system achieved 1st position out of 75 teams which participated in the aforementioned subtasks. We outperform the baseline model by margins ranging from 49.2% to 76.4%, thus pushing the state-of-the-art significantly.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mitchell-lapata-2008-vector
https://aclanthology.org/P08-1028
Vector-based Models of Semantic Composition
This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fujii-etal-2012-effects
http://www.lrec-conf.org/proceedings/lrec2012/pdf/714_Paper.pdf
Effects of Document Clustering in Modeling Wikipedia-style Term Descriptions
Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-1996-internally
https://aclanthology.org/Y96-1042
Internally Headed Relative Clause Constructions in Korean
This paper attempts to analyze some grammatical aspects of the so called internally-headed relative clause construction in Korean. This paper proposes that the meaning of the external head kes is underspecified in the sense that its semantic content is filled in by co-indexing it to the internal head under appropriate conditions. This paper also argues that interpretation of kes is determined by the verb following it. Dealing also with the pragmatics of the construction, this paper argues that the crucial characteristics of the construction in question resides in pragmatics rather than in semantics.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
heinroth-etal-2012-adaptive
http://www.lrec-conf.org/proceedings/lrec2012/pdf/169_Paper.pdf
Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues
In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach.
false
[]
[]
null
null
null
The research leading to these results has received funding from the Transregional Collaborative Research Centre SF-B/TRR 62 "Companion-Technology for Cognitive Technical Systems" funded by the German Research Foundation (DFG).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mireshghallah-etal-2022-mix
https://aclanthology.org/2022.acl-long.31
Mix and Match: Learning-free Controllable Text Generation using Energy Language Models
Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM. In this work, we propose Mix and Match LM, a global score-based alternative for controllable text generation that combines arbitrary pre-trained black-box models for achieving the desired attributes in the generated text without involving any fine-tuning or structural assumptions about the black-box models. We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black-box models that are separately responsible for fluency, the control attribute, and faithfulness to any conditioning context. We use a Metropolis-Hastings sampling scheme to sample from this energy-based model using bidirectional context and global attribute features. We validate the effectiveness of our approach on various controlled generation and style-based text revision tasks by outperforming recently proposed methods that involve extra training, fine-tuning, or restrictive assumptions over the form of models.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers and meta-reviewers for their helpful feedback. We also thank our colleagues at the UCSD/CMU Berg Lab for their helpful comments and feedback.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
uslu-etal-2017-textimager
https://aclanthology.org/E17-3005
TextImager as a Generic Interface to R
R is a very powerful framework for statistical modeling. Thus, it is of high importance to integrate R with state-of-the-art tools in NLP. In this paper, we present the functionality and architecture of such an integration by means of TextImager. We use the OpenCPU API to integrate R based on our own R-Server. This allows for communicating with R-packages and combining them with TextImager's NLP components.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kraus-etal-2020-comparison
https://aclanthology.org/2020.lrec-1.54
A Comparison of Explicit and Implicit Proactive Dialogue Strategies for Conversational Recommendation
Recommendation systems aim at facilitating information retrieval for users by taking into account their preferences. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. The way a recommendation is communicated influences the user's perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. Therefore, an explicit strategy based on user preferences provided directly by the user, and an implicit proactive strategy, using autonomously gathered information, are compared. The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to the human-agent interaction and the voice user interface design. Furthermore, interesting tendencies are discovered that motivate future work.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aji-etal-2021-paracotta
https://aclanthology.org/2021.paclic-1.56
ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most Diverse Translation Sample Pair
We release our synthetic parallel paraphrase corpus across 17 languages: Arabic, Catalan,
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yao-van-durme-2014-information
https://aclanthology.org/P14-1090
Information Extraction over Structured Data: Question Answering with Freebase
Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a web-scale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.
false
[]
[]
null
null
null
Acknowledgments We thank the Allen Institute for Artificial Intelligence for funding this work. We are also grateful to Jonathan Berant, Tom Kwiatkowski, Qingqing Cai, Adam Lopez, Chris Callison-Burch and Peter Clark for helpful discussion and to the reviewers for insightful comments.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bannard-2007-measure
https://aclanthology.org/W07-1101
A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora
Natural languages contain many multi-word sequences that do not display the variety of syntactic processes we would expect given their phrase type, and consequently must be included in the lexicon as multiword units. This paper describes a method for identifying such items in corpora, focussing on English verb-noun combinations. In an evaluation using a set of dictionary-published MWEs we show that our method achieves greater accuracy than existing MWE extraction methods based on lexical association.
false
[]
[]
null
null
null
Thanks to Tim Baldwin, Francis Bond, Ted Briscoe, Chris Callison-Burch, Mirella Lapata, Alex Las-carides, Andrew Smith, Takaaki Tanaka and two anonymous reviewers for helpful ideas and comments.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-etal-2002-models
http://www.lrec-conf.org/proceedings/lrec2002/pdf/141.pdf
Models and Tools for Collaborative Annotation
The Annotation Graph Toolkit (AGTK) is a collection of software which facilitates development of linguistic annotation tools. AGTK provides a database interface which allows applications to use a database server for persistent storage. This paper discusses various modes of collaborative annotation and how they can be supported with tools built using AGTK and its database interface. We describe the relational database schema and API, and describe a version of the TableTrans tool which supports collaborative annotation. The remainder of the paper discusses a high-level query language for annotation graphs, along with optimizations, in support of expressive and efficient access to the annotations held on a large central server. The paper demonstrates that it is straightforward to support a variety of different levels of collaborative annotation with existing AGTK-based tools, with a minimum of additional programming effort.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant Nos. 9978056 and 9980009 (Talkbank).
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dugan-etal-2022-feasibility
https://aclanthology.org/2022.findings-acl.151
A Feasibility Study of Answer-Unaware Question Generation for Education
We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% → 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.
true
[]
[]
Quality Education
null
null
null
2022
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
rangarajan-sridhar-etal-2014-framework
https://aclanthology.org/C14-1092
A Framework for Translating SMS Messages
Short Messaging Service (SMS) has become a popular form of communication. While it is predominantly used for monolingual communication, it can be extremely useful for facilitating cross-lingual communication through statistical machine translation. In this work we present an application of statistical machine translation to SMS messages. We decouple the SMS translation task into normalization followed by translation so that one can exploit existing bitext resources and present a novel unsupervised normalization approach using distributed representation of words learned through neural networks. We describe several surrogate data that are good approximations to real SMS data feeds and use a hybrid translation approach using finite-state transducers. Both objective and subjective evaluation indicate that our approach is highly suitable for translating SMS messages.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2014
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
petukhova-bunt-2010-towards
http://www.lrec-conf.org/proceedings/lrec2010/pdf/195_Paper.pdf
Towards an Integrated Scheme for Semantic Annotation of Multimodal Dialogue Data
This paper investigates the applicability of existing dialogue act annotation schemes, designed for the analysis of spoken dialogue, to the semantic annotation of multimodal data, and the way a dialogue act annotation scheme can be extended to cover dialogue phenomena from multiple modalities.
false
[]
[]
null
null
null
This research was conducted within the project Multidimensional Dialogue Modelling, sponsored by the Netherlands Organisation for Scientific Research (NWO), under grant reference 017.003.090. We are also very thankful to the anonymous reviewers for their valuable comments.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mccoy-1984-correcting
https://aclanthology.org/P84-1090
Correcting Object-Related Misconceptions: How Should The System Respond?
This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class, sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.
false
[]
[]
null
null
null
I would like to thank Julia Hirschberg, Aravind Joshi, Martha Pollack, and Bonnie Webber for their many helpful comments concerning this work.
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
carter-1994-improving
https://aclanthology.org/A94-1010
Improving Language Models by Clustering Training Sentences
Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.
false
[]
[]
null
null
null
This research was partly funded by the Defence Research Agency, Malvern, UK, under assignment M85T51XX. I am grateful to Manny Rayner and Ian Lewin for useful comments on earlier versions of this paper. Responsibility for any remaining errors or unclarities rests in the customary place.
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2016-litway
https://aclanthology.org/W16-3004
LitWay, Discriminative Extraction for Different Bio-Events
Even a simple biological phenomenon may introduce a complex network of molecular interactions. Scientific literature is one of the trustful resources delivering knowledge of these networks. We propose LitWay, a system for extracting semantic relations from texts. LitWay utilizes a hybrid method that combines both a rule-based method and a machine learning-based method. It is tested on the SeeDev task of BioNLP-ST 2016, achieves state-of-the-art performance with the F-score of 43.2%, ranking first of all participating teams. To further reveal the linguistic characteristics of each event, we test the system solely with syntactic rules or machine learning, and different combinations of two methods. We find that it is difficult for one method to achieve good performance for all semantic relation types due to the complication of bio-events in the literature.
true
[]
[]
Good Health and Well-Being
null
null
null
2016
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pianta-etal-2008-textpro
http://www.lrec-conf.org/proceedings/lrec2008/pdf/645_paper.pdf
The TextPro Tool Suite
We present TextPro, a suite of modular Natural Language Processing (NLP) tools for analysis of Italian and English texts. The suite has been designed so as to integrate and reuse state of the art NLP components developed by researchers at FBK. The current version of the tool suite provides functions ranging from tokenization to chunking and Named Entity Recognition (NER). The system's architecture is organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, executes a specific task, and sends the resulting data to the next stage, or to the output of the pipeline. TextPro performed the best on the task of Italian NER and Italian PoS Tagging at EVALITA 2007. When tested on a number of other standard English benchmarks, TextPro confirms that it performs as state of the art system. Distributions for Linux, Solaris and Windows are available, for both research and commercial purposes. A web-service version of the system is under development.
false
[]
[]
null
null
null
This work has been funded partly by the following projects: the ONTOTEXT sponsored by the Autonomous Province of Trento under the FUP-2004 research program, and partly by the Meaning and PATExpert (http://www.patexpert.org) projects sponsored by the European Commission. We wish to thank Taku Kudo and Yuji Matsumoto for making available YamCha.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-mitamura-2004-keyword
https://link.springer.com/chapter/10.1007/978-3-540-30194-3_19
Keyword translation from English to Chinese for multilingual QA
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
collier-etal-1999-genia
https://aclanthology.org/E99-1043
The GENIA project: corpus-based knowledge acquisition and information extraction from genome research papers
We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet.
true
[]
[]
Good Health and Well-Being
Industry, Innovation and Infrastructure
null
null
1999
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false