Dataset schema (field: type, value statistics):

ID: string (length 11–54)
url: string (length 33–64)
title: string (length 11–184)
abstract: string (length 17–3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: string (9 classes)
goal2: string (9 classes)
goal3: string (1 class)
acknowledgments: string (length 28–1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
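The field listing above can be checked programmatically against individual rows. The sketch below is illustrative, not part of the dataset distribution: the `SCHEMA` mapping and `validate` helper are hypothetical names, and the sample record abbreviates the first row shown below.

```python
# Schema of the dataset as listed above: field name -> accepted Python type(s).
SCHEMA = {
    "ID": str, "url": str, "title": str,
    "abstract": (str, type(None)),
    "label_nlp4sg": bool, "task": list, "method": list,
    "goal1": (str, type(None)), "goal2": (str, type(None)),
    "goal3": (str, type(None)),
    "acknowledgments": (str, type(None)), "year": str,
    **{f"sdg{i}": bool for i in range(1, 18)},
}

def validate(record):
    """Return True iff the record carries every field with an accepted type."""
    for field, accepted in SCHEMA.items():
        if field not in record:
            return False
        if not isinstance(record[field], accepted):
            return False
    return True

# First row of the listing, with the abstract abridged.
row = {
    "ID": "kameswara-sarma-2018-learning",
    "url": "https://aclanthology.org/N18-4007",
    "title": "Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets",
    "abstract": "This research proposal describes two algorithms ...",
    "label_nlp4sg": False, "task": [], "method": [],
    "goal1": None, "goal2": None, "goal3": None,
    "acknowledgments": None, "year": "2018",
    **{f"sdg{i}": False for i in range(1, 18)},
}
print(validate(row))  # True
```

Note that `bool` deliberately covers both the 1-class and 2-class SDG columns: a column observed with a single class is still boolean-typed.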
kameswara-sarma-2018-learning
https://aclanthology.org/N18-4007
Learning Word Embeddings for Data Sparse and Sentiment Rich Data Sets
This research proposal describes two algorithms that are aimed at learning word embeddings for data sparse and sentiment rich data sets. The goal is to use word embeddings adapted for domain specific data sets in downstream applications such as sentiment classification. The first approach learns word embeddings in a supervised fashion via SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis on data sets that are of modest size. SWESA leverages document labels to jointly learn polarity-aware word embeddings and a classifier to classify unseen documents. In the second approach domain adapted (DA) word embeddings are learned by exploiting the specificity of domain specific data sets and the breadth of generic word embeddings. The new embeddings are formed by aligning corresponding word vectors using Canonical Correlation Analysis (CCA) or the related nonlinear Kernel CCA. Experimental results on binary sentiment classification tasks using both approaches for standard data sets are presented.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2018
sdg1–sdg17: false
silfverberg-etal-2017-data
https://aclanthology.org/K17-2010
Data Augmentation for Morphological Reinflection
This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: The third author has been partly sponsored by DARPA I20 in the program Low Resource Languages for Emergent Incidents (LORELEI) issued by DARPA/I20 under Contract No. HR0011-15-C-0113.
year: 2017
sdg1–sdg17: false
lehmann-1981-pragmalinguistics
https://aclanthology.org/J81-3004
Pragmalinguistics Theory and Practice
"Pragmalinguistics" or the occupation with pragmatic aspects of language can be important where computational linguists or artificial intelligence researchers are concerned with natural language interfaces to computers, with modelling dialogue behavior, or the like. What speakers intend with their utterances, how hearers react to what they hear, and what they take the words to mean will all play a role of increasing importance when natural language systems have matured enough to cope readily with syntax and semantics. Asking a sensible question to a user or giving him a reasonable response often enough depends not only on the "pure" meaning of some previous utterances but also on attitudes, expectations, and intentions that the user may have. These are partly conveyed in the user's utterances and have to be taken into account, if a system is to do more than just give factual answers to factual requests. Blakar writes on language as a means of social power. His paper is anecdotal; he draws conclusions without stating from what premises; and he is on the whole not very explicit. Gregersen postulates in his article on the relationships between social class and language usage that an economic analysis of "objective class positions" has to precede sociolinguistic studies proper, but fails to show how the results of such an analysis will influence sociolinguistics. Haeberlin writes on class-specific vocabulary as a communication problem. His ideas have been published before and in more detail.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 1981
sdg1–sdg17: false
aggarwal-mamidi-2017-automatic
https://aclanthology.org/P17-3012
Automatic Generation of Jokes in Hindi
When it comes to computational language generation systems, humour is a relatively unexplored domain, even more so for Hindi (or rather, for most languages other than English). Most researchers agree that a joke consists of two main parts: the setup and the punchline, with humour being encoded in the incongruity between the two. In this paper, we look at Dur se Dekha jokes, a restricted domain of humorous three-liner poetry in Hindi. We analyze their structure to understand how humour is encoded in them and formalize it. We then develop a system which is successfully able to generate a basic form of these jokes.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: The authors would like to thank all the evaluators for their time and help in rating the jokes. We would also like to thank Kaveri Anuranjana for all the time she spent helping us put this work on paper.
year: 2017
sdg1–sdg17: false
rohanian-etal-2020-verbal
https://aclanthology.org/2020.acl-main.259
Verbal Multiword Expressions for Identification of Metaphor
Metaphor is a linguistic device in which a concept is expressed by mentioning another. Identifying metaphorical expressions, therefore, requires a non-compositional understanding of semantics. Multiword Expressions (MWEs), on the other hand, are linguistic phenomena with varying degrees of semantic opacity, and their identification poses a challenge to computational models. This work is the first attempt at analysing the interplay of metaphor and MWE processing through the design of a neural architecture whereby classification of metaphors is enhanced by informing the model of the presence of MWEs. To the best of our knowledge, this is the first "MWE-aware" metaphor identification system, paving the way for further experiments on the complex interactions of these phenomena. The results and analyses show that the proposed architecture reaches state-of-the-art results on two different established metaphor datasets.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2020
sdg1–sdg17: false
gupta-etal-2021-disfl
https://aclanthology.org/2021.findings-acl.293
Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering
Disfluency is an under-studied topic in NLP, even though it is ubiquitous in human conversation. This is largely due to the lack of datasets containing disfluencies. In this paper, we present a new challenge question answering dataset, DISFL-QA, a derivative of SQuAD, where humans introduce contextual disfluencies in previously fluent questions. DISFL-QA contains a variety of challenging disfluencies that require a more comprehensive understanding of the text than what was necessary in prior datasets. Experiments show that the performance of existing state-of-the-art question answering models degrades significantly when tested on DISFL-QA in a zero-shot setting. We show that data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using gold data for fine-tuning. We argue that we need large-scale disfluency datasets in order for NLP models to be robust to them. The dataset is publicly available at: https://github.com/google-research-datasets/disfl-qa.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: Constructing datasets for spoken problems. We would also like to bring attention to the fact that being a speech phenomenon, a spoken setup would have been an ideal choice for disfluencies dataset. This would have accounted for higher degree of confusion, hesitations, corrections, etc. while recalling parts of context on the fly, which otherwise one may find hard to create synthetically when given enough time to think. However, such a spoken setup is extremely tedious for data collection mainly due to: (i) privacy concerns with acquiring speech data from real world speech transcriptions, (ii) creating scenarios for simulated environment is a challenging task, and (iii) relatively low yield for cases containing disfluencies. In such cases, we believe that a targeted and purely textual mode of data collection can be more effective both in terms of cost and specificity.
year: 2021
sdg1–sdg17: false
mckeown-1983-paraphrasing
https://aclanthology.org/J83-1001
Paraphrasing Questions Using Given and New Information
The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser. For example, in the question "Which users work on projects sponsored by NASA?", the speaker makes the existential presupposition that there are projects sponsored by NASA.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: This work was partially supported by an IBM fellowship and NSF grant MC78-08401. I would like to thank Dr. Aravind K. Joshi and Dr. Bonnie Webber for their invaluable comments on the style and content of this paper.
year: 1983
sdg1–sdg17: false
chiu-etal-2022-joint
https://aclanthology.org/2022.spnlp-1.5
A Joint Learning Approach for Semi-supervised Neural Topic Modeling
Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the best of our knowledge, the first effective upstream semi-supervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled-data regimes and for datasets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: AS is supported by R01MH123804, and FDV is supported by NSF IIS-1750358. All authors acknowledge insightful feedback from members of CS282 Fall 2021.
year: 2022
sdg1–sdg17: false
ws-1993-acquisition
https://aclanthology.org/W93-0100
Acquisition of Lexical Knowledge from Text
null
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 1993
sdg1–sdg17: false
vanni-miller-2002-scaling
http://www.lrec-conf.org/proceedings/lrec2002/pdf/306.pdf
Scaling the ISLE Framework: Use of Existing Corpus Resources for Validation of MT Evaluation Metrics across Languages
This paper describes a machine translation (MT) evaluation (MTE) research program which has benefited from the availability of two collections of source language texts and the results of processing these texts with several commercial MT engines (DARPA 1994, Doyon, Taylor, & White 1999). The methodology entails the systematic development of a predictive relationship between discrete, well-defined MTE metrics and specific information processing tasks that can be reliably performed with output of a given MT system. Unlike tests used in initial experiments on automated scoring (Jones and Rusk 2000), we employ traditional measures of MT output quality, selected from the International Standards for Language Engineering (ISLE) framework: Coherence, Clarity, Syntax, Morphology, General and Domain-specific Lexical robustness, to include Named-entity translation. Each test was originally validated on MT output produced by three Spanish-to-English systems (1994 DARPA MTE). We validate tests in the present work, however, with material taken from the MT Scale Evaluation research program produced by Japanese-to-English MT systems. Since Spanish and Japanese differ structurally on the morphological, syntactic, and discourse levels, a comparison of scores on tests measuring these output qualities should reveal how structural similarity, such as that enjoyed by Spanish and English, and structural contrast, such as that found between Japanese and English, affect the linguistic distinctions which must be accommodated by MT systems. Moreover, we show that metrics developed using Spanish-English MT output are equally effective when applied to Japanese-English MT output.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2002
sdg1–sdg17: false
shimizu-etal-2013-constructing
https://aclanthology.org/2013.iwslt-papers.3
Constructing a speech translation system using simultaneous interpretation data
There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2013
sdg1–sdg17: false
uehara-thepkanjana-2014-called
https://aclanthology.org/Y14-1016
The So-called Person Restriction of Internal State Predicates in Japanese in Contrast with Thai
Internal state predicates or ISPs refer to internal states of sentient beings, such as emotions, sensations and thought processes. Japanese ISPs with zero pronouns exhibit the "person restriction" in that the zero form of their subjects must be first person at the utterance time. This paper examines the person restriction of ISPs in Japanese in contrast with those in Thai, which is a zero pronominal language like Japanese. It is found that the person restriction is applicable to Japanese ISPs but not to Thai ones. This paper argues that the person restriction is not adequate to account for Japanese and Thai ISPs. We propose a new constraint to account for this phenomenon, i.e., the Experiencer-Conceptualizer Identity (ECI) Constraint, which states that "The experiencer of the situation/event must be identical with the conceptualizer of that situation/event." It is argued that both languages conventionalize the ECI constraint in ISP expressions but differ in how the ECI constraint is conventionalized.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: This research work is partially supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (No. 24520416) awarded to the first author and the Ratchadaphiseksomphot Endowment Fund of Chulalongkorn University (RES560530179-HS) awarded to the second author.
year: 2014
sdg1–sdg17: false
chen-etal-2021-system
https://aclanthology.org/2021.autosimtrans-1.4
System Description on Automatic Simultaneous Translation Workshop
This paper describes our submission to the second automatic simultaneous translation workshop at NAACL 2021. We participate in both Chinese-to-English translation directions: Chinese audio→English text and Chinese text→English text. We apply data filtering and model training techniques to improve the BLEU score and reduce the average lagging. We propose a competitive two-stage simultaneous translation pipeline composed of QuartzNet and a BPE-based Transformer, which achieves a BLEU score of 24.39 in the audio input track.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2021
sdg1–sdg17: false
oflazer-1995-error
https://aclanthology.org/1995.iwpt-1.24
Error-tolerant Finite State Recognition
Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis and spelling correction. After a description of the concepts and algorithms involved, we give examples from these two applications. In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of the agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms. With very large word lists of root and inflected forms (some containing well over 200,000 forms), it generates all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding also indicates that the approach is applicable in such cases.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: This research was supported in part by a NATO Science for Stability Project Grant TU-LANGUAGE. I would like to thank Xerox Advanced Document Systems, and Lauri Karttunen of Xerox PARC and of Rank Xerox Research Centre (Grenoble) for providing us with the two-level transducer development software. Kemal Olku and Kurtuluş Yorulmaz of Bilkent University implemented some of the algorithms.
year: 1995
sdg1–sdg17: false
mangeot-2014-motamot
http://www.lrec-conf.org/proceedings/lrec2014/pdf/128_Paper.pdf
MotàMot project: conversion of a French-Khmer published dictionary for building a multilingual lexical system
Economic issues related to information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotàMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciations and parts of speech coming from the FeM French-English-Malay dictionary. The Khmer headwords, noted in IPA in the Richer dictionary, were converted to Khmer script with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2014
sdg1–sdg17: false
francopoulo-etal-2016-providing
https://aclanthology.org/W16-4711
Providing and Analyzing NLP Terms for our Community
By its own nature, the Natural Language Processing (NLP) community is a priori the best equipped to study the evolution of its own publications, but work in this direction is rare and only recently have we seen a few attempts at charting the field. In this paper, we use the algorithms, resources, standards, tools and common practices of the NLP field to build a list of terms characteristic of ongoing research, by mining a large corpus of scientific publications, aiming at the largest possible exhaustivity and covering the largest possible time span. Study of the evolution of this term list through time reveals interesting insights into the dynamics of the field. The availability of the term database and of a large part of the corpus makes possible many further comparative studies, in addition to providing a test bed for a new graphic interface designed to perform visual time analytics of large thesauri.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2016
sdg1–sdg17: false
goldberg-elhadad-2007-svm
https://aclanthology.org/P07-1029
SVM Model Tampering and Anchored Learning: A Case Study in Hebrew NP Chunking
We study the issue of porting a known NLP method to a language with few existing NLP resources, specifically Hebrew SVM-based chunking. We introduce two SVM-based methods: Model Tampering and Anchored Learning. These allow fine-grained analysis of the learned SVM models, which provides guidance for identifying errors in the training corpus and distinguishing the role and interaction of lexical features, and eventually lets us construct a model with ∼10% error reduction. The resulting chunker is shown to be robust in the presence of noise in the training corpus, relies on fewer lexical features than was previously understood, and achieves an F-measure of 92.2 on automatically PoS-tagged text. The SVM analysis methods also provide general insight into SVM-based chunking.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2007
sdg1–sdg17: false
hendrix-1982-natural
https://aclanthology.org/J82-2002
Natural-Language Interface
A major problem faced by would-be users of computer systems is that computers generally make use of special-purpose languages familiar only to those trained in computer science. For a large number of applications requiring interaction between humans and computer systems, it would be highly desirable for machines to converse in English or other natural languages familiar to their human users.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 1982
sdg1–sdg17: false
irvine-etal-2014-american
http://www.lrec-conf.org/proceedings/lrec2014/pdf/914_Paper.pdf
The American Local News Corpus
We present the American Local News Corpus (ALNC), containing over 4 billion words of text from 2,652 online newspapers in the United States. Each article in the corpus is associated with a timestamp, state, and city. All 50 U.S. states and 1,924 cities are represented. We detail our method for taking daily snapshots of thousands of local and national newspapers and present two example corpus analyses. The first explores how different sports are talked about over time and geography. The second compares per capita murder rates with news coverage of murders across the 50 states. The ALNC is about the same size as the Gigaword corpus and is growing continuously. Version 1.0 is available for research use.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: We would like to thank the creators of the Newspaper Map website for providing us with their database of U.S. newspapers. This material is based on research sponsored by DARPA under contract HR0011-09-1-0044 and by the Johns Hopkins University Human Language Technology Center of Excellence. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.
year: 2014
sdg1–sdg17: false
wissing-etal-2004-spoken
http://www.lrec-conf.org/proceedings/lrec2004/pdf/71.pdf
A Spoken Afrikaans Language Resource Designed for Research on Pronunciation Variations
In this contribution, the design, collection, annotation and planned distribution of a new spoken language resource of Afrikaans (SALAR) is discussed. The corpus contains speech of mother tongue speakers of Afrikaans, and is intended to become a primary national language resource for phonetic research and research on pronunciation variations. As such, the corpus is designed to expose pronunciation variations due to regional accents, speech rate (normal and fast speech) and speech mode (read and spontaneous speech). The corpus is collected by the Potchefstroom Campus of North-West University, but in all phases of the corpus creation process there was close collaboration with ELIS-UG (Belgium), one of the institutions that has been engaged in the creation of the Spoken Dutch Corpus (CGN).
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2004
sdg1–sdg17: false
lu-paladini-adell-2012-beyond
https://aclanthology.org/2012.amta-commercial.10
Beyond MT: Source Content Quality and Process Automation
This document introduces the strategy implemented at CA Technologies to exploit Machine Translation (MT) at the corporate-wide level. We will introduce the different approaches followed to further improve the quality of the output of the machine translation engine once the engines have reached a maximum level of customization. Senior team support, clear communication between the parties involved and improvement measurement are the key components for the success of the initiative.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2012
sdg1–sdg17: false
liu-etal-2018-towards-less
https://aclanthology.org/D18-1297
Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method
Sequence-to-sequence neural generation models have achieved promising performance on short text conversation tasks. However, they tend to generate generic/dull responses, leading to an unsatisfying dialogue experience. We observe that in conversation tasks each query can have multiple responses, which forms a 1-to-n or m-to-n relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models will be dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights to the multiple responses of the same query and trains the standard neural generation model with these weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: null
year: 2018
sdg1–sdg17: false
ide-suderman-2009-bridging
https://aclanthology.org/W09-3004
Bridging the Gaps: Interoperability for GrAF, GATE, and UIMA
This paper explores interoperability for data represented using the Graph Annotation Framework (GrAF) (Ide and Suderman, 2007) and the data formats utilized by two general-purpose annotation systems: the General Architecture for Text Engineering (GATE) (Cunningham, 2002) and the Unstructured Information Management Architecture (UIMA). GrAF is intended to serve as a "pivot" to enable interoperability among different formats, and both GATE and UIMA are at least implicitly designed with an eye toward interoperability with other formats and tools. We describe the steps required to perform a round-trip rendering from GrAF to GATE and GrAF to UIMA CAS and back again, and outline the commonalities as well as the differences and gaps that came to light in the process.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: This work was supported by an IBM UIMA Innovation Award and National Science Foundation grant INT-0753069.
year: 2009
sdg1–sdg17: false
nangia-bowman-2019-human
https://aclanthology.org/P19-1449
Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark
The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress, however, the headroom we observe is quite limited. To reproduce the data-poor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.
label_nlp4sg: false
task: []
method: []
goal1–goal3: null
acknowledgments: This work was made possible in part by a donation to NYU from Eric and Wendy Schmidt made by recommendation of the Schmidt Futures program and by funding from Samsung Research. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan V GPU used at NYU for this research. We thank Alex Wang and Amanpreet Singh for their help with conducting GLUE evaluations, and we thank Jason Phang for his help with training the BERT model.
year: 2019
sdg1–sdg17: false
heyer-etal-1990-knowledge
https://aclanthology.org/C90-3073
Knowledge Representation and Semantics in a Complex Domain: The UNIX Natural Language Help System GOETHE
Natural language help systems for complex domains require, in our view, an integration of semantic representation and knowledge base in order to adequately and efficiently deal with cognitively misconceived user input. We present such an integration by way of the notion of a frame-semantics that has been implemented for the purposes of a natural language help system for UNIX.
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-etal-2022-hierarchical
https://aclanthology.org/2022.findings-acl.170
Hierarchical Recurrent Aggregative Generation for Few-Shot NLG
Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning to different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on few-shot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2021-employing
https://aclanthology.org/2021.rocling-1.30
Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement
x[n]; frames x_m[n], m = 0, ..., M−1, each a D × 1 vector; X = [x_0 x_1 ... x_{M−1}], a D × M matrix
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hempelmann-etal-2005-evaluating
https://aclanthology.org/W05-0211
Evaluating State-of-the-Art Treebank-style Parsers for Coh-Metrix and Other Learning Technology Environments
This paper evaluates a series of freely available, state-of-the-art parsers on a standard benchmark as well as with respect to a set of data relevant for measuring text cohesion. We outline advantages and disadvantages of existing technologies and make recommendations. Our performance report uses traditional measures based on a gold standard as well as novel dimensions for parsing evaluation. To our knowledge this is the first attempt to evaluate parsers across genres and grade levels for the implementation in learning technology.
true
[]
[]
Quality Education
null
null
This research was funded by Institute for Educations Science Grant IES R3056020018-02. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the IES. We are grateful to Philip M. McCarthy for his assistance in preparing some of our data.
2005
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
roark-etal-2009-deriving
https://aclanthology.org/D09-1034
Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing
A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.
false
[]
[]
null
null
null
Thanks to Michael Collins, John Hale and Shravan Vasishth for valuable discussions about this work. This research was supported in part by NSF Grant #BCS-0826654. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
su-etal-1989-smoothing
https://aclanthology.org/O89-1010
Smoothing Statistic Databases in a Machine Translation System
null
false
[]
[]
null
null
null
null
1989
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
raake-2002-content
http://www.lrec-conf.org/proceedings/lrec2002/pdf/107.pdf
Does the Content of Speech Influence its Perceived Sound Quality?
From a user's perspective, the speech quality of modern telecommunication systems often differs from that of traditional wireline telephone systems. One aspect is a changed sound of the interlocutor's voice, introduced by an expansion of the transmission bandwidth to wide-band, by low-bitrate coding and/or by the acoustic properties of specific user-interfaces. In order to quantify the effect of transmission on speech quality, subjective data to be correlated to transmission characteristics have to be collected in auditory tests. In this paper, a study is presented investigating to what extent the content of specific speech material used in a listening-only test impacts its perceived sound quality. A set of French speech data was presented to two different groups of listeners: French native speakers and listeners without knowledge of French. The speech material consists of different text types, such as everyday speech or semantically unpredictable sentences (SUS). The listeners were asked to rate the sound quality of the transmitted voice on a one-dimensional category rating scale. The French listeners' ratings were found to be lower for SUS, while those of the non-French listeners did not show any major dependency on text material. Hence, it can be stated that if a given speech signal is understood by the listeners, they are unable to separate form from function and reflect content in their ratings of sound.
false
[]
[]
null
null
null
This work has been carried out at the Institute of Communication Acoustics, Ruhr-University Bochum (Prof. J. Blauert, PD. U. Jekosch). It was performed in the framework of a PROCOPE co-operation with the LMA, CNRS, Marseille, France (Dr. G. Canévet). The author would like to thank U. Jekosch, S. Möller and S. Schaden for fruitful discussions and S. Meunier (CNRS Marseille) for her help in auditory test organization at the LMA.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
duan-etal-2012-twitter
https://aclanthology.org/C12-1047
Twitter Topic Summarization by Ranking Tweets using Social Influence and Content Quality
In this paper, we propose a time-line based framework for topic summarization in Twitter. We summarize topics by sub-topics along time line to fully capture rapid topic evolution in Twitter. Specifically, we rank and select salient and diversified tweets as a summary of each sub-topic. We have observed that ranking tweets is significantly different from ranking sentences in traditional extractive document summarization. We model and formulate the tweet ranking in a unified mutual reinforcement graph, where the social influence of users and the content quality of tweets are taken into consideration simultaneously in a mutually reinforcing manner. Extensive experiments are conducted on 3.9 million tweets. The results show that the proposed approach outperforms previous approaches by 14% improvement on average ROUGE-1. Moreover, we show how the content quality of tweets and the social influence of users effectively improve the performance of measuring the salience of tweets.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
goodman-1985-repairing
https://aclanthology.org/P85-1026
Repairing Reference Identification Failures by Relaxation
The goal of this work is the enrichment of human-machine interactions in a natural language
false
[]
[]
null
null
null
null
1985
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bruce-wiebe-1998-word
https://aclanthology.org/W98-1507
Word-Sense Distinguishability and Inter-Coder Agreement
It is common in NLP that the categories into which text is classified do not have fully objective definitions. Examples of such categories are lexical distinctions such as part-of-speech tags and word-sense distinctions, sentence level distinctions such as phrase attachment, and discourse level distinctions such as topic or speech-act categorization. This paper presents an approach to analyzing the agreement among human judges for the purpose of formulating a refined and more reliable set of category designations. We use these techniques to analyze the sense tags assigned by five judges to the noun interest. The initial tag set is taken from Longman's Dictionary of Contemporary English. Through this process of analysis, we automatically identify and assign a revised set of sense tags for the data. The revised tags exhibit high reliability as measured by Cohen's κ. Such techniques are important for formulating and evaluating both human and automated classification systems.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lauscher-etal-2018-investigating
https://aclanthology.org/D18-1370
Investigating the Role of Argumentation in the Rhetorical Analysis of Scientific Publications with Neural Multi-Task Learning Models
Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.
false
[]
[]
null
null
null
This research was partly funded by the German Research Foundation (DFG), grant number EC 477/5-1 (LOC-DB). We thank our four annotators for their dedicated annotation effort and the anonymous reviewers for constructive and insightful comments.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martschat-strube-2015-latent
https://aclanthology.org/Q15-1029
Latent Structures for Coreference Resolution
Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.
false
[]
[]
null
null
null
This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD scholarship. We thank the anonymous reviewers and our colleagues Benjamin Heinzerling, Yufang Hou and Nafise Moosavi for feedback on earlier drafts of this paper. Furthermore, we are grateful to Anders Björkelund for helpful comments on cost functions.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
muller-2020-pymmax2
https://aclanthology.org/2020.law-1.16
pyMMAX2: Deep Access to MMAX2 Projects from Python
pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java- and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.
false
[]
[]
null
null
null
The work described in this paper was done as part of the project DeepCurate, which is funded by the German Federal Ministry of Education and Research (BMBF) (No. 031L0204) and the Klaus Tschira Foundation, Heidelberg, Germany. We thank the anonymous reviewers for their helpful suggestions.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bhattasali-etal-2018-processing
https://aclanthology.org/W18-4904
Processing MWEs: Neurocognitive Bases of Verbal MWEs and Lexical Cohesiveness within MWEs
Multiword expressions have posed a challenge in the past for computational linguistics since they comprise a heterogeneous family of word clusters and are difficult to detect in natural language data. In this paper, we present a fMRI study based on language comprehension to provide neuroimaging evidence for processing MWEs. We investigate whether different MWEs have distinct neural bases, e.g. if verbal MWEs involve separate brain areas from non-verbal MWEs and if MWEs with varying levels of cohesiveness activate dissociable brain regions. Our study contributes neuroimaging evidence illustrating that different MWEs elicit spatially distinct patterns of activation. We also adapt an association measure, usually used to detect MWEs, as a cognitively plausible metric for language processing.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. 1607441.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
farrow-dzikovska-2009-context
https://aclanthology.org/W09-1502
Context-Dependent Regression Testing for Natural Language Processing
Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.
false
[]
[]
null
null
null
This work has been supported in part by Office of Naval Research grant N000140810043. We thank Charles Callaway for help with generation and tutoring tests.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
manuvinakurike-etal-2018-conversational
https://aclanthology.org/W18-5033
Conversational Image Editing: Incremental Intent Identification in a New Dialogue Task
We present "conversational image editing", a novel real-world application domain combining dialogue, visual information, and the use of computer vision. We discuss the importance of dialogue incrementality in this task, and build various models for incremental intent identification based on deep learning and traditional classification algorithms. We show how our model based on convolutional neural networks outperforms models based on random forests, long short term memory networks, and conditional random fields. By training embeddings based on image-related dialogue corpora, we outperform pre-trained out-of-the-box embeddings, for intention identification tasks. Our experiments also provide evidence that incremental intent processing may be more efficient for the user and could save time in accomplishing tasks.
false
[]
[]
null
null
null
This work was supported by a generous gift of Adobe Systems Incorporated to USC/ICT, and the first author's internship at Adobe Research. The first and last authors were also funded by the U.S. Army Research Laboratory. Statements and opinions expressed do not necessarily reflect the position or policy of the U.S. Government, and no official endorsement should be inferred.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
roth-2017-role
https://aclanthology.org/W17-6934
Role Semantics for Better Models of Implicit Discourse Relations
Predicting the structure of a discourse is challenging because relations between discourse segments are often implicit and thus hard to distinguish computationally. I extend previous work to classify implicit discourse relations by introducing a novel set of features on the level of semantic roles. My results demonstrate that such features are helpful, yielding results competitive with other feature-rich approaches on the PDTB. My main contribution is an analysis of improvements that can be traced back to role-based features, providing insights into why and when role semantics is helpful.
false
[]
[]
null
null
null
This research was supported in part by the Cluster of Excellence "Multimodal Computing and Interaction" of the German Excellence Initiative, and a DFG Research Fellowship (RO 4848/1-1).
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yamaguchi-etal-2021-frustratingly
https://aclanthology.org/2021.emnlp-main.249
Frustratingly Simple Pretraining Alternatives to Masked Language Modeling
Masked language modeling (MLM), a self-supervised pretraining objective, is widely used in natural language processing for learning text representations. MLM trains a model to predict a random sample of input tokens that have been replaced by a [MASK] placeholder in a multi-class setting over the entire vocabulary. When pretraining, it is common to use alongside MLM other auxiliary objectives on the token or sequence level to improve downstream performance (e.g. next sentence prediction). However, no previous work so far has examined whether other simpler objectives, linguistically intuitive or not, can be used standalone as main pretraining objectives. In this paper, we explore five simple pretraining objectives based on token-level classification tasks as replacements of MLM. Empirical results on GLUE and SQuAD show that our proposed methods achieve comparable or better performance to MLM using a BERT-BASE architecture. We further validate our methods using smaller models, showing that pretraining a model with 41% of BERT-BASE's parameters (BERT-MEDIUM) results in only a 1% drop in GLUE scores with our best objective. * Work was done while at the University of Sheffield.
false
[]
[]
null
null
null
NA is supported by EPSRC grant EP/V055712/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence. KM is supported by Amazon through the Alexa Fellowship scheme.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
herzig-etal-2016-classifying
https://aclanthology.org/W16-3609
Classifying Emotions in Customer Support Dialogues in Social Media
Providing customer support through social media channels is gaining increasing popularity. In such a context, automatic detection and analysis of the emotions expressed by customers is important, as is identification of the emotional techniques (e.g., apology, empathy, etc.) in the responses of customer service agents. Result of such an analysis can help assess the quality of such a service, help and inform agents about desirable responses, and help develop automated service agents for social media interactions. In this paper, we show that, in addition to text based turn features, dialogue features can significantly improve detection of emotions in social media customer service dialogues and help predict emotional techniques used by customer service agents.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pighin-etal-2012-analysis
http://www.lrec-conf.org/proceedings/lrec2012/pdf/337_Paper.pdf
An Analysis (and an Annotated Corpus) of User Responses to Machine Translation Output
We present an annotated resource consisting of open-domain translation requests, automatic translations and user-provided corrections collected from casual users of the translation portal http://reverso.net. The layers of annotation provide: 1) quality assessments for 830 correction suggestions for translations into English, at the segment level, and 2) 814 usefulness assessments for English-Spanish and English-French translation suggestions, a suggestion being useful if it contains at least local clues that can be used to improve translation quality. We also discuss the results of our preliminary experiments concerning 1) the development of an automatic filter to separate useful from non-useful feedback, and 2) the incorporation in the machine translation pipeline of bilingual phrases extracted from the suggestions. The annotated data, available for download from ftp://mi.eng.cam.ac.uk/data/faust/LW-UPC-Oct11-FAUST-feedback-annotation.tgz, is released under a Creative Commons license. To our best knowledge, this is the first resource of this kind that has ever been made publicly available.
false
[]
[]
null
null
null
This research has been partially funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement numbers 247762 (FAUST project, FP7-ICT-2009-4-247762) and 247914 (MOLTO project, FP7-ICT-2009-4-247914) and by the Spanish Ministry of Education and Science (OpenMT-2, TIN2009-14675-C03).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yonezaki-enomoto-1980-database
https://aclanthology.org/C80-1032
Database System Based on Intensional Logic
Model theoretic semantics of database systems is studied. As Richard Montague has done in his work, we translate statements of DDL and DML into intensional logic and the latter is interpreted with reference to a suitable model. Major advantages of this approach include (i) it lends itself to the design of database systems which can handle historical data, (ii) it provides a formal description of database semantics.
false
[]
[]
null
null
null
Our thanks are due to Mr. Kenichi Murata for fruitful discussions and encouragement and to Prof. Takuya Katayama and many other people whose ideas we have unwittingly absorbed over the years.
1980
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wing-baldridge-2011-simple
https://aclanthology.org/P11-1096
Simple supervised document geolocation with geodesic grids
We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.
false
[]
[]
null
null
null
This research was supported by a grant from the Morris Memorial Trust Fund of the New York Community Trust and from the Longhorn Innovation Fund for Technology. This paper benefited from reviewer comments and from discussion in the Natural Language Learning reading group at UT Austin, with particular thanks to Matt Lease.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schulz-etal-2020-named
https://aclanthology.org/2020.lrec-1.553
Named Entities in Medical Case Reports: Corpus and Experiments
We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset.
true
[]
[]
Good Health and Well-Being
null
null
The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator. ai. We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein.
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nakashole-mitchell-2014-language
https://aclanthology.org/P14-1095
Language-Aware Truth Assessment of Fact Candidates
This paper introduces FactChecker, a language-aware approach to truth-finding. FactChecker differs from prior approaches in that it does not rely on iterative peer voting, instead it leverages language to infer believability of fact candidates. In particular, FactChecker makes use of linguistic features to detect if a given source objectively states facts or is speculative and opinionated. To ensure that fact candidates mentioned in similar sources have similar believability, FactChecker augments objectivity with a co-mention score to compute the overall believability score of a fact candidate. Our experiments on various datasets show that FactChecker yields higher accuracy than existing approaches.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank members of the NELL team at CMU for their helpful comments. This research was supported by DARPA under contract number FA8750-13-2-0005.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
gu-etal-2022-phrase
https://aclanthology.org/2022.acl-long.444
Phrase-aware Unsupervised Constituency Parsing
Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires information from local context. In this work, we revisit LM-based constituency parsing from a phrase-centered perspective. Inspired by the natural reading process of human readers, we propose to regularize the parser with phrases extracted by an unsupervised phrase tagger to help the LM model quickly manage low-level structures. For a better understanding of high-level structures, we propose a phrase-guided masking strategy for LM to emphasize more on reconstructing nonphrase words. We show that the initial phrase regularization serves as an effective bootstrap, and phrase-guided masking improves the identification of high-level structures. Experiments on the public benchmark with two different backbone models demonstrate the effectiveness and generality of our method.
false
[]
[]
null
null
null
Research was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004, So-cialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532, IBM-Illinois Discovery Accelerator Institute, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lagarda-casacuberta-2008-applying
https://aclanthology.org/2008.eamt-1.14
Applying boosting to statistical machine translation
Boosting is a general method for improving the accuracy of a given learning algorithm under certain restrictions. In this work, AdaBoost, one of the most popular boosting algorithms, is adapted and applied to statistical machine translation. The appropriateness of this technique in this scenario is evaluated on a real translation task. Results from preliminary experiments confirm that statistical machine translation can take advantage of this technique, improving the translation quality.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
darwish-etal-2014-verifiably
https://aclanthology.org/D14-1154
Verifiably Effective Arabic Dialect Identification
Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task. However, most previous work was done on a standard fairly homogeneous dataset of dialectal user comments. In this paper, we show that training on the standard dataset does not generalize, because a unigram model may be tuned to topics in the comments and does not capture the distinguishing features of dialects. We show that effective dialect identification requires that we account for the distinguishing lexical, morphological, and phonological phenomena of dialects. We show that accounting for such can improve dialect detection accuracy by nearly 10% absolute.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
miranda-etal-2018-multilingual
https://aclanthology.org/D18-1483
Multilingual Clustering of Streaming News
Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish.
false
[]
[]
null
null
null
We would like to thank Esma Balkır, Nikos Papasarantopoulos, Afonso Mendes, Shashi Narayan and the anonymous reviewers for their feedback. This project was supported by the European H2020 project SUMMA, grant agreement 688139 (see http://www.summa-project.eu) and by a grant from Bloomberg.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
popowich-etal-1997-lexicalist
https://aclanthology.org/1997.tmi-1.9
A lexicalist approach to the translation of colloquial text
Colloquial English (CE) as found in television programs or typical conversations is different than text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stajner-popovic-2018-improving
https://aclanthology.org/W18-7006
Improving Machine Translation of English Relative Clauses with Automatic Text Simplification
This article explores the use of automatic sentence simplification as a preprocessing step in neural machine translation of English relative clauses into grammatically complex languages. Our experiments on English-to-Serbian and English-to-German translation show that this approach can reduce technical post-editing effort (number of post-edit operations) to obtain correct translation. We find that larger improvements can be achieved for more complex target languages, as well as for MT systems with lower overall performance. The improvements mainly originate from correctly simplified sentences with relatively complex structure, while simpler structures are already translated sufficiently well using the original source sentences.
false
[]
[]
null
null
null
Acknowledgements: This research was supported by the ADAPT Centre for Digital Content Technology at Dublin City University, funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106) and co-
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jansen-etal-2018-worldtree
https://aclanthology.org/L18-1433
WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-hop Inference
Developing methods of automated inference that are able to provide users with compelling human-readable justifications for why the answer to a question is correct is critical for domains such as science and medicine, where user trust and detecting costly errors are limiting factors to adoption. One of the central barriers to training question answering models on explainable inference tasks is the lack of gold explanations to serve as training data. In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering. We manually construct a corpus of detailed explanations for nearly all publicly available standardized elementary science questions (approximately 1,680 3rd through 5th grade questions) and represent these as "explanation graphs"-sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge. We also provide an explanation-centered tablestore, a collection of semi-structured tables that contain the knowledge to construct these elementary science explanations. Together, these two knowledge resources map out a substantial portion of the knowledge required for answering and explaining elementary science exams, and provide both structured and free-text training data for the explainable inference task.
true
[]
[]
Quality Education
null
null
null
2018
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
skantze-etal-2013-exploring
https://aclanthology.org/W13-4029
Exploring the effects of gaze and pauses in situated human-robot interaction
In this paper, we present a user study where a robot instructs a human on how to draw a route on a map, similar to a Map Task. This setup has allowed us to study user reactions to the robot's conversational behaviour in order to get a better understanding of how to generate utterances in incremental dialogue systems. We have analysed the participants' subjective rating, task completion, verbal responses, gaze behaviour, drawing activity, and cognitive load. The results show that users utilise the robot's gaze in order to disambiguate referring expressions and manage the flow of the interaction. Furthermore, we show that the user's behaviour is affected by how pauses are realised in the robot's speech.
false
[]
[]
null
null
null
Gabriel Skantze is supported by the Swedish research council (VR) project Incremental processing in multimodal conversational systems (2011-6237). Anna Hjalmarsson is supported by the Swedish Research Council (VR) project Classifying and deploying pauses for flow control in conversational systems . Catharine Oertel is supported by GetHomeSafe (EU 7th Framework STREP 288667).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gao-etal-2021-learning
https://aclanthology.org/2021.textgraphs-1.6
Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction
Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing students' written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nan-etal-2020-reasoning
https://aclanthology.org/2020.acl-main.141
Reasoning with Latent Structure Refinement for Document-Level Relation Extraction
Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations. *Equally Contributed. †Work done during internship at SUTD.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their thoughtful and constructive comments. This research is supported by Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martin-etal-2020-leveraging
https://aclanthology.org/2020.cllrd-1.5
Leveraging Non-Specialists for Accurate and Time Efficient AMR Annotation
Abstract Meaning Representations (AMRs), a syntax-free representation of phrase semantics (Banarescu et al., 2013), are useful for capturing the meaning of a phrase and reflecting the relationship between concepts that are referred to. However, annotating AMRs is time consuming and expensive. The existing annotation process requires expertly trained workers who have knowledge of an extensive set of guidelines for parsing phrases. In this paper, we propose a cost-saving two-step process for the creation of a corpus of AMR-phrase pairs for spatial referring expressions. The first step uses non-specialists to perform simple annotations that can be leveraged in the second step to accelerate the annotation performed by the experts. We hypothesize that our process will decrease the cost per annotation and improve consistency across annotators. Few corpora of spatial referring expressions exist and the resulting language resource will be valuable for referring expression comprehension and generation modeling.
false
[]
[]
null
null
null
This work is partially supported by the National Science Foundation award number 1849357.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
elder-etal-2020-make
https://aclanthology.org/2020.emnlp-main.230
How to Make Neural Natural Language Generation as Reliable as Templates in Task-Oriented Dialogue
Neural Natural Language Generation (NLG) systems are well known for their unreliability. To overcome this issue, we propose a data augmentation approach which allows us to restrict the output of a network and guarantee reliability. While this restriction means generation will be less diverse than if randomly sampled, we include experiments that demonstrate the tendency of existing neural generation approaches to produce dull and repetitive text, and we argue that reliability is more important than diversity for this task. The system trained using this approach scored 100% in semantic accuracy on the E2E NLG Challenge dataset, the same as a template system.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their helpful comments. This research is supported by Science Foundation Ireland in the ADAPT Centre for Digital Content Technology. The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nagesh-2015-exploring
https://aclanthology.org/N15-2006
Exploring Relational Features and Learning under Distant Supervision for Information Extraction Tasks
Information Extraction (IE) has become an indispensable tool in our quest to handle the data deluge of the information age. IE can broadly be classified into Named-entity Recognition (NER) and Relation Extraction (RE). In this thesis, we view the task of IE as finding patterns in unstructured data, which can either take the form of features and/or be specified by constraints. In NER, we study the categorization of complex relational features and outline methods to learn feature combinations through induction. We demonstrate the efficacy of induction techniques in learning: i) rules for the identification of named entities in text-the novelty is the application of induction techniques to learn in a very expressive declarative rule language ii) a richer sequence labeling model-enabling optimal learning of discriminative features. In RE, our investigations are in the paradigm of distant supervision, which facilitates the creation of large albeit noisy training data. We devise an inference framework in which constraints can be easily specified in learning relation extractors. In addition, we reformulate the learning objective in a max-margin framework. To the best of our knowledge, our formulation is the first to optimize multi-variate non-linear performance measures such as Fβ for a latent variable structure prediction task.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
iosif-etal-2012-associative
http://www.lrec-conf.org/proceedings/lrec2012/pdf/536_Paper.pdf
Associative and Semantic Features Extracted From Web-Harvested Corpora
We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., "Cause-Effect", "Instrument-Agency". Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85%.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sayeed-etal-2012-grammatical
https://aclanthology.org/N12-1085
Grammatical structures for word-level sentiment detection
Existing work in fine-grained sentiment analysis focuses on sentences and phrases but ignores the contribution of individual words and their grammatical connections. This is because of a lack of both (1) annotated data at the word level and (2) algorithms that can leverage syntactic information in a principled way. We address the first need by annotating articles from the information technology business press via crowdsourcing to provide training and testing data. To address the second need, we propose a suffix-tree data structure to represent syntactic relationships between opinion targets and words in a sentence that are opinion-bearing. We show that a factor graph derived from this data structure acquires these relationships with a small number of word-level features. We demonstrate that our supervised model performs better than baselines that ignore syntactic features and constraints.
false
[]
[]
null
null
null
This paper is based upon work supported by the US National Science Foundation under Grant IIS-0729459. Additional support came from the Cluster of Excellence "Multimodal Computing and Innovation", Germany. Jordan Boyd-Graber is also supported by US National Science Foundation Grant NSF grant #1018625 and the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072. Any opinions, findings, conclusions, or recommendations expressed are the authors' and do not necessarily reflect those of the sponsors.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
feng-2003-cooperative
https://aclanthology.org/N03-3010
Cooperative model-based language understanding
In this paper, we propose a novel Cooperative Model for natural language understanding in a dialogue system. We build this based on both Finite State Model (FSM) and Statistical Learning Model (SLM). FSM provides two strategies for language understanding and have a high accuracy but little robustness and flexibility. Statistical approach is much more robust but less accurate. Cooperative Model incorporates all the three strategies together and thus can suppress all the shortcomings of different strategies and has all the advantages of the three strategies.
false
[]
[]
null
null
null
The author would like to thank Deepak Ravichandran for his invaluable help of the whole work.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
romanov-etal-2019-adversarial
https://aclanthology.org/N19-1088
Adversarial Decomposition of Text Representation
In this paper, we present a method for adversarial decomposition of text representation. This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence. We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change. We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence. It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic. The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition. Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reeves-1982-terminology
https://aclanthology.org/1982.tc-1.12
Terminology for translators
Should a technical translator be a subject specialist with additional linguistic skills? Or a trained linguist with some specialist knowledge? It is an old debate and plainly in practice successful translators can derive from both categories. Indeed in the past entry into the profession was often largely determined by personal circumstances -an engineer who had acquired linguistic knowledge through overseas postings might turn later in his career, or as a side-line, to translating engineering texts. A language graduate, finding him or usually herself, confined to earning a living from the home, acquired knowledge of a technical area in a self-teaching process. Today, however, the enormous growth of scientific discovery and technological innovation together with the internationalisation of trade make, as we all know, the systematic training of translators a necessity. Decisions therefore have to be taken about the most efficacious methods to be adopted in the training process and the old question of linguist versus specialist recurs with fresh urgency. Or at least it would appear to. But there is a further complicating factor: the technical translator is principally concerned with the language in which the message is expressed, whereas the sender was principally concerned with the topic of the message. The sender used the special language of his area to describe and analyse the extra-linguistic reality that was his primary interest. But the translator's primary interest is the special language itself: in short a subject's terminology assumes first importance for the translator. 
And the moment we speak of terminology in this context as 'an aggregate of terms representing the system of concepts in an individual subject field'(1), we are reminded that the translator also needs to understand the principles according to which a particular terminology is established, the relationship between various monolingual terminologies and between the specialist terminologies of one language and those of another language(2). Thus the technical translator has to be an expert in three discrete disciplines: translation itself, a technical specialism and the theory and practice of terminology.
false
[]
[]
null
null
null
null
1982
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yusupov-kuratov-2018-nips
https://aclanthology.org/C18-1312
NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
We present bot#1337: a dialog system developed for the 1st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chitchat, topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub.
false
[]
[]
null
null
null
We thank Mikhail Burtsev, Luiza Sayfullina and Mikhail Pavlov for comments that greatly improved the manuscript. We would also like to thank the Reason8.ai company for providing computational resources and grant for NIPS 2017 ticket. We thank Neural Systems and Deep Learning Lab of MIPT for ideas and support.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
trawinski-2003-licensing
https://aclanthology.org/W03-1813
Licensing Complex Prepositions via Lexical Constraints
In this paper, we will investigate a cross-linguistic phenomenon referred to as complex prepositions (CPs), which is a frequent type of multiword expressions (MWEs) in many languages. Based on empirical data, we will point out the problems of the traditional treatment of CPs as complex lexical categories, and, thus, propose an analysis using the formal paradigm of the HPSG in the tradition of (Pollard and Sag, 1994). Our objective is to provide an approach to CPs which (1) convincingly explains empirical data, (2) is consistent with the underlying formal framework and does not require any extensions or modifications of the existing description apparatus, (3) is computationally tractable.
false
[]
[]
null
null
null
I would like to thank Manfred Sailer, Frank Richter, and the anonymous reviewers of the ACL-2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment in Sapporo for their interesting comments on the issue presented in this paper and Carmella Payne for help with English.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yeniterzi-oflazer-2010-syntax
https://aclanthology.org/P10-1047
Syntax-to-Morphology Mapping in Factored Phrase-Based Statistical Machine Translation from English to Turkish
We present a novel scheme to apply factored phrase-based SMT to a language pair with very disparate morphological structures. Our approach relies on syntactic analysis on the source side (English) and then encodes a wide variety of local and non-local syntactic structures as complex structural tags which appear as additional factors in the training data. On the target side (Turkish), we only perform morphological analysis and disambiguation but treat the complete complex morphological tag as a factor, instead of separating morphemes. We incrementally explore capturing various syntactic substructures as complex tags on the English side, and evaluate how our translations improve in BLEU scores. Our maximal set of source and target side transformations, coupled with some additional techniques, provide an 39% relative improvement from a baseline 17.08 to 23.78 BLEU, all averaged over 10 training and test sets. Now that the syntactic analysis on the English side is available, we also experiment with more long distance constituent reordering to bring the English constituent order close to Turkish, but find that these transformations do not provide any additional consistent tangible gains when averaged over the 10 sets.
false
[]
[]
null
null
null
We thank Joakim Nivre for providing us with the parser. This publication was made possible by the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program. The statements made herein are solely the responsibility of the authors.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2012-whitepaper
https://aclanthology.org/W12-4401
Whitepaper of NEWS 2012 Shared Task on Machine Transliteration
Transliteration is defined as phonetic translation of names across languages. Transliteration of Named Entities (NEs) is necessary in many applications, such as machine translation, corpus alignment, cross-language IR, information extraction and automatic lexicon acquisition. All such systems call for high-performance transliteration, which is the focus of shared task in the NEWS 2012 workshop. The objective of the shared task is to promote machine transliteration research by providing a common benchmarking platform for the community to evaluate the state-of-the-art technologies.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-lee-2014-postech
https://aclanthology.org/W14-1709
POSTECH Grammatical Error Correction System in the CoNLL-2014 Shared Task
This paper describes the POSTECH grammatical error correction system. Various methods are proposed to correct errors such as rule-based, probability n-gram vector approaches and router-based approach. Google N-gram count corpus is used mainly as the correction resource. Correction candidates are extracted from NUCLE training data and each candidate is evaluated with development data to extract high precision rules and n-gram frames. Out of 13 participating teams, our system is ranked 4th on both the original and revised annotation.
false
[]
[]
null
null
null
This research was supported by the MSIP(The Ministry of Science, ICT and Future Planning), Korea and Microsoft Research, under IT/SW Creative research program supervised by the NIPA(National IT Industry Promotion Agency) (NIPA-2013-H0503-13-1006) and this research was supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology(2010-0019523).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aoki-yamamoto-2007-opinion
https://aclanthology.org/Y07-1007
Opinion Extraction based on Syntactic Pieces
This paper addresses a task of opinion extraction from given documents and its positive/negative classification. We propose a sentence classification method using a notion of syntactic piece. Syntactic piece is a minimum unit of structure, and is used as an alternative processing unit of n-gram and whole tree structure. We compute its semantic orientation, and classify opinion sentences into positive or negative. We have conducted an experiment on more than 5000 opinion sentences of multiple domains, and have proven that our approach attains high performance at 91% precision.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
junczys-dowmunt-grundkiewicz-2016-phrase
https://aclanthology.org/D16-1161
Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction
In this work, we study parameter tuning towards the M2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M2 and offer partial solutions. We find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M2 over previously 41.75%, by an SMT system with neural features) while being trained on the same, publicly available data. Our newly introduced dense and sparse features widen that gap, and we improve the state-of-the-art to 49.49% M2.
false
[]
[]
null
null
null
The authors would like to thank Colin Cherry for his help with Batch Mira hyper-parameters and Kenneth Heafield for many helpful comments and discussions. This work was partially funded by the Polish National Science Centre (Grant No. 2014/15/N/ST6/02330) and by Facebook. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Facebook.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
childs-etal-1998-coreference
https://aclanthology.org/X98-1010
Coreference Resolution Strategies From an Application Perspective
As part of our TIPSTER III research program, we have continued our research into strategies to resolve coreferences within a free text document; this research was begun during our TIPSTER II research program. In the TIPSTER II Proceedings paper, "An Evaluation of Coreference Resolution Strategies for Acquiring Associated Information," the goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1) name variations, 2) descriptive phrases, and 3) location information.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
costa-jussa-etal-2020-multilingual
https://aclanthology.org/2020.cl-2.1
Multilingual and Interlingual Semantic Representations for Natural Language Processing: A Brief Introduction
We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue's five articles in the context of our fast-changing field, explaining our motivation for this project. We offer a brief summary of the work in the issue, which includes developments on lexical and sentential semantic representations, from symbolic and neural perspectives. 1. Motivation This special issue arose from our observation of two trends in the fields of computational linguistics and natural language processing. The first trend is a matter of increasing demand for language technologies that serve diverse populations, particularly those whose languages have received little attention in the research community.
false
[]
[]
null
null
null
We thank Kyle Lo for assistance with the S2ORC data. MRC is supported in part by a Google Faculty Research Award 2018, Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigación, through the postdoctoral senior grant Ramón y Cajal, the contract TEC2015-69266-P (MINECO/FEDER,EU) and the contract PCIN-2017-079 (AEI/MINECO). N. A. S. is supported by National Science Foundation grant IIS-1562364. C. E. B. is funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kotlerman-etal-2012-sentence
https://aclanthology.org/S12-1005
Sentence Clustering via Projection over Term Clusters
This paper presents a novel sentence clustering scheme based on projecting sentences over term clusters. The scheme incorporates external knowledge to overcome lexical variability and small corpus size, and outperforms common sentence clustering methods on two reallife industrial datasets.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
clodfelder-2003-lsa
https://aclanthology.org/W03-0319
An LSA Implementation Against Parallel Texts in French and English
This paper presents the results of applying the Latent Semantic Analysis (LSA) methodology to a small collection of parallel texts in French and English. The goal of the analysis was to determine what the methodology might reveal regarding the difficulty level of either the machine-translation (MT) task or the text-alignment (TA) task. In a perfectly parallel corpus where the texts are exactly aligned, it is expected that the word distributions between the two languages be perfectly symmetrical. Where they are symmetrical, the difficulty level of the machine-translation or the text-alignment task should be low. The results of this analysis show that even in a perfectly aligned corpus, the word distributions between the two languages deviate and because they do, LSA may contribute much to our understanding of the difficulty of the MT and TA tasks. 1. Credits This paper discusses an implementation of the Latent Semantic Analysis (LSA) methodology against a small collection of perfectly parallel texts in French and English 1. The texts were made available by the HLT-NAACL and are taken from daily House journals of the Canadian Parliament. They were edited by Ulrich Germann. The LSA procedures were implemented in R, a system for statistical computation and graphics, and were written by John C.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
suglia-etal-2020-compguesswhat
https://aclanthology.org/2020.acl-main.682
CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning
Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations. To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with abstract and situated attributes. By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44.27). In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06%).
false
[]
[]
null
null
null
We thank Arash Eshghi and Yonatan Bisk for fruitful discussions in the early stages of the project.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grishman-ksiezyk-1990-causal
https://aclanthology.org/C90-3023
Causal and Temporal Text Analysis: The Role of the Domain Model
It is generally recognized that interpreting natural language input may require access to detailed knowledge of the domain involved. This is particularly true for multi-sentence discourse, where we must not only analyze the individual sentences but also establish the connections between them. Simple semantic constraints -- an object classification hierarchy, a catalog of meaningful semantic relations -- are not sufficient. However, the appropriate structure for integrating a language analyzer with a complex dynamic (time-dependent) model -- one which can scale up beyond 'toy' domains -- is not yet well understood. To explore these design issues, we have developed a system which uses a rich model of a real, nontrivial piece of equipment in order to analyze, in depth, reports of the failure of this equipment. This system has been fully implemented and demonstrated on actual failure reports. In outlining this system over the next few pages, we focus particularly on the language analysis components which require detailed domain knowledge, and how these requirements have affected the design of the domain model.
false
[]
[]
null
null
null
This research was supported by the Defense Advanced Research Projects Agency under Contract N00014-85-K-0163 from the Office of Naval Research.
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seyffarth-2019-modeling
https://aclanthology.org/W19-1003
Modeling the Induced Action Alternation and the Caused-Motion Construction with Tree Adjoining Grammar (TAG) and Semantic Frames
The induced action alternation and the caused-motion construction are two phenomena that allow English verbs to be interpreted as motion-causing events. This is possible when a verb is used with a direct object and a directional phrase, even when the verb does not lexically signify causativity or motion, as in "Sylvia laughed Mary off the stage". While participation in the induced action alternation is a lexical property of certain verbs, the caused-motion construction is not anchored in the lexicon. We model both phenomena with XMG-2 and use the TuLiPA parser to create compositional semantic frames for example sentences. We show how such frames represent the key differences between these two phenomena at the syntax-semantics interface, and how TAG can be used to derive distinct analyses for them.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wilks-1976-semantics
https://aclanthology.org/1976.earlymt-1.20
Semantics and world knowledge in MT
I presented very simple and straightforward paragraphs from recent newspapers to show that even the most congenial real texts require, for their translation, some notions of inference, knowledge, and what I call "preference rules". In 1974, I went for a year to the Institute for Semantic and Cognitive Studies in Switzerland and then to the University of Edinburgh, where I have worked on theoretical defects in that Stanford model and ways of overcoming them in a later implementation.
false
[]
[]
null
null
null
null
1976
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
di-marco-navigli-2013-clustering
https://aclanthology.org/J13-3008
Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction
Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query. In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines.
false
[]
[]
null
null
null
The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI no. 259234
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lisowska-underwood-2006-rote
http://www.lrec-conf.org/proceedings/lrec2006/pdf/187_pdf.pdf
ROTE: A Tool to Support Users in Defining the Relative Importance of Quality Characteristics
This paper describes the Relative Ordering Tool for Evaluation (ROTE) which is designed to support the process of building a parameterised quality model for evaluation. It is a very simple tool which enables users to specify the relative importance of quality characteristics (and associated metrics) to reflect the users' particular requirements. The tool allows users to order any number of quality characteristics by comparing them in a pair-wise fashion. The tool was developed in the context of a collaborative project developing a text mining system. A full scale evaluation of the text mining system was designed and executed for three different users and the ROTE tool was successfully applied by those users during that process. The tool will be made available for general use by the evaluation community.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xiao-guo-2012-multi
https://aclanthology.org/C12-1174
Multi-View AdaBoost for Multilingual Subjectivity Analysis
Subjectivity analysis has received increasing attention in natural language processing field. Most of the subjectivity analysis works however are conducted on single languages. In this paper, we propose to perform multilingual subjectivity analysis by combining multi-view learning and AdaBoost techniques. We aim to show that by boosting multi-view classifiers we can develop more effective multilingual subjectivity analysis tools for new languages as well as increase the classification performance for English data. We empirically evaluate our two multi-view AdaBoost approaches on the multilingual MPQA dataset. The experimental results show the multi-view AdaBoost approaches significantly outperform existing monolingual and multilingual methods.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maxwell-2015-accounting
https://aclanthology.org/W15-4809
Accounting for Allomorphy in Finite-state Transducers
Building morphological parsers with existing finite state toolkits can result in something of a mismatch between the programming language of the toolkit and the linguistic concepts familiar to the average linguist. We illustrate this mismatch with a particular linguistic construct, suppletive allomorphy, and discuss ways to encode suppletive allomorphy in the Stuttgart Finite State tools (sfst). The complexity of the general solution motivates our work in providing an alternative formalism for morphology and phonology, one which can be translated automatically into sfst or other morphological parsing engines.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nilsson-etal-2007-generalizing
https://aclanthology.org/P07-1122
Generalizing Tree Transformations for Inductive Dependency Parsing
Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups. The results indicate that the beneficial effect of pseudo-projective parsing is independent of parsing strategy but sensitive to language or treebank specific properties. By contrast, the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liang-etal-2021-iterative-multi
https://aclanthology.org/2021.findings-emnlp.152
An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner. However, previous approaches do not well exploit the interactive relations among three subtasks and do not pertinently leverage the easily available document-level labeled domain/sentiment knowledge, which restricts their performances. To address these issues, we propose a novel Iterative Multi-Knowledge Transfer Network (IMKTN) for end-to-end ABSA. For one thing, through the interactive correlations between the ABSA subtasks, our IMKTN transfers the task-specific knowledge from any two of the three subtasks to another one at the token level by utilizing a well-designed routing algorithm, that is, any two of the three subtasks will help the third one. For another, our IMKTN pertinently transfers the document-level knowledge, i.e., domainspecific and sentiment-related knowledge, to the aspect-level subtasks to further enhance the corresponding performance. Experimental results on three benchmark datasets demonstrate the effectiveness and superiority of our approach.
false
[]
[]
null
null
null
The research work described in this paper has been supported by the National Key R&D Program of China (2019YFB1405200) and the National Nature Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shi-etal-2021-neural
https://aclanthology.org/2021.emnlp-main.298
Neural Natural Logic Inference for Interpretable Question Answering
Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses. A QA system then determines if the supporting knowledge bases, regarded as potential premises, entail the hypotheses. In this paper, we investigate a neural-symbolic QA approach that integrates natural logic reasoning within deep learning architectures, towards developing effective and yet explainable question answering models. The proposed model gradually bridges a hypothesis and candidate premises following natural logic inference steps to build proof paths. Entailment scores between the acquired intermediate hypotheses and candidate premises are measured to determine if a premise entails the hypothesis. As the natural logic reasoning process forms a tree-like, hierarchical structure, we embed hypotheses and premises in a Hyperbolic space rather than Euclidean space to acquire more precise representations. Empirically, our method outperforms prior work on answering multiple-choice science questions, achieving the best results on two publicly available datasets. The natural logic inference process inherently provides evidence to help explain the prediction process.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (2020AAA0106501), and the National Natural Science Foundation of China (61976073).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schofield-mehr-2016-gender
https://aclanthology.org/W16-0204
Gender-Distinguishing Features in Film Dialogue
Film scripts provide a means of examining generalized western social perceptions of accepted human behavior. In particular, we focus on how dialogue in films describes gender, identifying linguistic and structural differences in speech for men and women and in same and different-gendered pairs. Using the Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil et al., 2012a), we identify significant linguistic and structural features of dialogue that differentiate genders in conversation and analyze how those effects relate to existing literature on gender in film. Author's Note (July 2020) The subsequent work below makes gender determinations based on a binary assignment assessed using statistics from most common baby names. We regret and recommend against this heuristic for several reasons:
false
[]
[]
null
null
null
We thank C. Danescu-Niculescu-Mizil, L. Lee, D. Mimno, J. Hessel, and the members of the NLP and Social Interaction course at Cornell for their support and ideas in developing this paper. We thank the workshop chairs and our anonymous reviewers for their thoughtful comments and suggestions.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hying-2007-corpus
https://aclanthology.org/W07-1601
A Corpus-Based Analysis of Geometric Constraints on Projective Prepositions
This paper presents a corpus-based method for automatic evaluation of geometric constraints on projective prepositions. The method is used to find an appropriate model of geometric constraints for a two-dimensional domain. Two simple models are evaluated against the uses of projective prepositions in a corpus of natural language dialogues to find the best parameters of these models. Both models cover more than 96% of the data correctly. An extra treatment of negative uses of projective prepositions (e.g. A is not above B) improves both models getting close to full coverage.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
goutte-etal-2004-aligning
https://aclanthology.org/P04-1064
Aligning words using matrix factorisation
Aligning words from sentences which are mutual translations is an important problem in different settings, such as bilingual terminology extraction, Machine Translation, or projection of linguistic features. Here, we view word alignment as matrix factorisation. In order to produce proper alignments, we show that factors must satisfy a number of constraints such as orthogonality. We then propose an algorithm for orthogonal non-negative matrix factorisation, based on a probabilistic model of the alignment data, and apply it to word alignment. This is illustrated on a French-English alignment task from the Hansard.
false
[]
[]
null
null
null
We acknowledge the Machine Learning group at XRCE for discussions related to the topic of word alignment. We would like to thank the three anonymous reviewers for their comments.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schulte-im-walde-2010-comparing
http://www.lrec-conf.org/proceedings/lrec2010/pdf/632_Paper.pdf
Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters
This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate-noun relationships by latent semantic clusters; and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate-noun selectional preferences by WordNet classes. We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does itself not always beat the powerful frequency baseline.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nightingale-tanaka-2003-comparing
https://aclanthology.org/W03-0321
Comparing the Sentence Alignment Yield from Two News Corpora Using a Dictionary-Based Alignment System
null
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
noklestad-softeland-2007-tagging
https://aclanthology.org/W07-2436
Tagging a Norwegian Speech Corpus
This paper describes work on the grammatical tagging of a newly created Norwegian speech corpus: the first corpus of modern Norwegian speech. We use an iterative procedure to perform computer-aided manual tagging of a part of the corpus. This material is then used to train the final taggers, which are applied to the rest of the corpus. We experiment with taggers that are based on three different data-driven methods: memory-based learning, decision trees, and hidden Markov models, and find that the decision tree tagger performs best. We also test the effects of removing pauses and/or hesitations from the material before training and applying the taggers. We conclude that these attempts at cleaning up hurt the performance of the taggers, indicating that such material, rather than functioning as noise, actually contributes important information about the grammatical function of the words in their nearest context.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nikulasdottir-etal-2018-open
https://aclanthology.org/L18-1495
Open ASR for Icelandic: Resources and a Baseline System
Developing language resources is an important task when creating a speech recognition system for a less-resourced language. In this paper we describe available language resources and their preparation for use in a large vocabulary speech recognition (LVSR) system for Icelandic. The content of a speech corpus is analysed and training and test sets compiled, a pronunciation dictionary is extended, and text normalization for language modeling performed. An ASR system based on neural networks is implemented using these resources and tested using different acoustic training sets. Experimental results show a clear increase in word-error-rate (WER) when using smaller training sets, indicating that extension of the speech corpus for training would improve the system. When testing on data with known vocabulary only, the WER is 7.99%, but on an open vocabulary test set the WER is 15.72%. Furthermore, impact of the content of the acoustic training corpus is examined. The current results indicate that an ASR system could profit from carefully selected phonotactical data, however, further experiments are needed to verify this impression.
false
[]
[]
null
null
null
The project Open ASR for Icelandic was supported by the Icelandic Language Technology Fund (ILTF).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-ng-2013-beam
https://aclanthology.org/N13-1050
A Beam-Search Decoder for Normalization of Social Media Text with Application to Machine Translation
Social media texts are written in an informal style, which hinders other natural language processing (NLP) applications such as machine translation. Text normalization is thus important for processing of social media text. Previous work mostly focused on normalizing words by replacing an informal word with its formal form. In this paper, to further improve other downstream NLP applications, we argue that other normalization operations should also be performed, e.g., missing word recovery and punctuation correction. A novel beam-search decoder is proposed to effectively integrate various normalization operations. Empirical results show that our system obtains statistically significant improvements over two strong baselines in both normalization and translation tasks, for both Chinese and English.
false
[]
[]
null
null
null
We thank all the anonymous reviewers for their comments which have helped us improve this paper. This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hazarika-etal-2018-conversational
https://aclanthology.org/N18-1193
Conversational Memory Network for Emotion Recognition in Dyadic Dialogue Videos
Emotion recognition in conversations is crucial for the development of empathetic machines. Present methods mostly ignore the role of inter-speaker dependency relations while classifying emotions in conversations. In this paper, we address recognizing utterance-level emotions in dyadic conversational videos. We propose a deep neural framework, termed conversational memory network, which leverages contextual information from the conversation history. The framework takes a multimodal approach comprising audio, visual and textual features with gated recurrent units to model past utterances of each speaker into memories. Such memories are then merged using attention-based hops to capture inter-speaker dependencies. Experiments show an accuracy improvement of 3-4% over the state of the art.
false
[]
[]
null
null
null
This research was supported in part by the National Natural Science Foundation of China under Grant no. 61472266 and by the National University of Singapore (Suzhou) Research Institute, 377 Lin Quan Street, Suzhou Industrial Park, Jiang Su, People's Republic of China, 215123.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jin-1991-translation
https://aclanthology.org/1991.mtsummit-papers.14
Translation Accuracy and Translation Efficiency
ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gene-2021-post
https://aclanthology.org/2021.triton-1.22
The Post-Editing Workflow: Training Challenges for LSPs, Post-Editors and Academia
Language technology is already largely adopted by most Language Service Providers (LSPs) and integrated into their traditional translation processes. In this context, there are many different approaches to applying Post-Editing (PE) of a machine translated text, involving different workflow processes and steps that can be more or less effective and favorable. In the present paper, we propose a 3-step Post-Editing Workflow (PEW). Drawing from industry insight, this paper aims to provide a basic framework for LSPs and Post-Editors on how to streamline Post-Editing workflows in order to improve quality, achieve higher profitability and better return on investment and standardize and facilitate internal processes in terms of management and linguist effort when it comes to PE services. We argue that a comprehensive PEW consists in three essential tasks: Pre-Editing, Post-Editing and Annotation/Machine Translation (MT) evaluation processes (Guerrero, 2018) supported by three essential roles: Pre-Editor, Post-Editor and Annotator (Gene, 2020). Furthermore, the present paper demonstrates the training challenges arising from this PEW, supported by empirical research results, as reflected in a digital survey among language industry professionals (Gene, 2020), which was conducted in the context of a Post-Editing Webinar. Its sample comprised 51 representatives of LSPs and 12 representatives of SLVs (Single Language Vendors).
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
quasthoff-wolff-2000-flexible
http://www.lrec-conf.org/proceedings/lrec2000/pdf/226.pdf
A Flexible Infrastructure for Large Monolingual Corpora
In this paper we describe a flexible and portable infrastructure for setting up large monolingual language corpora. The approach is based on collecting a large amount of monolingual text from various sources. The input data is processed on the basis of a sentence-based text segmentation algorithm. We describe the entry structure of the corpus database as well as various query types and tools for information extraction. Among them, the extraction and usage of sentence-based word collocations is discussed in detail. Finally we give an overview of different applications for this language resource. A WWW interface allows for public access to most of the data and information extraction tools (http://wortschatz.uni-leipzig.de).
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false