Schema (one record per paper; fields in order):
  ID               string, 11–54 chars
  url              string, 33–64 chars
  title            string, 11–184 chars
  abstract         string, 17–3.87k chars
  label_nlp4sg     bool, 2 classes
  task             sequence
  method           sequence
  goal1            string, 9 classes
  goal2            string, 9 classes
  goal3            string, 1 class
  acknowledgments  string, 28–1.28k chars
  year             string, 4 chars
  sdg1             bool, 1 class
  sdg2             bool, 1 class
  sdg3             bool, 2 classes
  sdg4             bool, 2 classes
  sdg5             bool, 2 classes
  sdg6             bool, 1 class
  sdg7             bool, 1 class
  sdg8             bool, 2 classes
  sdg9             bool, 2 classes
  sdg10            bool, 2 classes
  sdg11            bool, 2 classes
  sdg12            bool, 1 class
  sdg13            bool, 2 classes
  sdg14            bool, 1 class
  sdg15            bool, 1 class
  sdg16            bool, 2 classes
  sdg17            bool, 2 classes
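The record structure above can be sketched in a few lines. This is a minimal illustration, not the dataset's loading code: the two records are abbreviated versions of entries below (most fields omitted for brevity), and the filter shown is just one typical operation over such records.

```python
# Each record is modeled here as a plain dict following the schema above.
# Only a few fields are kept; the full schema has task/method/goal/sdg fields too.
records = [
    {
        "ID": "lakomkin-etal-2017-gradascent",
        "url": "https://aclanthology.org/W17-5222",
        "year": "2017",          # year is stored as a 4-character string
        "label_nlp4sg": False,
    },
    {
        "ID": "stolcke-1995-efficient",
        "url": "https://aclanthology.org/J95-2002",
        "year": "1995",
        "label_nlp4sg": False,
    },
]

# Typical filter: IDs of papers published after 2000.
# Note the int() cast, since "year" is a string field.
recent = [r["ID"] for r in records if int(r["year"]) > 2000]
print(recent)  # → ['lakomkin-etal-2017-gradascent']
```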
lakomkin-etal-2017-gradascent
https://aclanthology.org/W17-5222
GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection
The WASSA 2017 EmoInt shared task has the goal of predicting emotion intensity values of tweet messages. Given the text of a tweet and its emotion category (anger, joy, fear, or sadness), the participants were asked to build a system that assigns emotion intensity values. Emotion intensity estimation is a challenging problem given the short length of tweets, the noisy structure of the text, and the lack of annotated data. To solve this problem, we developed an ensemble of two neural models, processing input on the character and word level, combined with a lexicon-driven system. The correlation scores across all four emotions are averaged to determine the bottom-line competition metric, and our system ranks fourth in the full intensity range and third in the 0.5–1 intensity range among 23 systems at the time of writing (June 2017).
false
[]
[]
null
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642667 (SECURE). We would like to thank Dr. Cornelius Weber and Dr. Sven Magg for their helpful comments and suggestions.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stolcke-1995-efficient
https://aclanthology.org/J95-2002
An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities
We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. Probabilities (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.
false
[]
[]
null
null
null
Thanks are due Dan Jurafsky and Steve Omohundro for extensive discussions on the topics in this paper, and Fernando Pereira for helpful advice and pointers. Jerry Feldman, Terry Regier, Jonathan Segal, Kevin Thompson, and the anonymous reviewers provided valuable comments for improving content and presentation.
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
radev-etal-2003-evaluation
https://aclanthology.org/P03-1048
Evaluation Challenges in Large-Scale Document Summarization
We present a large-scale meta-evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
o-donnaile-2014-tools
https://aclanthology.org/W14-4603
Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore
The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in an ergonomic fashion. They are of particular benefit to intermediate-level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programming and database structures and ideas behind Multidict, Wordlink and Clilstore are described.
false
[]
[]
null
null
null
Multidict and Wordlink were first developed under the EC-financed POOLS-T project. Clilstore was developed, and Multidict and Wordlink further developed, under the TOOLS project financed by the EC's Lifelong Learning Programme. Much of the credit for their development goes to the suggestions, user testing and feedback by the project teams from 9 different European countries, and in particular to the project leader Kent Andersen. Wordlink was inspired by Kent's Textblender program. We are grateful to Kevin Scannell for the Irish lemmatization table used by Multidict, and to Mìcheal Bauer and Will Robertson for the Scottish Gaelic lemmatization table.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nothman-etal-2014-command
https://aclanthology.org/W14-5207
Command-line utilities for managing and exploring annotated corpora
Users of annotated corpora frequently perform basic operations such as inspecting the available annotations, filtering documents, formatting data, and aggregating basic statistics over a corpus. While these may be easily performed over flat text files with stream-processing UNIX tools, similar tools for structured annotation require custom design. Dawborn and Curran (2014) have developed a declarative description and storage for structured annotation, on top of which we have built generic command-line utilities. We describe the most useful utilities, some for quick data exploration, others for high-level corpus management, with reference to comparable UNIX utilities. We suggest that such tools are universally valuable for working with structured corpora; in turn, their utility promotes common storage and distribution formats for annotated text.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joshi-schabes-1989-evaluation
https://aclanthology.org/H89-2053
An Evaluation of Lexicalization in Parsing
In this paper, we evaluate a two-pass parsing strategy proposed for the so-called 'lexicalized' grammar. In 'lexicalized' grammars (Schabes, Abeillé and Joshi, 1988), each elementary structure is systematically associated with a lexical item called the anchor. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the anchor. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their anchor; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser.
false
[]
[]
null
null
null
null
1989
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gonzalez-etal-2019-elirf
https://aclanthology.org/S19-2031
ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection
This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and it obtains competitive and promising results on Contextual Emotion Detection in Text.
false
[]
[]
null
null
null
This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R) and the GiSPRO project (PROMETEU/2018/176). Work of José-Ángel González is also financed by Universitat Politècnica de València under grant PAID-01-17.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gardent-perez-beltrachini-2012-using
https://aclanthology.org/W12-4614
Using FB-LTAG Derivation Trees to Generate Transformation-Based Grammar Exercises
Using a Feature-Based Lexicalised Tree Adjoining Grammar (FB-LTAG), we present an approach for generating pairs of sentences that are related by a syntactic transformation and we apply this approach to create language learning exercises. We argue that the derivation trees of an FB-LTAG provide a good level of representation for capturing syntactic transformations. We relate our approach to previous work on sentence reformulation, question generation and grammar exercise generation. We evaluate precision and linguistic coverage. And we demonstrate the genericity of the proposal by applying it to a range of transformations including the Passive/Active transformation, the pronominalisation of an NP, the assertion / yes-no question relation and the assertion / wh-question transformation.
false
[]
[]
null
null
null
The research presented in this paper was partially supported by the European Fund for Regional Development within the framework of the INTER-REG IV A Allegro Project. We would also like to thank German Kruszewski and Elise Fetet for their help in developing and annotating the test data.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kawahara-etal-2002-construction
http://www.lrec-conf.org/proceedings/lrec2002/pdf/302.pdf
Construction of a Japanese Relevance-tagged Corpus
This paper describes our corpus annotation project. The annotated corpus has relevance tags which consist of predicate-argument relations, relations between nouns, and coreferences. To construct this relevance-tagged corpus, we investigated a large corpus and established the specification of the annotation. This paper shows the specification and difficult tagging problems which have emerged through the annotation so far.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
oepen-etal-2005-holistic
https://aclanthology.org/2005.eamt-1.27
Holistic regression testing for high-quality MT: some methodological and technological reflections
We review the techniques and tools used for regression testing, the primary quality assurance measure, in a multi-site research project working towards a high-quality Norwegian-English MT demonstrator. A combination of hand-constructed test suites, domain-specific corpora, specialized software tools, and somewhat rigid release procedures is used for semi-automated diagnostic and regression evaluation. Based on project-internal experience so far, we comment on a range of methodological aspects and desiderata for systematic evaluation in MT development and show analogies to evaluation work in other NLP tasks.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dunn-adams-2020-geographically
https://aclanthology.org/2020.lrec-1.308
Geographically-Balanced Gigaword Corpora for 50 Language Varieties
While text corpora have been steadily increasing in overall size, even very large corpora are not designed to represent global population demographics. For example, recent work has shown that existing English gigaword corpora over-represent inner-circle varieties from the US and the UK (Dunn, 2019b). To correct implicit geographic and demographic biases, this paper uses country-level population demographics to guide the construction of gigaword web corpora. The resulting corpora explicitly match the ground-truth geographic distribution of each language, thus equally representing language users from around the world. This is important because it ensures that speakers of under-resourced language varieties (i.e., Indian English or Algerian French) are represented, both in the corpora themselves but also in derivative resources like word embeddings.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2017-crowd
https://aclanthology.org/D17-1205
CROWD-IN-THE-LOOP: A Hybrid Approach for Annotating Semantic Roles
Crowdsourcing has proven to be an effective method for generating labeled data for a range of NLP tasks. However, multiple recent attempts of using crowdsourcing to generate gold-labeled training data for semantic role labeling (SRL) reported only modest results, indicating that SRL is perhaps too difficult a task to be effectively crowdsourced. In this paper, we postulate that while producing SRL annotation does require expert involvement in general, a large subset of SRL labeling tasks is in fact appropriate for the crowd. We present a novel workflow in which we employ a classifier to identify difficult annotation tasks and route each task either to experts or crowd workers according to their difficulties. Our experimental evaluation shows that the proposed approach reduces the workload for experts by over two-thirds, and thus significantly reduces the cost of producing SRL annotation at little loss in quality.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ashida-etal-2020-building
https://aclanthology.org/2020.aacl-srw.9
Building a Part-of-Speech Tagged Corpus for Drenjongke (Bhutia)
This research paper reports on the generation of the first Drenjongke corpus based on texts taken from a phrase book for beginners, written in the Tibetan script. A corpus of sentences was created after correcting errors in the text scanned through optical character reading (OCR). A total of 34 Part-of-Speech (PoS) tags were defined based on manual annotation performed by the three authors, one of whom is a native speaker of Drenjongke. The first corpus of the Drenjongke language comprises 275 sentences and 1379 tokens, which we plan to expand with other materials to promote further studies of this language.
false
[]
[]
null
null
null
The authors are grateful to Jin-Dong Kim and three anonymous reviewers for their feedback on the paper, Mamoru Komachi for insightful discussions regarding the annotation process, Arseny Tolmachev for the post-acceptance mentorship, and the AACL-IJCNLP SRW committee members for providing support of various kinds. Of course, all remaining errors are of our own. Thanks also go to Jigmee Wangchuk Bhutia and Lopen Karma Gyaltsen Drenjongpo for allowing us to edit and publish the contents of the phrase book.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuribayashi-etal-2020-language
https://aclanthology.org/2020.acl-main.47
Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese
We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LM-based method is valid for analyzing word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LM-based method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by large-scale experiments.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pruksachatkun-etal-2020-intermediate
https://aclanthology.org/2020.acl-main.467
Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.
false
[]
[]
null
null
null
This project has benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2002-domain
http://www.lrec-conf.org/proceedings/lrec2002/pdf/351.pdf
A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping
In this paper, we present an unsupervised hybrid text-mining approach to automatic acquisition of domain-relevant terms and their relations. We deploy the TFIDF-based term classification method to acquire domain-relevant single-word terms. Further, we apply two strategies in order to learn lexico-syntactic patterns which indicate paradigmatic and domain-relevant syntagmatic relations between the extracted terms. The first one uses an existing ontology as initial knowledge for learning lexico-syntactic patterns, while the second is based on different collocation acquisition methods to deal with free-word-order languages like German. This domain-adaptive method yields good results even when trained on relatively small training corpora. It can be applied to different real-world applications which need a domain-relevant ontology, for example, information extraction, information retrieval or text classification.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
feldman-etal-2006-cross
http://www.lrec-conf.org/proceedings/lrec2006/pdf/554_pdf.pdf
A Cross-language Approach to Rapid Creation of New Morpho-syntactically Annotated Resources
We take a novel approach to rapid, low-cost development of morpho-syntactically annotated resources without using parallel corpora or bilingual lexicons. The overall research question is how to exploit language resources and properties to facilitate and automate the creation of morphologically annotated corpora for new languages. This portability issue is especially relevant to minority languages, for which such resources are likely to remain unavailable in the foreseeable future. We compare the performance of our system on languages that belong to different language families (Romance vs. Slavic), as well as different language pairs within the same language family (Portuguese via Spanish vs. Catalan via Spanish). We show that across language families, the most difficult category is the category of nominals (noun homonymy is challenging for morphological analysis, and the order variation of adjectives within a sentence makes it challenging to create a reliable model), whereas different language families present different challenges with respect to their morpho-syntactic descriptions: for the Slavic languages, case is the most challenging category; for the Romance languages, gender is more challenging than case. In addition, we present an alternative evaluation metric for our system, where we measure how much human labor will be needed to convert the result of our tagging to a high-precision annotated resource.
false
[]
[]
null
null
null
We would like to thank Maria das Graças Volpe Nunes, Sandra Maria Aluísio, and Ricardo Hasegawa for giving us access to the NILC corpus annotated with PALAVRAS and to Carlos Rodríguez Penagos for letting us use the CLiC-TALP corpus.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
biemann-etal-2008-asv
http://www.lrec-conf.org/proceedings/lrec2008/pdf/447_paper.pdf
ASV Toolbox: a Modular Collection of Language Exploration Tools
ASV Toolbox is a modular collection of tools for the exploration of written language data, for both scientific and educational purposes. It includes modules that operate on word lists or texts and allow the user to perform various linguistic annotation, classification and clustering tasks, including language detection, POS-tagging, base form reduction, named entity recognition, and terminology extraction. On a more abstract level, the algorithms deal with various kinds of word similarity, using pattern-based and statistical approaches. The collection can be used to work on large real-world data sets as well as for studying the underlying algorithms. Each module of the ASV Toolbox is designed to work either on plain text files or with a connection to a MySQL database. While it is especially designed to work with corpora of the Leipzig Corpora Collection, it can easily be adapted to other sources.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ghannay-etal-2016-evaluation
https://aclanthology.org/W16-2511
Evaluation of acoustic word embeddings
Recently, researchers in speech recognition have started to reconsider using whole words as the basic modeling unit, instead of phonetic units. These systems rely on a function that embeds an arbitrary- or fixed-dimensional speech segment into a vector in a fixed-dimensional space, named an acoustic word embedding. Thus, speech segments of words that sound similar will be projected into a close area in a continuous space. This paper focuses on the evaluation of acoustic word embeddings. We propose two approaches to evaluate the intrinsic performance of acoustic word embeddings in comparison to orthographic representations, in order to evaluate whether they capture discriminative phonetic information. Since the French language is targeted in the experiments, a particular focus is placed on homophone words.
false
[]
[]
null
null
null
This work was partially funded by the European Commission through the EUMSSI project, under the contract number 611057, in the framework of the FP7-ICT-2013-10 call, by the French National Research Agency (ANR) through the VERA project, under the contract number ANR-12-BS02-006-01, and by the Région Pays de la Loire.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sloetjes-wittenburg-2008-annotation
http://www.lrec-conf.org/proceedings/lrec2008/pdf/208_paper.pdf
Annotation by Category: ELAN and ISO DCR
The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wan-etal-2018-ibm
https://aclanthology.org/K18-2009
IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing
This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the Arc-Standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in one single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures we qualified 13th overall and 7th in low resource. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall.
false
[]
[]
null
null
null
We thank Radu Florian, Todd Ward and Salim Roukos for useful discussions.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vincze-etal-2010-hungarian
http://www.lrec-conf.org/proceedings/lrec2010/pdf/465_Paper.pdf
Hungarian Dependency Treebank
Herein, we present the process of developing the first Hungarian Dependency Treebank. First, short references are made to dependency grammars we considered important in the development of our treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, then checked and corrected them, thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics; at the same time, we are planning to convert the entire corpus into dependency-tree format. Finally, we give some hints on the applicability of the system: the present database may be utilized, among others, in information extraction and machine translation as well.
false
[]
[]
null
null
null
The research was -in part -supported by NKTH within the framework of TUDORKA and MASZEKER projects (Ányos Jedlik programs).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
marasovic-etal-2020-natural
https://aclanthology.org/2020.findings-emnlp.253
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present RationaleVT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks. In addition, we find that integration of richer semantic and pragmatic visual features improves visual fidelity of rationales.
false
[]
[]
null
null
null
The authors thank Sarah Pratt for her assistance with the grounded situation recognizer, Amandalynne Paullada, members of the AllenNLP team, and anonymous reviewers for helpful feedback. This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and gifts from Allen Institute for Artificial Intelligence.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rei-etal-2018-scoring
https://aclanthology.org/P18-2101
Scoring Lexical Entailment with a Supervised Directional Similarity Network
We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of general-purpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
false
[]
[]
null
null
null
Daniela Gerz and Ivan Vulić are supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). We would like to thank the NVIDIA Corporation for the donation of the Titan GPU that was used for this research.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lai-pustejovsky-2019-dynamic
https://aclanthology.org/W19-0601
A Dynamic Semantics for Causal Counterfactuals
Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the "closest" possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the "distance" between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.
false
[]
[]
null
null
null
We would like to thank the reviewers for their helpful comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-duh-2021-approaching
https://aclanthology.org/2021.mtsummit-at4ssl.7
Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task
A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken languages. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potentials and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.
true
[]
[]
Reduced Inequalities
null
null
null
2021
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
sha-2020-gradient
https://aclanthology.org/2020.emnlp-main.701
Gradient-guided Unsupervised Lexically Constrained Text Generation
Lexically-constrained generation requires the target sentence to satisfy some lexical constraints, such as containing some specific words or being the paraphrase to a given sentence, which is very important in many real-world natural language generation applications. Previous works usually apply beam-search-based methods or stochastic searching methods to lexically-constrained generation. However, when the search space is too large, beam-search-based methods always fail to find the constrained optimal solution. At the same time, stochastic search methods always cost too many steps to find the correct optimization direction. In this paper, we propose a novel method G2LC to solve the lexically-constrained generation as an unsupervised gradient-guided optimization problem. We propose a differentiable objective function and use the gradient to help determine which position in the sequence should be changed (deleted or inserted/replaced by another word). The word updating process of the inserted/replaced word also benefits from the guidance of gradient. Besides, our method is free of parallel data training, which is flexible to be used in the inference stage of any pre-trained generation model. We apply G2LC to two generation tasks: keyword-to-sentence generation and unsupervised paraphrase generation. The experiment results show that our method achieves state-of-the-art compared to previous lexically-constrained methods.
false
[]
[]
null
null
null
We would like to thank the three anonymous reviewers and the anonymous meta-reviewer for so many good suggestions.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
freitag-2004-toward
https://aclanthology.org/C04-1052
Toward Unsupervised Whole-Corpus Tagging
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
medlock-2006-introduction
http://www.lrec-conf.org/proceedings/lrec2006/pdf/200_pdf.pdf
An Introduction to NLP-based Textual Anonymisation
We introduce the problem of automatic textual anonymisation and present a new publicly-available, pseudonymised benchmark corpus of personal email text for the task, dubbed ITAC (Informal Text Anonymisation Corpus). We discuss the method by which the corpus was constructed, and consider some important issues related to the evaluation of textual anonymisation systems. We also present some initial baseline results on the new corpus using a state of the art HMM-based tagger.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
raybaud-etal-2011-broadcast
https://aclanthology.org/2011.mtsummit-systems.3
Broadcast news speech-to-text translation experiments
We present S2TT, an integrated speech-to-text translation system based on POCKETSPHINX and MOSES. It is compared to different baselines based on ANTS-the broadcast news transcription system developed at LORIA's Speech group, MOSES and Google's translation tools. A small corpus of reference transcriptions of broadcast news from the evaluation campaign ESTER2 was translated by human experts for evaluation. The Word Error Rate (WER) of the recognition stage of both systems are evaluated, and BLEU is used to score the translations. Furthermore, the reference transcriptions are automatically translated using MOSES and GOOGLE in order to evaluate the impact of recognition errors on translation quality.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mctait-etal-1999-building
https://aclanthology.org/1999.tc-1.11
A Building Blocks Approach to Translation Memory
Traditional Translation Memory systems that find the best match between a SL input sentence and SL sentences in a database of previously translated sentences are not ideal. Studies in the cognitive processes underlying human translation reveal that translators very rarely process SL text at the level of the sentence. The units with which translators work are usually much smaller i.e. word, syntactic unit, clause or group of meaningful words. A building blocks approach (a term borrowed from the theoretical framework discussed in Lange et al (1997)), is advantageous in that it extracts fragments of text, from a traditional TM database, that more closely represent those with which a human translator works. The text fragments are combined with the intention of producing TL translations that are more accurate, thus requiring less postediting on the part of the translator.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pinnis-etal-2018-tilde
https://aclanthology.org/L18-1214
Tilde MT Platform for Developing Client Specific MT Solutions
In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel, monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, as well as elaborate typical MT system training workflows for client-specific MT solution development.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2018
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
yang-etal-2014-joint
https://aclanthology.org/D14-1071
Joint Relational Embeddings for Knowledge-based Question Answering
Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KB-QA by leveraging semantic associations between lexical representations and KB properties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dziob-etal-2019-plwordnet
https://aclanthology.org/2019.gwc-1.45
plWordNet 4.1 - a Linguistically Motivated, Corpus-based Bilingual Resource
The paper presents the latest release of the Polish WordNet, namely plWord-Net 4.1. The most significant developments since 3.0 version include new relations for nouns and verbs, mapping semantic role-relations from the valency lexicon Walenty onto the plWord-Net structure and sense-level interlingual mapping. Several statistics are presented in order to illustrate the development and contemporary state of the wordnet.
false
[]
[]
null
null
null
The work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and the project funded by the National Science Centre, Poland under the grant agreement No UMO-2015/18/M/HS2/00100.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-eisner-2018-neural
https://aclanthology.org/N18-1085
Neural Particle Smoothing for Sampling from Conditional Sequence Models
We introduce neural particle smoothing, a sequential Monte Carlo method for sampling annotations of an input string from a given probability model. In contrast to conventional particle filtering algorithms, we train a proposal distribution that looks ahead to the end of the input string by means of a right-to-left LSTM. We demonstrate that this innovation can improve the quality of the sample. To motivate our formal choices, we explain how our neural model and neural sampler can be viewed as low-dimensional but nonlinear approximations to working with HMMs over very large state spaces.
false
[]
[]
null
null
null
This work has been generously supported by a Google Faculty Research Award and by Grant No. 1718846 from the National Science Foundation.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kupsc-etal-2004-pronominal
http://www.lrec-conf.org/proceedings/lrec2004/pdf/671.pdf
Pronominal Anaphora Resolution for Unrestricted Text
The paper presents an anaphora resolution algorithm for unrestricted text. In particular, we examine portability of a knowledge-based approach of (Mitamura et al., 2002), proposed for a domain-specific task. We obtain up to 70% accuracy on unrestricted text, which is a significant improvement (almost 20%) over a baseline we set for general text. As the overall results leave much room for improvement, we provide a detailed error analysis and investigate possible enhancements.
false
[]
[]
null
null
null
This work was supported in part by the Advanced Research and Development Activity (ARDA) under AQUAINT contract MDA904-01-C-0988. We would like to thank Curtis Huttenhower, for his work on integrating the tools we used for text analysis, as well as three anonymous reviewers and Adam Przepiórkowski for useful comments on earlier versions of this paper.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aizawa-2002-method
https://aclanthology.org/C02-1045
A Method of Cluster-Based Indexing of Textual Data
This paper presents a framework for clustering in text-based information retrieval systems. The prominent feature of the proposed method is that documents, terms, and other related elements of textual information are clustered simultaneously into small overlapping clusters. In the paper, the mathematical formulation and implementation of the clustering method are briefly introduced, together with some experimental results.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xanthos-etal-2006-exploring
https://aclanthology.org/W06-3205
Exploring variant definitions of pointer length in MDL
Within the information-theoretical framework described by (Rissanen, 1989; de Marcken, 1996; Goldsmith, 2001), pointers are used to avoid repetition of phonological material. Work with which we are familiar has assumed that there is only one way in which items could be pointed to. The purpose of this paper is to describe and compare several different methods, each of which satisfies MDL's basic requirements, but which have different consequences for the treatment of linguistic phenomena. In particular, we assess the conditions under which these different ways of pointing yield more compact descriptions of the data, both from a theoretical and an empirical perspective.
false
[]
[]
null
null
null
This research was supported by a grant of the Swiss National Science Foundation to the first author.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
biggins-etal-2012-university
https://aclanthology.org/S12-1097
University\_Of\_Sheffield: Two Approaches to Semantic Text Similarity
This paper describes the University of Sheffield's submission to SemEval-2012 Task 6: Semantic Text Similarity. Two approaches were developed. The first is an unsupervised technique based on the widely used vector space model and information from WordNet. The second method relies on supervised machine learning and represents each sentence as a set of n-grams. This approach also makes use of information from WordNet. Results from the formal evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences with the best performance being obtained by the supervised approach. Incorporating information from WordNet also improves performance for both approaches.
false
[]
[]
null
null
null
This research has been supported by a Google Research Award.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gavrila-etal-2012-domain
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1003_Paper.pdf
Same domain different discourse style - A case study on Language Resources for data-driven Machine Translation
Data-driven machine translation (MT) approaches became very popular during last years, especially for language pairs for which it is difficult to find specialists to develop transfer rules. Statistical (SMT) or example-based (EBMT) systems can provide reasonable translation quality for assimilation purposes, as long as a large amount of training data is available. Especially SMT systems rely on parallel aligned corpora which have to be statistical relevant for the given language pair. The construction of large domain specific parallel corpora is time-and cost-consuming; the current practice relies on one or two big such corpora per language pair. Recent developed strategies ensure certain portability to other domains through specialized lexicons or small domain specific corpora. In this paper we discuss the influence of different discourse styles on statistical machine translation systems. We investigate how a pure SMT performs when training and test data belong to same domain but the discourse style varies.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
veaux-etal-2013-towards
https://aclanthology.org/W13-3917
Towards Personalised Synthesised Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lange-ljunglof-2018-demonstrating
https://aclanthology.org/W18-7105
Demonstrating the MUSTE Language Learning Environment
We present a language learning application that relies on grammars to model the learning outcome. Based on this concept we can provide a powerful framework for language learning exercises with an intuitive user interface and a high reliability. Currently the application aims to augment existing language classes and support students by improving the learner attitude and the general learning outcome. Extensions beyond that scope are promising and likely to be added in the future.
true
[]
[]
Quality Education
null
null
null
2018
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
von-glasersfeld-1974-yerkish
https://aclanthology.org/J74-3007
The Yerkish Language for Non-Human Primates
null
false
[]
[]
null
null
null
null
1974
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dinu-etal-2017-stylistic
https://doi.org/10.26615/978-954-452-049-6_028
On the stylistic evolution from communism to democracy: Solomon Marcus study case
null
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
mckeown-2006-lessons
https://aclanthology.org/W06-1401
Lessons Learned from Large Scale Evaluation of Systems that Produce Text: Nightmares and Pleasant Surprises
As the language generation community explores the possibility of an evaluation program for language generation, it behooves us to examine our experience in evaluation of other systems that produce text as output. Large scale evaluation of summarization systems and of question answering systems has been carried out for several years now. Summarization and question answering systems produce text output given text as input, while language generation produces text from a semantic representation. Given that the output has the same properties, we can learn from the mistakes and the understandings gained in earlier evaluations. In this invited talk, I will discuss what we have learned in the large scale summarization evaluations carried out in the Document Understanding Conferences (DUC) from 2001 to present, and in the large scale question answering evaluations carried out in TREC (e.g., the definition pilot) as well as the new large scale evaluations being carried out in the DARPA GALE (Global Autonomous Language Environment) program. DUC was developed and run by NIST and provides a forum for regular evaluation of summarization systems. NIST oversees the gathering of data, including both input documents and gold standard summaries, some of which is done by NIST and some of which is done by LDC. Each year, some 30 to 50 document sets were gathered as test data and somewhere between two to nine summaries were written for each of the input sets. NIST has carried out both manual and automatic evaluation by comparing system output against the gold standard summaries written by humans. The results are made public at the annual conference. In the most recent years, the number of participants has grown to 25 or 30 sites from all over the world.
false
[]
[]
null
null
null
This material is based upon work supported in part by the ARDA AQUAINT program (Contract No. MDA908-02-C-0008 and Contract No. NBCHC040040) and the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023 and Contract No. N66001-00-1-8919. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA or ARDA.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hambardzumyan-etal-2021-warp
https://aclanthology.org/2021.acl-long.381
WARP: Word-level Adversarial ReProgramming
Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
funakoshi-etal-2009-probabilistic
https://aclanthology.org/W09-0634
A Probabilistic Model of Referring Expressions for Complex Objects
This paper presents a probabilistic model both for generation and understanding of referring expressions. This model introduces the concept of parts of objects, modelling the necessity to deal with the characteristics of separate parts of an object in the referring process. This was ignored or implicit in previous literature. Integrating this concept into a probabilistic formulation, the model captures human characteristics of visual perception and some type of pragmatic implicature in referring expressions. Developing this kind of model is critical to deal with more complex domains in the future. As a first step in our research, we validate the model with the TUNA corpus to show that it includes conventional domain modeling as a subset.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
edunov-etal-2019-pre
https://aclanthology.org/N19-1409
Pre-trained language model representations for language generation
Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pretrained representations into sequence to sequence models and apply it to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence-pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full text version of CNN-DailyMail.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2015-tree
https://aclanthology.org/D15-1278
When Are Tree Structures Necessary for Deep Learning of Representations?
Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer phrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require long-distance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.
false
[]
[]
null
null
null
We would especially like to thank Richard Socher and Kai-Sheng Tai for insightful comments, advice, and suggestions. We would also like to thank Sam Bowman, Ignacio Cases, Jon Gauthier, Kevin Gu, Gabor Angeli, Sida Wang, Percy Liang and other members of the Stanford NLP group, as well as the anonymous reviewers for their helpful advice on various aspects of this work. We acknowledge the support of NVIDIA Corporation with the donation of Tesla K40 GPUs We gratefully acknowledge support from an Enlight Foundation Graduate Fellowship, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, and the NSF via award IIS-1514268. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Bloomberg L.P., DARPA, AFRL, NSF, or the US government.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chatterjee-etal-2017-textual
https://aclanthology.org/W17-7506
Textual Relations and Topic-Projection: Issues in Text Categorization
Categorization of text is done on the basis of its aboutness. Understanding what a text is about often involves a subjective dimension. Developments in linguistics, however, can provide some important insights about what underlies the process of text categorization in general and topic spotting in particular. More specifically, theoretical underpinnings from formal linguistics and systemic functional linguistics may give some important insights about the way challenges can be dealt with. Under this situation, this paper seeks to present a theoretical framework which can take care of the categorization of text in terms of relational hierarchies embodied in the overall organization of the text.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ostling-etal-2013-automated
https://aclanthology.org/W13-1705
Automated Essay Scoring for Swedish
We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student's own teacher and also in a blind re-grading process by another teacher. We show that our system's performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.
true
[]
[]
Quality Education
null
null
We would like to thank the anonymous reviewers for their useful comments.
2013
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
dwivedi-shrivastava-2017-beyond
https://aclanthology.org/W17-7526
Beyond Word2Vec: Embedding Words and Phrases in Same Vector Space
Word embeddings are being used for several linguistic problems and NLP tasks. Improvements in solutions to such problems are great because of the recent breakthroughs in vector representation of words and research in vector space models. However, vector embeddings of phrases keeping semantics intact with words has been challenging. We propose a novel methodology using Siamese deep neural networks to embed multi-word units and fine-tune the current state-of-the-art word embeddings keeping both in the same vector space. We show several semantic relations between words and phrases using the embeddings generated by our system and evaluate that the similarity of words and their corresponding paraphrases are maximized using the modified embeddings.
false
[]
[]
null
null
null
We would like to thank Naveen Kumar Laskari for discussions during the course of this work and Pruthwik Mishra and Saurav Jha for their valuable suggestions.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sankepally-oard-2018-initial
https://aclanthology.org/L18-1328
An Initial Test Collection for Ranked Retrieval of SMS Conversations
This paper describes a test collection for evaluating systems that search English SMS (Short Message Service) conversations. The collection is built from about 120,000 text messages. Topic development involved identifying typical types of information needs, then generating topics of each type for which relevant content might be found in the collection. Relevance judgments were then made for groups of messages that were most highly ranked by one or more of several ranked retrieval systems. The resulting TREC style test collection can be used to compare some alternative retrieval system designs.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-etal-2012-combining
https://aclanthology.org/P12-1106
Combining Coherence Models and Machine Translation Evaluation Metrics for Summarization Evaluation
An ideal summarization system should produce summaries that have high content coverage and linguistic quality. Many state-of-the-art summarization systems focus on content coverage by extracting content-dense sentences from source articles. A current research focus is to process these sentences so that they read fluently as a whole. The current AESOP task encourages research on evaluating summaries on content, readability, and overall responsiveness. In this work, we adapt a machine translation metric to measure content coverage, apply an enhanced discourse coherence model to evaluate summary readability, and combine both in a trained regression model to evaluate overall responsiveness. The results show significantly improved performance over AESOP 2011 submitted metrics.
false
[]
[]
null
null
null
This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office. Our metrics are publicly available at http://wing.comp.nus.edu.sg/~linzihen/summeval/.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abu-jbara-radev-2011-clairlib
https://aclanthology.org/P11-4021
Clairlib: A Toolkit for Natural Language Processing, Information Retrieval, and Network Analysis
In this paper we present Clairlib, an opensource toolkit for Natural Language Processing, Information Retrieval, and Network Analysis. Clairlib provides an integrated framework intended to simplify a number of generic tasks within and across those three areas. It has a command-line interface, a graphical interface, and a documented API. Clairlib is compatible with all the common platforms and operating systems. In addition to its own functionality, it provides interfaces to external software and corpora. Clairlib comes with a comprehensive documentation and a rich set of tutorials and visual demos.
false
[]
[]
null
null
null
We would like to thank Mark Hodges, Anthony Fader, Mark Joseph, Joshua Gerrish, Mark Schaller, Jonathan dePeri, Bryan Gibson, Chen Huang, Arzucan Ozgur, and Prem Ganeshkumar who contributed to the development of Clairlib.This work was supported in part by grants R01-LM008106 and U54-DA021519 from the US National Institutes of Health, U54 DA021519, IDM 0329043, DHB 0527513, 0534323, and 0527513 from the National Science Foundation, and W911NF-09-C-0141 from IARPA.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yu-etal-2013-candidate
https://aclanthology.org/W13-4420
Candidate Scoring Using Web-Based Measure for Chinese Spelling Error Correction
Chinese character correction involves two major steps: 1) Providing candidate corrections for all or partially identified characters in a sentence, and 2) Scoring all altered sentences and identifying which is the best corrected sentence. In this paper a web-based measure is used to score candidate sentences, in which there exists one continuous error character in a sentence in almost all sentences in the Bakeoff corpora. The approach of using a web-based measure can be applied directly to sentences with multiple error characters, either consecutive or not, and is not optimized for one-character error correction of Chinese sentences. The results show that the approach achieved a fair precision score whereas the recall is low compared to results reported in this Bakeoff.
false
[]
[]
null
null
null
This work was supported by National Science Council (NSC), Taiwan, under Contract number: 102-2221-E-155-029-MY3.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ishiwatari-etal-2017-chunk
https://aclanthology.org/P17-1174
Chunk-based Decoder for Neural Machine Translation
Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.
false
[]
[]
null
null
null
This research was partially supported by the Research and Development on Real World Big Data Integration and Analysis program of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and RIKEN, Japan, and by the Chinese National Research Fund (NSFC) Key Project No. 61532013 and National China 973 Project No. 2015CB352401.The authors appreciate Dongdong Zhang, Shuangzhi Wu, and Zhirui Zhang for the fruitful discussions during the first and second authors were interns at Microsoft Research Asia. We also thank Masashi Toyoda and his group for letting us use their computing resources. Finally, we thank the anonymous reviewers for their careful reading of our paper and insightful comments.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wood-doughty-etal-2018-challenges
https://aclanthology.org/D18-1488
Challenges of Using Text Classifiers for Causal Inference
Causal understanding is essential for many kinds of decision-making, but causal inference from observational data has typically only been applied to structured, low-dimensional datasets. While text classifiers produce low-dimensional outputs, their use in causal inference has not previously been studied. To facilitate causal analyses based on language data, we consider the role that text classifiers can play in causal inference through established modeling mechanisms from the causality literature on missing data and measurement error. We demonstrate how to conduct causal analyses using text classifiers on simulated and Yelp data, and discuss the opportunities and challenges of future work that uses text data in causal inference.
false
[]
[]
null
null
null
This work was in part supported by the National Institute of General Medical Sciences under grant number 5R01GM114771 and by the National Institute of Allergy and Infectious Diseases under grant number R01 AI127271-01A1. We thank the anonymous reviewers for their helpful comments.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sawhney-etal-2021-multimodal
https://aclanthology.org/2021.acl-long.526
Multimodal Multi-Speaker Merger \& Acquisition Financial Modeling: A New Task, Dataset, and Neural Baselines
Risk prediction is an essential task in financial markets. Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of the financial firms. Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities. To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period ranging from 2016 to 2020. We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multi-speaker input to forecast the financial risk associated with the M&A calls. Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines. We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kordjamshidi-etal-2017-spatial
https://aclanthology.org/W17-4306
Spatial Language Understanding with Multimodal Graphs using Declarative Learning based Programming
This work is on a previously formalized semantic evaluation task of spatial role labeling (SpRL) that aims at extraction of formal spatial meaning from text. Here, we report the results of initial efforts towards exploiting visual information in the form of images to help spatial language understanding. We discuss the way of designing new models in the framework of declarative learning-based programming (DeLBP). The DeLBP framework facilitates combining modalities and representing various data in a unified graph. The learning and inference models exploit the structure of the unified graph as well as the global first order domain constraints beyond the data to predict the semantics which forms a structured meaning representation of the spatial context. Continuous representations are used to relate the various elements of the graph originating from different modalities. We improved over the state-of-the-art results on SpRL.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
linde-goguen-1980-independence
https://aclanthology.org/P80-1010
On the Independence of Discourse Structure and Semantic Domain
Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1980a; Linde and Goguen, 1978; Linde and Labov, 1975; Polanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of language or whether it is entirely a property of the semantic domain. That is, are we discovering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc.? Such a crude question does not arise with regard to sentences.
false
[]
[]
null
null
null
We would like to thank R. M. Burstall and James Weiner for their help throughout much of the work reported in this paper. We owe our approach to discourse analysis to the work of William Labov, and our basic orientation to Chogyam Trungpa, Rinpoche.
1980
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bohan-etal-2000-evaluating
http://www.lrec-conf.org/proceedings/lrec2000/pdf/136.pdf
Evaluating Translation Quality as Input to Product Development
In this paper we present a corpus-based method to evaluate the translation quality of machine translation (MT) systems. We start with a shallow analysis of a large corpus and gradually focus the attention on the translation problems. The method constitutes an efficient way to identify the most important grammatical and lexical weaknesses of an MT system and to guide development towards improved translation quality. The evaluation described in the paper was carried out as a cooperation between an MT technology developer, Sail Labs, and the Computational Linguistics group at the University of Zürich.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
burchardt-etal-2008-formalising
https://aclanthology.org/I08-1051
Formalising Multi-layer Corpora in OWL DL - Lexicon Modelling, Querying and Consistency Control
We present a general approach to formally modelling corpora with multi-layered annotation, thereby inducing a lexicon model in a typed logical representation language, OWL DL. This model can be interpreted as a graph structure that offers flexible querying functionality beyond current XML-based query languages and powerful methods for consistency control. We illustrate our approach by applying it to the syntactically and semantically annotated SALSA/TIGER corpus.
false
[]
[]
null
null
null
This work has been partly funded by the German Research Foundation DFG (grant PI 154/9-2). We also thank the two anonymous reviewers for their valuable comments and suggestions.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meiby-1996-building
https://aclanthology.org/1996.tc-1.17
Building Machine Translation on a firm foundation
Professor Alan K. Melby, Brigham Young University at Provo, USA. SYNOPSIS: How can we build the next generation of machine translation systems on a firm foundation? We should build on the current generation of systems by incorporating proven technology. That is, we should emphasize indicative-quality translation where appropriate and high-quality controlled-language translation where appropriate, leaving other kinds of translation to humans. This paper will suggest a theoretical framework for discussing text types and make five practical proposals to machine translation vendors for enhancing current machine translation systems. Some of these enhancements will also benefit human translators who are using translation technology. THEORETICAL FRAMEWORK: How can we build the next generation of machine translation systems on a firm foundation? Unless astounding breakthroughs in computational linguistics appear on the horizon, the next generation of machine translation systems is not likely to replace all human translators or even reduce the current level of need for human translators. We should build the next generation of systems on the current generation, looking for ways to further help both human and machine translation benefit from technology that has been shown to work. Currently understood technology has not yet been fully implemented in machine translation and can provide a firm foundation for further development of existing systems and implementation of new systems. Before making five practical proposals and projecting their potential benefits, I will sketch a theoretical framework for the rest of the paper.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stanovsky-tamari-2019-yall
https://aclanthology.org/D19-5549
Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts
Distinguishing between singular and plural "you" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal written English does not distinguish between these cases, other languages (such as Spanish), as well as other dialects of English (via phrases such as "y'all"), do make this distinction. We make use of this to obtain distantly-supervised labels for the task on a large scale in two domains. Following, we train a model to distinguish between the singular/plural 'you', finding that although in-domain training achieves reasonable accuracy (≥ 77%), there is still a lot of room for improvement, especially in the domain-transfer scenario, which proves extremely challenging. Our code and data are publicly available. (Work done during an internship at the Allen Institute for Artificial Intelligence.)
false
[]
[]
null
null
null
We thank the anonymous reviewers for their many helpful comments and suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tan-bansal-2019-lxmert
https://aclanthology.org/D19-1514
LXMERT: Learning Cross-Modality Encoder Representations from Transformers
Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pre-training tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR2, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results.
false
[]
[]
null
null
null
We thank the reviewers for their helpful comments. This work was supported by ARO-YIP Award #W911NF-18-1-0336, and awards from Google, Facebook, Salesforce, and Adobe. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. We also thank Alane Suhr for evaluation on NLVR 2 .
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
riedl-biemann-2013-scaling
https://aclanthology.org/D13-1089
Scaling to Large$^3$ Data: An Efficient and Effective Method to Compute Distributional Thesauri
We introduce a new highly scalable approach for computing Distributional Thesauri (DTs). By employing pruning techniques and a distributed framework, we make the computation for very large corpora feasible on comparably small computational resources. We demonstrate this by releasing a DT for the whole vocabulary of Google Books syntactic n-grams. Evaluating against lexical resources using two measures, we show that our approach produces higher quality DTs than previous approaches, and is thus preferable in terms of speed and quality for large corpora.
false
[]
[]
null
null
null
This work has been supported by the Hessian research excellence program "Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz" (LOEWE) as part of the research center "Digital Humanities". We would also like to thank the anonymous reviewers for their comments, which greatly helped to improve the paper.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
blaschke-etal-2020-cyberwalle
https://aclanthology.org/2020.semeval-1.192
CyberWallE at SemEval-2020 Task 11: An Analysis of Feature Engineering for Ensemble Models for Propaganda Detection
This paper describes our participation in the SemEval-2020 task Detection of Propaganda Techniques in News Articles. We participate in both subtasks: Span Identification (SI) and Technique Classification (TC). We use a bi-LSTM architecture in the SI subtask and train a complex ensemble model for the TC subtask. Our architectures are built using embeddings from BERT in combination with additional lexical features and extensive label post-processing. Our systems achieve a rank of 8 out of 35 teams in the SI subtask (F1-score: 43.86%) and 8 out of 31 teams in the TC subtask (F1-score: 57.37%).
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank Dr. Çağrı Çöltekin for useful discussions and his guidance throughout this project.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
agirre-etal-2013-ubc
https://aclanthology.org/S13-1018
UBC\_UOS-TYPED: Regression for typed-similarity
We approach the typed-similarity task using a range of heuristics that rely on information from the appropriate metadata fields for each type of similarity. In addition we train a linear regressor for each type of similarity. The results indicate that the linear regression is key for good performance. Our best system was ranked third in the task.
false
[]
[]
null
null
null
This work is partially funded by the PATHS project (http://paths-project.eu) funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270082. Aitor Gonzalez-Agirre is supported by a PhD grant from the Spanish Ministry of Education, Culture and Sport (grant FPU12/06243).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cai-etal-2017-crf
https://aclanthology.org/D17-1171
CRF Autoencoder for Unsupervised Dependency Parsing
Unsupervised dependency parsing, which tries to discover linguistic dependency structures from unannotated data, is a very challenging task. Almost all previous work on this task focuses on learning generative models. In this paper, we develop an unsupervised dependency parsing model based on the CRF autoencoder. The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors. We propose an exact algorithm for parsing as well as a tractable learning algorithm. We evaluated the performance of our model on eight multilingual treebanks and found that our model achieved comparable performance with state-of-the-art approaches.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2016-learning
https://aclanthology.org/C16-1136
Learning Event Expressions via Bilingual Structure Projection
Identifying events of a specific type is a challenging task as events in texts are described in numerous and diverse ways. Aiming to resolve high complexities of event descriptions, previous work (Huang and Riloff, 2013) proposes multi-faceted event recognition and a bootstrapping method to automatically acquire both event facet phrases and event expressions from unannotated texts. However, to ensure high quality of learned phrases, this method is constrained to only learn phrases that match certain syntactic structures. In this paper, we propose a bilingual structure projection algorithm that explores linguistic divergences between two languages (Chinese and English) and mines new phrases with new syntactic structures, which have been ignored in the previous work. Experiments show that our approach can successfully find novel event phrases and structures, e.g., phrases headed by nouns. Furthermore, the newly mined phrases are capable of recognizing additional event descriptions and increasing the recall of event recognition.
false
[]
[]
null
null
null
The authors were supported by National Natural Science Foundation of China (Grant Nos. 61403269, 61432013 and 61525205) and Natural Science Foundation of Jiangsu Province (Grant No. BK20140355). This research was also partially supported by Ruihong Huang's startup funds in Texas A&M University. We also thank the anonymous reviewers for their insightful comments.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-chan-2012-role
https://aclanthology.org/Y12-1069
The Role of Qualia Structure in Mandarin Children Acquiring Noun-modifying Constructions
This paper investigates the types and the developmental trajectory of noun modifying constructions (NMCs), in the form of [Modifier + de + (Noun)], attested in Mandarin-speaking children's speech from a semantic perspective based on the generative lexicon framework (Pustejovsky, 1995). Based on 1034 NMCs (including those traditionally defined as relative clauses (RCs)) produced by 135 children aged 3 to 6 from a cross-sectional naturalistic speech corpus "Zhou2" in CHILDES, we analyzed the relation between the modifier and the head noun according to the 4 major roles of qualia structure: formal, constitutive, telic and agentive. Results suggest that (i) NMCs expressing the formal facet of the head noun's meaning are most frequently produced and acquired earliest, followed by those expressing the constitutive quale, and then those expressing the telic or the agentive quale; (ii) RC-type NMCs emerge either alongside the other non-RC type NMCs at the same time, or emerge later than the other non-RC type NMCs for the constitutive quale; and (iii) the majority of NMCs expressing the agentive and telic quales are those that fall within the traditional domain of RCs (called RC-type NMCs here), while the majority of NMCs expressing the formal and the constitutive quales are non-RC type NMCs. These findings are consistent with: (i) the semantic nature and complexity of the four qualia relations: formal and constitutive aspects of an object (called natural type concepts in Pustejovsky 2001, 2006) are more basic attributes, while telic and agentive (called artificial type concepts in Pustejovsky 2001, 2006) are derived and often eventive (hence conceptually more complex); and (ii) the properties of their adult input: NMCs expressing the formal quale are also most frequently encountered in the adult input, followed by the constitutive quale, and then the agentive and telic quales. The findings are also consistent with the idea that in Asian languages such as Japanese, Korean and Chinese, RCs develop from attributive constructions specifying a semantic feature of the head noun in acquisition (Diessel 2007; cf. also Comrie 1996, 1998, 2002). This study is probably the first to use the generative lexicon framework in the field of child language acquisition.
true
[]
[]
Quality Education
null
null
null
2012
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-etal-2018-modeling
https://aclanthology.org/C18-1235
Modeling with Recurrent Neural Networks for Open Vocabulary Slots
Dealing with 'open-vocabulary' slots has been among the challenges in the natural language area. While recent studies on attention-based recurrent neural network (RNN) models have performed well in completing several language related tasks such as spoken language understanding and dialogue systems, there has been a lack of attempts to address filling slots that take on values from a virtually unlimited set. In this paper, we propose a new RNN model that can capture the vital concept: Understanding the role of a word may vary according to how long a reader focuses on a particular part of a sentence. The proposed model utilizes a longterm aware attention structure, positional encoding primarily considering the relative distance between words, and multi-task learning of a character-based language model and an intent detection model. We show that the model outperforms the existing RNN models with respect to discovering 'open-vocabulary' slots without any external information, such as a named entity database or knowledge base. In particular, we confirm that it performs better with a greater number of slots in a dataset, including unknown words, by evaluating the models on a dataset of several domains. In addition, the proposed model also demonstrates superior performance with regard to intent detection.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hogenhout-matsumoto-1997-preliminary
https://aclanthology.org/W97-1003
A Preliminary Study of Word Clustering Based on Syntactic Behavior
We show how a treebank can be used to cluster words on the basis of their syntactic behavior. The resulting clusters represent distinct types of behavior with much more precision than parts of speech. As an example we show how prepositions can be automatically subdivided by their syntactic behavior and discuss the appropriateness of such a subdivision. Applications of this work are also discussed.
false
[]
[]
null
null
null
We would like to express our appreciation to the anonymous reviewers who have provided many valuable comments and criticisms.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shilen-wilson-2022-learning
https://aclanthology.org/2022.scil-1.26
Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives
Input strictly local (ISL) functions are a class of subregular transductions that have well-understood mathematical and computational properties and that are sufficiently expressive to account for a wide variety of attested morphological and phonological patterns (e.g., Chandlee, 2014; Chandlee, 2017). In this study, we compared several approaches to learning ISL functions: the ISL function learning algorithm (ISLFLA; Chandlee, 2014) and the classic OSTIA learner to which it is related (Oncina et al., 1993); the Minimal Generalization Learner (MGL; Hayes, 2002, 2003); and a novel deep neural network model presented here (DNN-ISL).
false
[]
[]
null
null
null
Thanks to Coleman Haley and Marina Bedny for helpful discussion of this research, which was supported by NSF grant BCS-1941593 to CW.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2019-building
https://aclanthology.org/2019.lilt-18.2
Building a Chinese AMR Bank with Concept and Relation Alignments
Abstract Meaning Representation (AMR) is a meaning representation framework in which the meaning of a full sentence is represented as a single-rooted, acyclic, directed graph. In this article, we describe an ongoing project to build a Chinese AMR (CAMR) corpus, which currently includes 10,149 sentences from the newsgroup and weblog portion of the Chinese TreeBank (CTB). We describe the annotation specifications for the CAMR corpus, which follow the annotation principles of English AMR but make adaptations where needed to accommodate the linguistic facts of Chinese. The CAMR specifications also include a systematic treatment of sentence-internal discourse relations. One significant change we have made to the AMR annotation methodology is the inclusion of the alignment between word tokens in the sentence and the concepts/relations in the CAMR annotation to make it easier for automatic parsers to model the correspondence between a sentence and its meaning representation. We develop an annotation tool for CAMR, and the inter-annotator agreement between the two annotators, as measured by the Smatch score, is 0.83, indicating reliable annotation. We also present some quantitative analysis of the CAMR corpus. 46.71% of the AMRs of the sentences are non-tree graphs. Moreover, the AMR of 88.95% of the sentences has concepts that are inferred from the context of the sentence but do not correspond to a specific word or phrase in the sentence, and the average number of such inferred concepts per sentence is 2.88. These statistics will have to be taken into account when developing automatic Chinese AMR parsers.
false
[]
[]
null
null
null
This work is the staged achievement of the projects supported by National Social Science Foundation of China (18BYY127) and National Science Foundation of China (61772278).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
proux-etal-2009-natural
https://aclanthology.org/W09-4506
Natural Language Processing to Detect Risk Patterns Related to Hospital Acquired Infections
Hospital Acquired Infections (HAI) have a major impact on public health and on related healthcare cost. HAI experts are fighting against this issue but they are struggling to access data. Information systems in hospitals are complex, highly heterogeneous, and generally not convenient to perform a real time surveillance. Developing a tool able to parse patient records in order to automatically detect signs of a possible issue would be a tremendous help for these experts and could allow them to react more rapidly and as a consequence to reduce the impact of such infections. Recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems now make it possible to develop such systems.
true
[]
[]
Good Health and Well-Being
null
null
null
2009
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kron-etal-2007-development
https://aclanthology.org/W07-1807
A Development Environment for Building Grammar-Based Speech-Enabled Applications
We present a development environment for Regulus, a toolkit for building unification grammar-based speech-enabled systems, focussing on new functionality added over the last year. In particular, we will show an initial version of a GUI-based top-level for the development environment, a tool that supports graphical debugging of unification grammars by cutting and pasting of derivation trees, and various functionalities that support systematic development of speech translation and spoken dialogue applications built using Regulus.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jones-thompson-2003-identifying
https://aclanthology.org/W03-0418
Identifying Events using Similarity and Context
As part of our work on automatically building knowledge structures from text, we apply machine learning to determine which clauses from multiple narratives describing similar situations should be grouped together as descriptions of the same type of occurrence. Our approach to the problem uses textual similarity and context from other clauses. Besides training data, our system uses only a partial parser as outside knowledge. We present results evaluating the cohesiveness of the aggregated clauses and a brief overview of how this work fits into our overall system.
false
[]
[]
null
null
null
We would like to thank Robert Cornell and Donald Jones for evaluating our system.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kwon-etal-2013-bilingual
https://aclanthology.org/W13-2502
Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool
This paper presents a simple and effective method for automatic bilingual lexicon extraction from less-known language pairs. To do this, we bring in a bridge language named the pivot language and adopt information retrieval techniques combined with natural language processing techniques. Moreover, we use a freely available word aligner: Anymalign (Lardilleux et al., 2011) for constructing context vectors. Unlike the previous works, we obtain context vectors via a pivot language. Therefore, we do not need to translate context vectors by using a seed dictionary, and we improve the accuracy of low-frequency word alignments, which is a weakness of the statistical model, by using Anymalign. In this paper, experiments have been conducted on two different language pairs that are bi-directional Korean-Spanish and Korean-French, respectively. The experimental results have demonstrated that our method for high-frequency words shows at least 76.3% and up to 87.2% and for the low-frequency words at least 43.3% and up to 48.9% within the top 20 ranking candidates, respectively.
false
[]
[]
null
null
null
This work was supported by the Korea Ministry of Knowledge Economy (MKE) under Grant No.10041807
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yuste-rodrigo-braun-chen-2001-comparative
https://aclanthology.org/2001.mtsummit-eval.12
Comparative evaluation of the linguistic output of MT systems for translation and information purposes
This paper describes a Machine Translation (MT) evaluation experiment where emphasis is placed on the quality of output and the extent to which it is geared to different users' needs. Adopting a very specific scenario, that of a multilingual international organisation, a clear distinction is made between two user classes: translators and administrators. Whereas the first group requires MT output to be accurate and of good post-editable quality in order to produce a polished translation, the second group primarily needs informative data for carrying out other, non-linguistic tasks, and therefore uses MT more as an information-gathering and gisting tool. During the experiment, MT output of three different systems is compared in order to establish which MT system best serves the organisation's multilingual communication and information needs. This is a comparative usability- and adequacy-oriented evaluation in that it attempts to help such organisations decide which system produces the most adequate output for certain well-defined user types. To perform the experiment, criteria relating to both users and MT output are examined with reference to the ISLE taxonomy. The experiment comprises two evaluation phases, the first at sentence level, the second at overall text level. In both phases, evaluators make use of a 1-5 rating scale. Weighted results provide some insight into the systems' usability and adequacy for the purposes described above. As a conclusion, it is suggested that further research should be devoted to the most critical aspect of this exercise, namely defining meaningful and useful criteria for evaluating the post-editability and informativeness of MT output.
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mayfield-etal-1995-concept
https://aclanthology.org/1995.tmi-1.15
Concept-Based Parsing For Speech Translation
As part of the JANUS speech-to-speech translation project[5], we have developed a translation system that successfully parses full utterances and is effective in parsing spontaneous speech, which is often syntactically ill-formed. The system is concept-based, meaning that it has no explicit notion of a sentence but rather views each input utterance as a potential sequence of concepts. Generation is performed by translating each of these concepts in whole phrases into the target language, consulting lookup tables only for low-level concepts such as numbers. Currently, we are working on an appointment scheduling task, parsing English, German, Spanish, and Korean input and producing output in those same languages and also Japanese.
false
[]
[]
null
null
null
null
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
clark-2021-strong
https://aclanthology.org/2021.scil-1.47
Strong Learning of Probabilistic Tree Adjoining Grammars
In this abstract we outline some theoretical work on the probabilistic learning of a representative mildly context-sensitive grammar formalism from positive examples only. In a recent paper, Clark and Fijalkow (2020) (CF from now on) present a consistent unsupervised learning algorithm for probabilistic context-free grammars (PCFGs) satisfying certain structural conditions: it converges to the correct grammar and parameter values, taking as input only a sample of strings generated by the PCFG. Here we extend this to the problem of learning tree grammars from derived trees, and show that under analogous conditions, we can learn a probabilistic tree grammar, of a type that is equivalent to Tree Adjoining Grammars (TAGs) (Vijay-Shanker and Joshi, 1985). In this learning model, we have a probabilistic tree grammar which generates a probability distribution over trees; given a sample of these trees, the learner must converge to a grammar that has the same structure as the original grammar and the same parameters.
false
[]
[]
null
null
null
I would like to thank Ryo Yoshinaka; and the reviewers for comments on the paper of which this is an extended abstract.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-fung-2005-inversion
https://aclanthology.org/I05-1023
Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora
We present a new implication of Wu's (1997) Inversion Transduction Grammar (ITG) Hypothesis, on the problem of retrieving truly parallel sentence translations from large collections of highly non-parallel documents. Our approach leverages a strong language universal constraint posited by the ITG Hypothesis, that can serve as a strong inductive bias for various language learning problems, resulting in both efficiency and accuracy gains. The task we attack is highly practical since non-parallel multilingual data exists in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Our aim here is to mine truly parallel sentences, as opposed to comparable sentence pairs or loose translations as in most previous work. The method we introduce exploits Bracketing ITGs to produce the first known results for this problem. Experiments show that it obtains large accuracy gains on this task compared to the expected performance of state-of-the-art models that were developed for the less stringent task of mining comparable sentence pairs.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vijay-shanker-1992-using
https://aclanthology.org/J92-4004
Using Descriptions of Trees in a Tree Adjoining Grammar
This paper describes a new interpretation of Tree Adjoining Grammars (TAG) that allows the embedding of TAG in the unification framework in a manner consistent with the declarative approach taken in this framework. In the new interpretation we present in this paper, the objects manipulated by a TAG are considered to be descriptions of trees. This is in contrast to the traditional view that in a TAG the composition operations of adjoining and substitution combine trees. Borrowing ideas from Description Theory, we propose quasi-trees as a means to represent partial descriptions of trees. Using quasi-trees, we are able to justify the definition of feature structure-based Tree Adjoining Grammars (FTAG) that was first given in Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). In the definition of the FTAG formalism given here, we argue that a grammar manipulates descriptions of trees (i.e., quasi-trees); whereas the structures derived by a grammar are trees that are obtained by taking the minimal readings of such descriptions. We then build on and refine the earlier version of FTAG, give examples that illustrate the usefulness of embedding TAG in the unification framework, and present a logical formulation (and its associated semantics) of FTAG that shows the separation between descriptions of well-formed structures and the actual structures that are derived, a theme that is central to this work. Finally, we discuss some questions that are raised by our new interpretation of the TAG formalism: questions dealing with the nature and definition of the adjoining operation (in contrast to substitution), its relation to multi-component adjoining, and the distinctions between auxiliary and initial structures.
false
[]
[]
null
null
null
This work was partially supported by NSF grant IRI-9016591. I am extremely grateful to A. Abeillé, A. K. Joshi, A. Kroch, K. F. McCoy, Y. Schabes, S. M. Shieber, and D. J. Weir. Their suggestions and comments at various stages have played a substantial role in the development of this work. I am thankful to the reviewers for many useful suggestions. Many of the figures in this paper have been drawn by XTAG (Schabes and Paroubek 1992), a workbench for Tree-Adjoining Grammars. I would like to thank Yves Schabes for making this available to me.
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bangalore-etal-2012-real
https://aclanthology.org/N12-1048
Real-time Incremental Speech-to-Speech Translation of Dialogs
In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with the non-incremental approach.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
karlsson-1990-constraint
https://aclanthology.org/C90-3030
Constraint Grammar as a Framework for Parsing Running Text
Grammars which are used in parsers are often directly imported from autonomous grammar theory and descriptive practice that were not exercised for the explicit purpose of parsing. Parsers have been designed for English based on e.g. Government and Binding Theory, Generalized Phrase Structure Grammar, and Lexical-Functional Grammar. We present a formalism to be used for parsing where the grammar statements are closer to real text sentences and more directly address some notorious parsing problems, especially ambiguity. The formalism is a linguistic one. It relies on transitional probabilities in an indirect way. The probabilities are not part of the description.
false
[]
[]
null
null
null
This research was supported by the Academy of Finland in 1985-89, and by the Technology Development Centre of Finland (TEKES) in 1989-90. Part of it belongs to the ESPRIT II project SIMPR (2083). I am indebted to Kimmo Koskenniemi for help in the field of morphological analysis, and to Atro Voutilainen, Juha Heikkilä, and Arto Anttila for help in testing the formalism.
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schlor-etal-2020-improving
https://aclanthology.org/2020.onion-1.5
Improving Sentiment Analysis with Biofeedback Data
Humans frequently are able to read and interpret emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: Thoughts and feelings are the roots of many behavioural responses and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often are expressed in a subtle manner, and are highly depending on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture the contextual meaning. This paper explores if and how we further can enhance sentiment analysis using biofeedback of humans which are experiencing emotions while reading texts. Specifically, we record the heart rate and brain waves of readers that are presented with short texts which have been annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-ng-2007-identification
https://aclanthology.org/D07-1057
Identification and Resolution of Chinese Zero Pronouns: A Machine Learning Approach
In this paper, we present a machine learning approach to the identification and resolution of Chinese anaphoric zero pronouns. We perform both identification and resolution automatically, with two sets of easily computable features. Experimental results show that our proposed learning approach achieves anaphoric zero pronoun resolution accuracy comparable to a previous state-of-the-art, heuristic rule-based approach. To our knowledge, our work is the first to perform both identification and resolution of Chinese anaphoric zero pronouns using a machine learning approach.
false
[]
[]
null
null
null
We thank Susan Converse and Martha Palmer for sharing their Chinese third-person pronoun and zero pronoun coreference corpus.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shen-etal-2004-discriminative
https://aclanthology.org/N04-1023
Discriminative Reranking for Machine Translation
This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system a ranked best list of candidate translations in the target language. We introduce two novel perceptron-inspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-the-art performance in machine translation.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. 0121285. The first author was partially supported by a JHU post-workshop fellowship and NSF Grant ITR-0205456. The second author is partially supported by NSERC, Canada (RGPIN: 264905). We thank the members of the SMT team of JHU Workshop 2003 for help on the dataset and three anonymous reviewers for useful comments.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sharifi-atashgah-bijankhan-2009-corpus
https://aclanthology.org/2009.mtsummit-caasl.11
Corpus-based Analysis for Multi-token Units in Persian
Morphological and syntactic annotation of multi-token units confronts several problems due to the concatenating nature of Persian script and so its orthographic variation. In the present paper, by the analysis of the different collocation types of the tokens, the compositional, non-compositional and semi-compositional constructions are described and then, in order to explain these constructions, the static and dynamic multi-token units will be introduced for the non-generative and generative structures of the verbs, infinitives, prepositions, conjunctions, adverbs, adjectives and nouns. Defining the multi-token unit templates for these categories is one of the important results of this research. The findings can be input to the Persian Treebank generator systems. Also, the machine translation systems using the rule-based methods to parse the texts can utilize the results in text segmentation and parsing.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mowery-etal-2012-medical
https://aclanthology.org/W12-2407
Medical diagnosis lost in translation -- Analysis of uncertainty and negation expressions in English and Swedish clinical texts
In the English clinical and biomedical text domains, negation and certainty usage are two well-studied phenomena. However, few studies have made an in-depth characterization of uncertainties expressed in a clinical setting, and compared this between different annotation efforts. This preliminary, qualitative study attempts to 1) create a clinical uncertainty and negation taxonomy, 2) develop a translation map to convert annotation labels from an English schema into a Swedish schema, and 3) characterize and compare two data sets using this taxonomy. We define a clinical uncertainty and negation taxonomy and a translation map for converting annotation labels between two schemas and report observed similarities and differences between the two data sets.
true
[]
[]
Good Health and Well-Being
null
null
For the English and Swedish data sets, we obtained approval from the University of Pittsburgh IRB and the Regional Ethical Review Board in Stockholm (Etikprövningsnämnden i Stockholm). The study is part of the Interlock project, funded by the Stockholm University Academic Initiative and partially funded by NLM Fellowship 5T15LM007059. Lexicons and probabilities will be made available and updated on the iDASH NLP ecosystem under Resources: http://idash.ucsd.edu/nlp/natural-languageprocessing-nlp-ecosystem.
2012
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
baldwin-etal-2003-alias
https://aclanthology.org/N03-4002
Alias-i Threat Trackers
Alias-i ThreatTrackers are an advanced information access application designed around the needs of analysts working through a large daily data feed. ThreatTrackers help analysts decompose an information gathering topic like the unfolding political situation in Iraq into specifications including people, places, organizations and relationships. These specifications are then used to collect and browse information on a daily basis. The nearest related technologies are information retrieval (search engines), document categorization, information extraction and named entity detection. ThreatTrackers are currently being used in the Total Information Awareness program.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
bernier-colborne-etal-2021-n
https://aclanthology.org/2021.vardial-1.15
N-gram and Neural Models for Uralic Language Identification: NRC at VarDial 2021
We describe the systems developed by the National Research Council Canada for the Uralic language identification shared task at the 2021 VarDial evaluation campaign. We evaluated two different approaches to this task: a probabilistic classifier exploiting only character 5-grams as features, and a character-based neural network pre-trained through self-supervision, then fine-tuned on the language identification task. The former method turned out to perform better, which casts doubt on the usefulness of deep learning methods for language identification, where they have yet to convincingly and consistently outperform simpler and less costly classification algorithms exploiting n-gram features.
false
[]
[]
null
null
null
We thank the organizers for their work developing and running this shared task.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
soundrarajan-etal-2011-interface
https://aclanthology.org/P11-4024
An Interface for Rapid Natural Language Processing Development in UIMA
This demonstration presents the Annotation Librarian, an application programming interface that supports rapid development of natural language processing (NLP) projects built in Apache Unstructured Information Management Architecture (UIMA). The flexibility of UIMA to support all types of unstructured data (images, audio, and text) increases the complexity of some of the most common NLP development tasks. The Annotation Librarian interface handles these common functions and allows the creation and management of annotations by mirroring Java methods used to manipulate Strings. The familiar syntax and NLP-centric design allows developers to adopt and rapidly develop NLP algorithms in UIMA. The general functionality of the interface is described in relation to the use cases that necessitated its creation.
false
[]
[]
null
null
null
This work was supported using resources and facilities at the VA Salt Lake City Health Care System with funding support from the VA Informatics and Computing Infrastructure (VINCI), VA HSR HIR 08-204 and the Consortium for Healthcare Informatics Research (CHIR), VA HSR HIR 08-374. Views expressed are those of the authors and not necessarily those of the Department of Veterans Affairs.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
buechel-etal-2016-enterprises
https://aclanthology.org/W16-0423
Do Enterprises Have Emotions?
Emotional language of human individuals has been studied for quite a while dealing with opinions and value judgments people have and share with others. In our work, we take a different stance and investigate whether large organizations, such as major industrial players, have and communicate emotions, as well. Such an anthropomorphic perspective has recently been advocated in management and organization studies which consider organizations as social actors. We studied this assumption by analyzing 1,676 annual business and sustainability reports from 90 top-performing enterprises in the United States, Great Britain and Germany. We compared the measurements of emotions in this homogeneous corporate text corpus with those from RCV1, a heterogeneous Reuters newswire corpus. From this, we gathered empirical evidence that business reports compare well with typical emotion-neutral economic news, whereas sustainability reports are much more emotionally loaded, similar to emotion-heavy sports and fashion news from Reuters. Furthermore, our data suggest that these emotions are distinctive and relatively stable over time per organization, thus constituting an emotional profile for enterprises.
true
[]
[]
Industry, Innovation and Infrastructure
Decent Work and Economic Growth
null
null
2016
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
ws-2007-biological
https://aclanthology.org/W07-1000
Biological, translational, and clinical language processing
Biological, translational, and clinical language processing K. BRETONNEL COHEN, DINA DEMNER-FUSHMAN, CAROL FRIEDMAN, LYNETTE HIRSCHMAN, AND JOHN P. PESTIAN Natural language processing has a long history in the medical domain, with research in the field dating back to at least the early 1960s. In the late 1990s, a separate thread of research involving natural language processing in the genomic domain began to gather steam. It has become a major focus of research in the bioinformatics, computational biology, and computational linguistics communities. A number of successful workshops and conference sessions have resulted, with significant progress in the areas of named entity recognition for a wide range of key biomedical classes, concept normalization, and system evaluation. A variety of publicly available resources have contributed to this progress, as well.
true
[]
[]
Good Health and Well-Being
null
null
null
2007
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
erjavec-etal-2004-making
http://www.lrec-conf.org/proceedings/lrec2004/pdf/107.pdf
Making an XML-based Japanese-Slovene Learners' Dictionary
In this paper we present a hypertext dictionary of Japanese lexical units for Slovene students of Japanese at the Faculty of Arts of Ljubljana University. The dictionary is planned as a long-term project in which a simple dictionary is to be gradually enlarged and enhanced, taking into account the needs of the students. Initially, the dictionary was encoded in a tabular format, in a mixture of encodings, and subsequently rendered in HTML. The paper first discusses the conversion of the dictionary into XML, into an encoding that complies with the Text Encoding Initiative (TEI) Guidelines. The conversion into such an encoding validates, enriches, explicates and standardises the structure of the dictionary, thus making it more usable for further development and linguistically oriented research. We also present the current Web implementation of the dictionary, which offers full text search and a tool for practising inflected parts of speech. The paper gives an overview of related research, i.e. other XML oriented Web dictionaries of Slovene and East Asian languages and presents planned developments, i.e. the inclusion of the dictionary into the Reading Tutor program.
true
[]
[]
Quality Education
null
null
null
2004
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
hintz-2016-data
https://aclanthology.org/N16-2006
Data-driven Paraphrasing and Stylistic Harmonization
This thesis proposal outlines the use of unsupervised data-driven methods for paraphrasing tasks. We motivate the development of knowledge-free methods at the guiding use case of multi-document summarization, which requires a domain-adaptable system for both the detection and generation of sentential paraphrases. First, we define a number of guiding research questions that will be addressed in the scope of this thesis. We continue to present ongoing work in unsupervised lexical substitution. An existing supervised approach is first adapted to a new language and dataset. We observe that supervised lexical substitution relies heavily on lexical semantic resources, and present an approach to overcome this dependency. We describe a method for unsupervised relation extraction, which we aim to leverage in lexical substitution as a replacement for knowledge-based resources.
false
[]
[]
null
null
null
This work has been supported by the German Research Foundation as part of the Research Training Group "Adaptive Preparation of Information from Heterogeneous Sources" (AIPHES) under grant No. GRK 1994/1.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lenci-etal-1999-fame
https://aclanthology.org/W99-0407
FAME: a Functional Annotation Meta-scheme for multi-modal and multi-lingual Parsing Evaluation
The paper describes FAME, a functional annotation meta-scheme for comparison and evaluation of existing syntactic annotation schemes, intended to be used as a flexible yardstick in multilingual and multi-modal parser evaluation campaigns. We show that FAME complies with a variety of non-trivial methodological requirements, and has the potential for being effectively used as an "interlingua" between different syntactic representation formats.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false