ID: string (length 11-54)
url: string (length 33-64)
title: string (length 11-184)
abstract: string (length 17-3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: string (9 classes)
goal2: string (9 classes)
goal3: string (1 class)
acknowledgments: string (length 28-1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
milajevs-purver-2014-investigating
https://aclanthology.org/W14-1505
Investigating the Contribution of Distributional Semantic Information for Dialogue Act Classification
This paper presents a series of experiments in applying compositional distributional semantic models to dialogue act classification. In contrast to the widely used bag-of-words approach, we build the meaning of an utterance from its parts by composing the distributional word vectors using vector addition and multiplication. We investigate the contribution of word sequence, dialogue act sequence, and distributional information to the performance, and compare with the current state-of-the-art approaches. Our experiment suggests that distributional information is useful for dialogue act tagging but that simple models of compositionality fail to capture crucial information from word and utterance sequence; more advanced approaches (e.g. sequence- or grammar-driven, such as categorical, word vector composition) are required.
false
[]
[]
null
null
null
We thank Mehrnoosh Sadrzadeh for her helpful advice and valuable discussion. We would like to thank anonymous reviewers for their effective comments. Milajevs is supported by the EP-SRC project EP/J002607/1. Purver is supported in part by the European Community's Seventh Framework Programme under grant agreement no 611733 (ConCreTe).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-caragea-2021-knowledge
https://aclanthology.org/2021.ranlp-1.181
Knowledge Distillation with BERT for Image Tag-Based Privacy Prediction
Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. In this paper, our aim is to learn tag representations for images to improve tag-based image privacy prediction. To achieve this, we explore self-distillation with BERT, in which we utilize knowledge in the form of soft probability distributions (soft labels) from the teacher model to help with the training of the student model. Our approach effectively learns better tag representations with improved performance on private image identification and outperforms state-of-the-art models for this task. Moreover, we utilize the idea of knowledge distillation to improve tag representations in a semi-supervised learning task. Our semi-supervised approach with only 20% of annotated data achieves similar performance compared with its supervised learning counterpart. Last, we provide a comprehensive analysis to get a better understanding of our approach.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research is supported in part by NSF. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. The computing for this project was performed on AWS. We also thank our reviewers for their feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
guijarrubia-etal-2004-evaluation
http://www.lrec-conf.org/proceedings/lrec2004/pdf/309.pdf
Evaluation of a Spoken Phonetic Database in Basque Language
In this paper we present the evaluation of a spoken phonetic corpus designed to train acoustic models for Speech Recognition applications in Basque Language. A complete set of acoustic-phonetic decoding experiments was carried out over the proposed database. Context dependent and independent phoneme units were used in these experiments with two different approaches to acoustic modeling, namely discrete and continuous Hidden Markov Models (HMMs). A complete set of HMMs were trained and tested with the database. Experimental results reveal that the database is large and phonetically rich enough to get great acoustic models to be integrated in Continuous Speech Recognition Systems.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
atanasov-etal-2019-predicting
https://aclanthology.org/K19-1096
Predicting the Role of Political Trolls in Social Media
We investigate the political roles of "Internet trolls" in social media. Political trolls, such as the ones linked to the Russian Internet Research Agency (IRA), have recently gained enormous attention for their ability to sway public opinion and even influence elections. Analysis of the online traces of trolls has shown different behavioral patterns, which target different slices of the population. However, this analysis is manual and labor-intensive, thus making it impractical as a first-response tool for newly-discovered troll farms. In this paper, we show how to automate this analysis by using machine learning in a realistic setting. In particular, we show how to classify trolls according to their political role (left, news feed, right) by using features extracted from social media, i.e., Twitter, in two scenarios: (i) in a traditional supervised learning scenario, where labels for trolls are available, and (ii) in a distant supervision scenario, where labels for trolls are not available, and we rely on more-commonly-available labels for news outlets mentioned by the trolls. Technically, we leverage the community structure and the text of the messages in the online social network of trolls represented as a graph, from which we extract several types of learned representations, i.e., embeddings, for the trolls. Experiments on the "IRA Russian Troll" dataset show that our methodology improves over the state-of-the-art in the first scenario, while providing a compelling case for the second scenario, which has not been explored in the literature thus far.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research is part of the Tanbih project, 4 which aims to limit the effect of "fake news", propaganda and media bias by making users aware of what they are reading. The project is developed in collaboration between the Qatar Computing Research Institute, HBKU and the MIT Computer Science and Artificial Intelligence Laboratory.Gianmarco De Francisci Morales acknowledges support from Intesa Sanpaolo Innovation Center. The funder had no role in the study design, in the data collection and analysis, in the decision to publish, or in the preparation of the manuscript.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
si-etal-2020-new
https://aclanthology.org/2020.icon-main.20
A New Approach to Claim Check-Worthiness Prediction and Claim Verification
The more we are advancing towards a modern world, the more it opens the path to falsification in every aspect of life. Even in case of knowing the surrounding, common people can not judge the actual scenario as the promises, comments and opinions of the influential people at power keep changing every day. Therefore computationally determining the truthfulness of such claims and comments has a very important societal impact. This paper describes a unique method to extract check-worthy claims from the 2016 US presidential debates and verify the truthfulness of the check-worthy claims. We classify the claims for check-worthiness with our modified Tf-Idf model which is used in background training on fact-checking news articles (NBC News and Washington Post). We check the truthfulness of the claims by using POS, sentiment score and cosine similarity features.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
kelleher-2007-dit
https://aclanthology.org/2007.mtsummit-ucnlg.17
DIT: frequency based incremental attribute selection for GRE
The DIT system uses an incremental greedy search to generate descriptions, similar to the incremental algorithm described in (Dale and Reiter, 1995). The selection of the next attribute to be tested for inclusion in the description is ordered by the absolute frequency of each attribute in the training corpus. Attributes are selected in descending order of frequency (i.e. the attribute that occurred most frequently in the training corpus is selected first). The type attribute is always included in the description. Other attributes are included in the description if they exclude at least 1 distractor from the set of distractors that fulfil the description generated prior to the attribute's selection. The algorithm terminates when a distinguishing description has been generated or all the target's attributes have been tested for inclusion in the description. To generate a description the system does the following:
true
[]
[]
Quality Education
null
null
null
2007
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
webb-etal-2010-evaluating
http://www.lrec-conf.org/proceedings/lrec2010/pdf/115_Paper.pdf
Evaluating Human-Machine Conversation for Appropriateness
Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.
false
[]
[]
null
null
null
This work was funded by the Companions project (www.companions-project.org) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nagao-etal-2002-annotation
https://aclanthology.org/C02-1098
Annotation-Based Multimedia Summarization and Translation
This paper presents techniques for multimedia annotation and their application to video summarization and translation. Our tool for annotation allows users to easily create annotation including voice transcripts, video scene descriptions, and visual/auditory object descriptions. The module for voice transcription is capable of multilingual spoken language identification and recognition. A video scene description consists of semi-automatically detected keyframes of each scene in a video clip and time codes of scenes. A visual object description is created by tracking and interactive naming of people and objects in video scenes. The text data in the multimedia annotation are syntactically and semantically structured using linguistic annotation. The proposed multimedia summarization works upon a multimodal document that consists of a video, keyframes of scenes, and transcripts of the scenes. The multimedia translation automatically generates several versions of multimedia content in different languages.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mathur-etal-2018-detecting
https://aclanthology.org/W18-3504
Detecting Offensive Tweets in Hindi-English Code-Switched Language
The exponential rise of social media websites like Twitter, Facebook and Reddit in linguistically diverse geographical regions has led to hybridization of popular native languages with English in an effort to ease communication. The paper focuses on the classification of offensive tweets written in Hinglish language, which is a portmanteau of the Indic language Hindi with the Roman script. The paper introduces a novel tweet dataset, titled Hindi-English Offensive Tweet (HEOT) dataset, consisting of tweets in Hindi-English code switched language split into three classes: nonoffensive, abusive and hate-speech. Further, we approach the problem of classification of the tweets in HEOT dataset using transfer learning wherein the proposed model employing Convolutional Neural Networks is pre-trained on tweets in English followed by retraining on Hinglish tweets.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
kallgren-1996-linguistic
https://aclanthology.org/C96-2114
Linguistic Indeterminacy as a Source of Errors in Tagging
Most evaluations of part-of-speech tagging compare the output of an automatic tagger to some established standard, define the differences as tagging errors and try to remedy them by, e.g., more training of the tagger. The present article is based on a manual analysis of a large number of tagging errors. Some clear patterns among the errors can be discerned, and the sources of the errors as well as possible alternative methods of remedy are presented and discussed. In particular, the problems with undecidable cases are treated.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
singha-roy-mercer-2022-biocite
https://aclanthology.org/2022.bionlp-1.23
BioCite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles
Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create an automatically generated corpus for such a task.
true
[]
[]
Good Health and Well-Being
null
null
null
2022
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
noble-etal-2021-semantic
https://aclanthology.org/2021.starsem-1.3
Semantic shift in social networks
Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community's social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant word-level effects of frequency and change in frequency, which replicate previous findings.
false
[]
[]
null
null
null
This work was supported by grant 2014-39 from the Swedish Research Council (VR) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg. This work was also supported by the Marianne and Marcus Wallenberg Foundation grant 2019.0214 for the Gothenburg Research Initiative for Politically Emergent Systems (GRIPES). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455 Awarded to RF).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
himoro-pareja-lora-2020-towards
https://aclanthology.org/2020.lrec-1.327
Towards a Spell Checker for Zamboanga Chavacano Orthography
Zamboanga Chabacano (ZC) is the most vibrant variety of Philippine Creole Spanish, with over 400,000 native speakers in the Philippines (as of 2010). Following its introduction as a subject and a medium of instruction in the public schools of Zamboanga City from Grade 1 to 3 in 2012, an official orthography for this variety, the so-called "Zamboanga Chavacano Orthography", was approved in 2014. Its complexity, however, is a barrier to most speakers, since it does not necessarily reflect the particular phonetic evolution in ZC, but favours etymology instead. The distance between the correct spelling and the different spelling variations is often so great that delivering acceptable performance with the current de facto spell checking technologies may be challenging. The goals of this research are to propose i) a spelling error taxonomy for ZC, formalised as an ontology and ii) an adaptive spell checking approach using Character-Based Statistical Machine Translation to correct spelling errors in ZC. Our results show that this approach is suitable for the goals mentioned and that it could be combined with other current spell checking technologies to achieve even higher performance.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
edmundson-1963-behavior
https://aclanthology.org/1963.earlymt-1.9
The behavior of English articles
null
false
[]
[]
null
null
null
null
1963
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
skjaerholt-2014-chance
https://aclanthology.org/P14-1088
A chance-corrected measure of inter-annotator agreement for syntax
Following the works of Carletta (1996) and Artstein and Poesio (2008), there is an increasing consensus within the field that in order to properly gauge the reliability of an annotation effort, chance-corrected measures of inter-annotator agreement should be used. With this in mind, it is striking that virtually all evaluations of syntactic annotation efforts use uncorrected parser evaluation metrics such as bracket F1 (for phrase structure) and accuracy scores (for dependencies). In this work we present a chance-corrected metric based on Krippendorff's α, adapted to the structure of syntactic annotations and applicable both to phrase structure and dependency annotation without any modifications. To evaluate our metric we first present a number of synthetic experiments to better control the sources of noise and gauge the metric's responses, before finally contrasting the behaviour of our chance-corrected metric with that of uncorrected parser evaluation metrics on real corpora.
false
[]
[]
null
null
null
I would like to thank Jan Štěpánek at Charles University for data from the PCEDT and help with the conversion process, the CDT project for publishing their agreement data, Per Erik Solberg at the Norwegian National Library for data from the NDT, and Emily Bender at the University of Washington for the SSD data. (Footnote 8: The Python implementation used in this work, using NumPy and the PyPy compiler, took seven and a half hours to compute a single α for the PCEDT data set on an Intel Core i7 2.9 GHz computer. The program is single-threaded.)
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zarriess-etal-2016-pentoref
https://aclanthology.org/L16-1019
PentoRef: A Corpus of Spoken References in Task-oriented Dialogues
PentoRef is a corpus of task-oriented dialogues collected in systematically manipulated settings. The corpus is multilingual, with English and German sections, and overall comprises more than 20000 utterances. The dialogues are fully transcribed and annotated with referring expressions mapped to objects in corresponding visual scenes, which makes the corpus a rich resource for research on spoken referring expressions in generation and resolution. The corpus includes several sub-corpora that correspond to different dialogue situations where parameters related to interactivity, visual access, and verbal channel have been manipulated in systematic ways. The corpus thus lends itself to very targeted studies of reference in spontaneous dialogue.
false
[]
[]
null
null
null
This work was supported by the German Research Foundation (DFG) through the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University and the DUEL project (grant SCHL 845/5-1).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2020-volctrans
https://aclanthology.org/2020.wmt-1.112
Volctrans Parallel Corpus Filtering System for WMT 2020
In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions. The task requires the participants to align potential parallel sentence pairs out of the given document pairs, and score them so that low-quality pairs can be filtered. Our system, Volctrans, is made of two modules, i.e., a mining module and a scoring module. Based on the word alignment model, the mining module adopts an iterative mining strategy to extract latent parallel sentences. In the scoring module, an XLM-based scorer provides scores, followed by reranking mechanisms and ensemble. Our submissions outperform the baseline by 3.x/2.x and 2.x/2.x for km-en and ps-en on From Scratch/Fine-Tune conditions.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
trotta-etal-2020-adding
https://aclanthology.org/2020.lrec-1.532
Adding Gesture, Posture and Facial Displays to the PoliModal Corpus of Political Interviews
This paper introduces a multimodal corpus in the political domain, which on top of transcribed face-to-face interviews presents the annotation of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with information about metadata (i.e. tools used for the transcription, link to the interview etc.), pauses (used to mark a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burp and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional level of annotation relating to non-verbal elements used by three Italian politicians belonging to three different political parties and who at the time of the talk-show were all candidates for the presidency of the Council of Ministers. We also present the results of some analyses aimed at identifying existing relations between the proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in the communication strategy.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
voutilainen-1994-noun
https://aclanthology.org/W93-0426
A Noun Phrase Parser of English
Atro Voutilainen, Helsinki. Abstract: An accurate rule-based noun phrase parser of English is described. Special attention is given to the linguistic description. A report on a performance test concludes the paper.
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pei-feng-2006-representation
https://aclanthology.org/Y06-1063
Representation of Original Sense of Chinese Characters by FOPC
In Natural Language Processing(NLP), the automatic analysis of meaning occupies a very important position. The representation of original sense of Chinese character plays an irreplaceable role in the processing of advanced units of Chinese language such as the processing of syntax and semantics, etc. This paper, by introducing a few important concepts: FOPC, Ontology and Case Grammar, discusses the representation of original sense of Chinese character.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
desai-etal-2015-logistic
https://aclanthology.org/W15-5931
Logistic Regression for Automatic Lexical Level Morphological Paradigm Selection for Konkani Nouns
Automatic selection of morphological paradigm for a noun lemma is necessary to automate the task of building morphological analyzer for nouns with minimal human interventions. Morphological paradigms can be of two types namely surface level morphological paradigms and lexical level morphological paradigms. In this paper we present a method to automatically select lexical level morphological paradigms for Konkani nouns. Using the proposed concept of paradigm differentiating measure to generate a training data set we found that logistic regression can be used to automatically select lexical level morphological paradigms with an F-Score of 0.957.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yousef-etal-2021-press
https://aclanthology.org/2021.emnlp-demo.18
Press Freedom Monitor: Detection of Reported Press and Media Freedom Violations in Twitter and News Articles
Freedom of the press and media is of vital importance for democratically organised states and open societies. We introduce the Press Freedom Monitor, a tool that aims to detect reported press and media freedom violations in news articles and tweets. It is used by press and media freedom organisations to support their daily monitoring and to trigger rapid response actions. The Press Freedom Monitor enables the monitoring experts to get a swift overview of recently reported incidents and it has performed impressively in this regard. This paper presents our work on the tool, starting with the training phase, which comprises defining the topic-related keywords to be used for querying APIs for news and Twitter content and evaluating different machine learning models based on a training dataset specifically created for our use case. Then, we describe the components of the production pipeline, including data gathering, duplicates removal, country mapping, case mapping and the user interface. We also conducted a usability study to evaluate the effectiveness of the user interface, and describe improvement plans for future work.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work is funded by the European Commission within the Media Freedom Rapid Response project and co-financed through public funding by the regional parliament of Saxony, Germany.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
walker-etal-2018-evidence
https://aclanthology.org/W18-5209
Evidence Types, Credibility Factors, and Patterns or Soft Rules for Weighing Conflicting Evidence: Argument Mining in the Context of Legal Rules Governing Evidence Assessment
This paper reports on the results of an empirical study of adjudicatory decisions about veterans' claims for disability benefits in the United States. It develops a typology of kinds of relevant evidence (argument premises) employed in cases, and it identifies factors that the tribunal considers when assessing the credibility or trustworthiness of individual items of evidence. It also reports on patterns or "soft rules" that the tribunal uses to comparatively weigh the probative value of conflicting evidence. These evidence types, credibility factors, and comparison patterns are developed to be inter-operable with legal rules governing the evidence assessment process in the U.S. This approach should be transferable to other legal and non-legal domains.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We are grateful to the peer reviewers for this paper, whose comments led to significant improvements. This research was generously supported by the Maurice A. Deane School of Law at Hofstra University, New York, USA.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
klein-1999-standardisation
https://aclanthology.org/W99-0305
Standardisation Efforts on the Level of Dialogue Act in the MATE Project
This paper describes the state of the art of coding schemes for dialogue acts and the efforts to establish a standard in this field. We present a review and comparison of currently available schemes and outline the comparison problems we had due to domain, task, and language dependencies of schemes. We discuss solution strategies which have in mind the reusability of corpora. Reusability is a crucial point because production and annotation of corpora is very time- and cost-consuming, but the current broad variety of schemes makes reusability of annotated corpora very hard. The work of this paper takes place in the framework of the European Union funded MATE project. MATE aims to develop general methodological guidelines for the creation, annotation, retrieval and analysis of annotated corpora.
false
[]
[]
null
null
null
The work described here is part of the European Union funded MATE LE Telematics Project LE4-8370.
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
busemann-1997-automating
https://aclanthology.org/A97-2003
Automating NL Appointment Scheduling with COSMA
Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by email. We demonstrate COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
The following persons have contributed significantly to the development and the implementation of the NL server system and its components: Thierry Declerck, Abdel Kader Diagne, Luca Dini, Judith Klein, and G/inter Neumann. The PASHA agent system has been developed and extended by Sven Schmeier.
1997
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
zhang-etal-2021-namer
https://aclanthology.org/2021.naacl-demos.3
NAMER: A Node-Based Multitasking Framework for Multi-Hop Knowledge Base Question Answering
We present NAMER, an open-domain Chinese knowledge base question answering system based on a novel node-based framework that better grasps the structural mapping between questions and KB queries by aligning the nodes in a query with their corresponding mentions in question. Equipped with techniques including data augmentation and multitasking, we show that the proposed framework outperforms the previous SoTA on CCKS CKBQA dataset. Moreover, we develop a novel data annotation strategy that facilitates the node-to-mention alignment, a dataset with such strategy is also published to promote further research. An online demo of NAMER is provided to visualize our framework and supply extra information for users, a video illustration of NAMER is also available.
false
[]
[]
null
null
null
We would like to thank Yanzeng Li and Wenjie Li for the valuable assistance on system design and implementation. We also appreciate anonymous reviewers for their insightful and constructive comments. This work was supported by NSFC under grants 61932001, 61961130390, U20A20174. This work was also partially supported by Beijing Academy of Artificial Intelligence (BAAI). The corresponding author of this work is Lei Zou (zoulei@pku.edu.cn).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kay-2014-computational
https://aclanthology.org/C14-1191
Does a Computational Linguist have to be a Linguist?
Early computational linguists supplied much of the theoretical basis that the ALPAC report said was needed for research on the practical problem of machine translation. The result of their efforts turned out to be more fundamental in that it provided a general theoretical basis for the study of language use as a process, giving rise eventually to constraint-based grammatical formalisms for syntax, finite-state approaches to morphology and phonology, and a host of models of how speakers might assemble sentences, and hearers take them apart. Recently, an entirely new enterprise, based on machine learning and big data, has sprung onto the scene and challenged the ALPAC committee's finding that linguistic processing must have a firm basis in linguistic theory. In this talk, I will show that the long-term development of linguistic processing requires linguistic theory, sophisticated statistical manipulation of big data, and a third component which is not linguistic at all.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gonzalez-etal-2021-interaction
https://aclanthology.org/2021.findings-acl.259
On the Interaction of Belief Bias and Explanations
A myriad of explainability methods have been proposed in recent years, but there is little consensus on how to evaluate them. While automatic metrics allow for quick benchmarking, it isn't clear how such metrics reflect human interaction with explanations. Human evaluation is of paramount importance, but previous protocols fail to account for belief biases affecting human performance, which may lead to misleading conclusions. We provide an overview of belief bias, its role in human evaluation, and ideas for NLP practitioners on how to account for it. For two experimental paradigms, we present a case study of gradient-based explainability introducing simple ways to account for humans' prior beliefs: models of varying quality and adversarial examples. We show that conclusions about the highest performing methods change when introducing such controls, pointing to the importance of accounting for belief bias in evaluation.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank the reviewers for their insightful feedback for this and previous versions of this paper. This work is partly funded by the Innovation Fund Denmark.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
tanimura-nakagawa-2000-alignment
https://aclanthology.org/W00-1708
Alignment of Sound Track with Text in a TV Drama
null
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joshi-2013-invited
https://aclanthology.org/W13-3702
Invited talk: Dependency Representations, Grammars, Folded Structures, among Other Things!
In a dependency grammar (DG) dependency representations (trees) directly express the dependency relations between words. The hierarchical structure emerges out of the representation. There are no labels other than the words themselves. In a phrase structure type of representation words are associated with some category labels and then the dependencies between the words emerge indirectly in terms of the phrase structure, the nonterminal labels, and possibly some indices associated with the labels. Behind the scene there is a phrase structure grammar (PSG) that builds the hierarchical structure. In a categorical type of grammar (CG), words are associated with labels that encode the combinatory potential of each word. Then the hierarchical structure (tree structure) emerges out of a set of operations such as application, function composition, type raising, among others. In a tree-adjoining grammar (TAG), each word is associated with an elementary tree that encodes both the hierarchical and the dependency structure associated with the lexical anchor and the tree(s) associated with a word. The elementary trees are then composed with the operations of substitution and adjoining. In a way, the dependency potential of a word is localized within the elementary tree (trees) associated with a word. Already TAG and TAG-like grammars are able to represent dependencies that go beyond those that can be represented by context-free grammars, but in a controlled way. With this perspective and with the availability of larger dependency annotated corpora (e.g. the Prague Dependency Treebank) one is able to assess how far one can cover the dependencies that actually appear in the corpora. This approach has the potential of carrying out an 'empirical' investigation of the power of representations and the associated grammars.
Here by 'empirical' we do not mean 'statistical or distributional' but rather in the sense of covering as much as possible the actual data in annotated corpora! If time permits, I will talk about how dependencies are represented in nature. For example, grammars have been used to describe the folded structure of RNA biomolecules. The folded structure here describes the dependencies between the amino acids as they appear in an RNA biomolecule. One can then ask the question: Can we represent a sentence structure as a folded structure, where the fold captures both the dependencies and the structure, without any additional labels?
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cyrus-feddes-2004-model
https://aclanthology.org/W04-2202
A Model for Fine-Grained Alignment of Multilingual Texts
While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi- or multilingual texts which are aligned on a level in-between are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicate-argument structures and thus covers the middle ground between sentence and word alignment. The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
koller-kruijff-2004-talking
https://aclanthology.org/C04-1049
Talking robots with Lego MindStorms
This paper shows how talking robots can be built from off-the-shelf components, based on the Lego MindStorms robotics platform. We present four robots that students created as final projects in a seminar we supervised. Because Lego robots are so affordable, we argue that it is now feasible for any dialogue researcher to tackle the interesting challenges at the robot-dialogue interface.
false
[]
[]
null
null
null
Acknowledgments. The authors would like to thank LEGO and CLT Sprachtechnologie for providing free components from which to build our robot systems. We are deeply indebted to our students, who put tremendous effort into designing and building the presented robots. Further information about the student projects (including a movie) is available at the course website, http://www.coli.unisb.de/cl/courses/lego-02.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chatterjee-etal-2017-multi
https://aclanthology.org/W17-4773
Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task
Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. For the former language direction, our primary submission improves over the MT baseline up to -4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU).
false
[]
[]
null
null
null
This work has been partially supported by the ECfunded H2020 project QT21 (grant agreement no. 645452).
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
diab-bhutada-2009-verb
https://aclanthology.org/W09-2903
Verb Noun Construction MWE Token Classification
We address the problem of classifying multiword expression tokens in running text. We focus our study on Verb-Noun Constructions (VNC) that vary in their idiomaticity depending on context. VNC tokens are classified as either idiomatic or literal. We present a supervised learning approach to the problem. We experiment with different features. Our approach yields the best results to date on MWE classification. Combining different linguistically motivated features, the overall performance yields an F-measure of 84.58%, corresponding to an F-measure of 89.96% for idiomaticity identification and classification and 62.03% for literal identification and classification.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
laurent-etal-2010-ad
http://www.lrec-conf.org/proceedings/lrec2010/pdf/133_Paper.pdf
Ad-hoc Evaluations Along the Lifecycle of Industrial Spoken Dialogue Systems: Heading to Harmonisation?
With a view to rationalise the evaluation process within the Orange Labs spoken dialogue system projects, a field audit has been realised among the various related professionals. The article presents the main conclusions of the study and draws work perspectives to enhance the evaluation process in such a complex organisation. We first present the typical spoken dialogue system project lifecycle and the involved communities of stakeholders. We then sketch a map of indicators used across the teams. It shows that each professional category designs its evaluation metrics according to a case-by-case strategy, each one targeting different goals and methodologies. And last, we identify weaknesses in how the evaluation process is handled by the various teams. Among others, we mention: the dependency on the design and exploitation tools that may not be suitable for an adequate collection of relevant indicators, the need to refine some indicators' definition and analysis to obtain valuable information for system enhancement, the sharing issue that advocates for a common definition of indicators across the teams and, as a consequence, the need for shared applications that support and encourage such a rationalisation.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rosner-2002-future
http://www.lrec-conf.org/proceedings/lrec2002/pdf/256.pdf
The Future of Maltilex
The Maltilex project, supported by the University of Malta, has now been running for approximately 3 years. Its aim is to create a computational lexicon of Maltese to serve as the basic infrastructure for the development of a wide variety of language-enabled applications. The project is further described in Rosner et al. (Rosner et al., 1999; Rosner et al., 1998). This paper discusses the background, achievements, and immediate future aims of the project. It concludes with a discussion of some themes to be pursued in the medium term.
false
[]
[]
null
null
null
This work is being supported by the University of Malta. Thanks also go to colleagues Ray Fabri, Joe Caruana, Albert Gatt, and Angelo Dalli all of whom are working actively for the project.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cramer-etal-2006-building
http://www.lrec-conf.org/proceedings/lrec2006/pdf/206_pdf.pdf
Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia
The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and the German questions from the CLEF QA 2004 effort on the other hand. Our analysis of the datasets suggests that using our novel method a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here is readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.
false
[]
[]
null
null
null
Acknowledgments. This research was partly funded by the BMBF Project SmartWeb under Federal Ministry of Education and Research grant 01IM D01M. We thank Tim Bartel from the Wikimedia Foundation for feedback. A big "thank you" goes to all volunteer subjects, without whom this would not have been possible. We also thank the two Web experiment portals for linking to our study and Holly Branigan for discussions about alignment.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-park-2004-bioar
https://aclanthology.org/W04-0711
BioAR: Anaphora Resolution for Relating Protein Names to Proteome Database Entries
The need for associating, or grounding, protein names in the literature with the entries of proteome databases such as Swiss-Prot is well-recognized. The protein names in the biomedical literature show a high degree of morphological and syntactic variations, and various anaphoric expressions including null anaphors. We present a biomedical anaphora resolution system, BioAR, in order to address the variations of protein names and to further associate them with Swiss-Prot entries as the actual entities in the world. The system shows the performance of 59.5%-75.0% precision and 40.7%-56.3% recall, depending on the specific types of anaphoric expressions. We apply BioAR to the protein names in the biological interactions as extracted by our biomedical information extraction system, or BioIE, in order to construct protein pathways automatically.
true
[]
[]
Good Health and Well-Being
null
null
We are grateful to the anonymous reviewers and to Bonnie Webber for helpful comments. This work has been supported by the Korea Science and Engineering Foundation through AITrc.
2004
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xie-pu-2021-empathetic
https://aclanthology.org/2021.conll-1.10
Empathetic Dialog Generation with Fine-Grained Intents
Empathetic dialog generation aims at generating coherent responses following previous dialog turns and, more importantly, showing a sense of caring and a desire to help. Existing models either rely on pre-defined emotion labels to guide the response generation, or use deterministic rules to decide the emotion of the response. With the advent of advanced language models, it is possible to learn subtle interactions directly from the dataset, providing that the emotion categories offer sufficient nuances and other non-emotional but emotion regulating intents are included. In this paper, we describe how to incorporate a taxonomy of 32 emotion categories and 8 additional emotion regulating intents to succeed the task of empathetic response generation. To facilitate the training, we also curated a large-scale emotional dialog dataset from movie subtitles. Through a carefully designed crowdsourcing experiment, we evaluated and demonstrated how our model produces more empathetic dialogs compared with its baselines.
true
[]
[]
Good Health and Well-Being
null
null
null
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cesa-bianchi-reverberi-2009-online
https://aclanthology.org/2009.eamt-smart.3
Online learning for CAT applications
CAT meets online learning. The vector w contains the decoder online weights.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mckeown-1984-natural
https://aclanthology.org/P84-1043
Natural Language for Expert Systems: Comparisons with Database Systems
Do natural language database systems still provide a valuable environment for further work on natural language processing?
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pitler-etal-2010-using
https://aclanthology.org/C10-1100
Using Web-scale N-grams to Improve Base NP Parsing Performance
We use web-scale N-grams in a base NP parser that correctly analyzes 95.4% of the base NPs in natural text. Web-scale data improves performance. That is, there is no data like more data. Performance scales log-linearly with the number of parameters in the model (the number of unique N-grams). The web-scale N-grams are particularly helpful in harder cases, such as NPs that contain conjunctions.
false
[]
[]
null
null
null
We gratefully acknowledge the Center for Language and Speech Processing at Johns Hopkins University for hosting the workshop at which this research was conducted.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
du-etal-2021-learning
https://aclanthology.org/2021.acl-long.403
Learning Event Graph Knowledge for Abductive Reasoning
Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering. To facilitate this task, a narrative text based abductive reasoning task αNLI is proposed, together with explorations about building reasoning framework using pretrained language models. However, abundant event commonsense knowledge is not well exploited for this task. To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task. Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the αNLI task.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (2020AAA0106501), and the National Natural Science Foundation of China (61976073).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
masmoudi-etal-2019-semantic
https://aclanthology.org/R19-1084
Semantic Language Model for Tunisian Dialect
In this paper, we describe the process of creating a statistical Language Model (LM) for the Tunisian Dialect. Indeed, this work is part of the realization of an Automatic Speech Recognition (ASR) system for the Tunisian Railway Transport Network. Since our field of work has been limited, there are several words with similar behaviors (semantic for example) but they do not have the same appearance probability; their class groupings will therefore be possible. For these reasons, we propose to build an n-class LM that is based mainly on the integration of purely semantic data. Indeed, each class represents an abstraction of similar labels. In order to improve the sequence labeling task, we proposed to use a discriminative algorithm based on the Conditional Random Field (CRF) model. To better judge our choice of creating an n-class word model, we compared the created model with the 3-gram type model on the same test corpus of evaluation. Additionally, to assess the impact of using the CRF model to perform the semantic labelling task in order to construct semantic classes, we compared the n-class model created using the CRF in the semantic labelling task with the n-class model created without using the CRF in the semantic labelling task. The comparison shows that the predictive power of the n-class model obtained by applying the CRF model in the semantic labelling is better than that of the other two models in terms of perplexity.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saggion-etal-2010-multilingual
https://aclanthology.org/C10-2122
Multilingual Summarization Evaluation without Human Models
We study correlation of rankings of text summarization systems using evaluation methods with and without human models. We apply our comparison framework to various well-established content-based evaluation measures in text summarization such as coverage, Responsiveness, Pyramids and ROUGE, studying their associations in various text summarization tasks including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.
false
[]
[]
null
null
null
We thank three anonymous reviewers for their valuable and enthusiastic comments. Horacio Saggion is grateful to the Programa Ramón y Cajal from the Ministerio de Ciencia e Innovación, Spain and to a Comença grant from Universitat Pompeu Fabra (COMENÇ A10.004). This work is partially supported by a postdoctoral grant (National Program for Mobility of Research Human Resources; National Plan of Scientific Research, Development and Innovation 2008-2011) given to Iria da Cunha by the Ministerio de Ciencia e Innovación, Spain.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
de-waard-pander-maat-2012-epistemic
https://aclanthology.org/W12-4306
Epistemic Modality and Knowledge Attribution in Scientific Discourse: A Taxonomy of Types and Overview of Features
We propose a model for knowledge attribution and epistemic evaluation in scientific discourse, consisting of three dimensions with different values: source (author, other, unknown); value (unknown, possible, probable, presumed true) and basis (reasoning, data, other). Based on a literature review, we investigate four linguistic features that mark different types of epistemic evaluation (modal auxiliary verbs, adverbs/adjectives, reporting verbs and references). A corpus study on two biology papers indicates the usefulness of this model and suggests some typical trends. In particular, we find that matrix clauses with a reporting verb, of the form 'These results suggest', are the predominant feature indicating knowledge attribution in scientific text.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We wish to thank Eduard Hovy for providing the insight that modality can be thought of like sentiment, and our anonymous reviewers for their constructive comments. Anita de Waard's research is supported by Elsevier Labs and a grant from the Dutch funding organization NWO, under their Casimir Programme.
2012
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
laparra-etal-2015-timelines
https://aclanthology.org/W15-4508
From TimeLines to StoryLines: A preliminary proposal for evaluating narratives
We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing StoryLines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial StoryLine task proposal.
false
[]
[]
null
null
null
We are grateful to the anonymous reviewers for their insightful comments. This work has been partially funded by SKaTer (TIN2012-38584-C06-02) and NewsReader (FP7-ICT-2011-8-316404), as well as the READERS project with the financial support of MINECO, ANR (convention ANR-12-CHRI-0004-03) and EPSRC (EP/K017845/1) in the framework of ERA-NET CHIST-ERA (UE FP7/2007.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
deshmukh-etal-2019-sequence
https://aclanthology.org/W19-5809
A Sequence Modeling Approach for Structured Data Extraction from Unstructured Text
Extraction of structured information from unstructured text has always been a problem of interest for the NLP community. Structured data is concise to store, search and retrieve; and it facilitates easier human & machine consumption. Traditionally, structured data extraction from text has been done by using various parsing methodologies, applying domain specific rules and heuristics. In this work, we leverage the developments in the space of sequence modeling for the problem of structured data extraction. Initially, we posed the problem as a machine translation problem and used the state-of-the-art machine translation model. Based on these initial results, we changed the approach to a sequence tagging one. We propose an extension of one of the attractive models for sequence tagging, tailored and effective for our problem. This gave a 4.4% improvement over the vanilla sequence tagging model. We also propose another variant of the sequence tagging model which can handle multiple labels of words. Experiments have been performed on the Wikipedia Infobox Dataset of biographies and results are presented for both single and multi-label models. These models indicate effective alternative deep learning based methods to extract structured data from raw text.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
somers-2005-faking
https://aclanthology.org/U05-1012
Faking it: Synthetic Text-to-speech Synthesis for Under-resourced Languages -- Experimental Design
Speech synthesis or text-to-speech (TTS) systems are currently available for a number of the world's major languages, but for thousands of the world's 'minor' languages no such technology is available. While awaiting the development of such technology, we would like to try the stopgap solution of using an existing TTS system for a major language (the base language) to 'fake' TTS for a minor language (the target language). This paper describes the design for an experiment which involves finding a suitable base language for the Australian Aboriginal language Pitjantjatjara as a target language, and evaluating its usability in the real-life situation of providing language technology support for speakers of the target language whose understanding of the local majority language is limited, for example in the scenario of going to the doctor.
false
[]
[]
null
null
null
Our thanks go to Andrew Longmire at the Department of Environment and Heritage's Cultural Centre, Uluru-Kata Tjuta National Park, Yulara NT, and to Bill Edwards, of the Unaipon School, University of South Australia, Adelaide, for their interest in the experiment, and, we hope eventually, for their assistance in conducting it.
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
edunov-etal-2018-understanding
https://aclanthology.org/D18-1045
Understanding Back-Translation at Scale
An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rohanian-2017-multi
https://doi.org/10.26615/issn.1314-9156.2017_005
Multi-Document Summarization of Persian Text using Paragraph Vectors
null
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shahid-etal-2020-detecting
https://aclanthology.org/2020.nuse-1.15
Detecting and understanding moral biases in news
We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
The authors would like to thank Sumayya Siddiqui, Navya Reddy and Hasan Sehwail for their help with annotating the data.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
meteer-etal-2012-medlingmap
https://aclanthology.org/W12-2417
MedLingMap: A growing resource mapping the Bio-Medical NLP field
The application of natural language processing (NLP) in the biology and medical domain crosses many fields from Healthcare Information to Bioinformatics to NLP itself. In order to make sense of how these fields relate and intersect, we have created "MedLingMap" (www.medlingmap.org) which is a compilation of references with a multi-faceted index. The initial focus has been creating the infrastructure and populating it with references annotated with facets such as topic, resources used (ontologies, tools, corpora), and organizations. Simultaneously we are applying NLP techniques to the text to find clusters, key terms and other relationships. The goal for this paper is to introduce MedLingMap to the community and show how it can be a powerful tool for research and exploration in the field.
true
[]
[]
Good Health and Well-Being
null
null
null
2012
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
habash-metsky-2008-automatic
https://aclanthology.org/2008.amta-papers.9
Automatic Learning of Morphological Variations for Handling Out-of-Vocabulary Terms in Urdu-English MT
We present an approach for online handling of Out-of-Vocabulary (OOV) terms in Urdu-English MT. Since Urdu is morphologically richer than English, we expect a large portion of the OOV terms to be Urdu morphological variations that are irrelevant to English. We describe an approach to automatically learn English-irrelevant (target-irrelevant) Urdu (source) morphological variation rules from standard phrase tables. These rules are learned in an unsupervised (or lightly supervised) manner by exploiting redundancy in Urdu and collocation with English translations. We use these rules to hypothesize in-vocabulary alternatives to the OOV terms. Our results show that we reduce the OOV rate from a standard baseline average of 2.6% to an average of 0.3% (or 89% relative decrease). We also increase the BLEU score by 0.45 (absolute) and 2.8% (relative) on a standard test set. A manual error analysis shows that 28% of handled OOV cases produce acceptable translations in context. [8th AMTA conference, Hawaii, 21-25 October 2008] bnwAnA 'to make through another person (indirect causative)'. Many of these inflectional variations are just "noise" from the point of view of English but some are not. In the work presented here we attempt to automatically learn the patterns of what English is truly blind to and what it is not.
false
[]
[]
null
null
null
The first author was funded under the DARPA GALE program, contract HR0011-06-C-0023.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dary-etal-2021-talep
https://aclanthology.org/2021.cmcl-1.13
TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data
In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
williams-1984-frequency
https://aclanthology.org/1984.bcs-1.7
A frequency-mode device to assist in the machine translation of natural languages
null
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ljubesic-etal-2017-adapting
https://aclanthology.org/W17-1410
Adapting a State-of-the-Art Tagger for South Slavic Languages to Non-Standard Text
In this paper we present the adaptations of a state-of-the-art tagger for South Slavic languages to non-standard texts on the example of the Slovene language. We investigate the impact of introducing in-domain training data as well as additional super-87.41% on the full morphosyntactic description, which is, nevertheless, still quite far from the accuracy of 94.27% achieved on standard text.
false
[]
[]
null
null
null
The work described in this paper was funded by the Slovenian Research Agency national basic research project J6-6842 "Resources, Tools and Methods for the Research of Nonstandard Internet Slovene", the national research programme "Knowledge Technologies", by the Ministry of Education, Science and Sport within the "CLARIN.SI" research infrastructure and the Swiss National Science Foundation grant IZ74Z0 160501 (ReLDI).
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
amiri-etal-2017-repeat
https://aclanthology.org/D17-1255
Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks
We present a novel approach for training artificial neural networks. Our approach is inspired by broad evidence in psychology that shows human learners can learn efficiently and effectively by increasing intervals of time between subsequent reviews of previously learned materials (spaced repetition). We investigate the analogy between training neural models and findings in psychology about human memory model and develop an efficient and effective algorithm to train neural models. The core part of our algorithm is a cognitively-motivated scheduler according to which training instances and their "reviews" are spaced over time. Our algorithm uses only 34-50% of data per epoch, is 2.9-4.8 times faster than standard training, and outperforms competing state-of-the-art baselines. 1
false
[]
[]
null
null
null
We thank Mitra Mohtarami for her constructive feedback during the development of this paper and anonymous reviewers for their thoughtful comments. This work was supported by National Institutes of Health (NIH) grant R01GM114355 from the National Institute of General Medical Sciences (NIGMS). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joshi-etal-2017-triviaqa
https://aclanthology.org/P17-1147
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. 1
false
[]
[]
null
null
null
This work was supported by DARPA contract FA8750-13-2-0019, the WRF/Cable Professorship, gifts from Google and Tencent, and an Allen Distinguished Investigator Award. The authors would like to thank Minjoon Seo for the BiDAF code, and Noah Smith, Srinivasan Iyer, Mark Yatskar, Nicholas FitzGerald, Antoine Bosselut, Dallas Card, and anonymous reviewers for helpful comments.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
riloff-lehnert-1993-dictionary
https://aclanthology.org/X93-1023
Dictionary Construction by Domain Experts
Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
emele-dorna-1998-ambiguity-preserving
https://aclanthology.org/P98-1060
Ambiguity Preserving Machine Translation using Packed Representations
In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities.
false
[]
[]
null
null
null
We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
raiyani-etal-2018-fully
https://aclanthology.org/W18-4404
Fully Connected Neural Network with Advance Preprocessor to Identify Aggression over Facebook and Twitter
Aggression Identification and Hate Speech detection have become an essential part of combating cyberharassment and cyberbullying, and automatic aggression identification can lead to the interception of such trolling. Following the same idealization, the vista.ue team participated in the workshop, which included a shared task on 'Aggression Identification'. A dataset of 15,000 aggression-annotated Facebook Posts and Comments written in Hindi (in both Roman and Devanagari script) and English was made available, and different classification models were designed. This paper presents a model that outperforms Facebook FastText (Joulin et al., 2016a) and deep learning models over this dataset. In particular, the system developed for English, when used to classify Twitter text, outperforms all the systems submitted to the shared task.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
The authors would like to thank COMPETE 2020, PORTUGAL 2020 Programs, the European Union, and LISBOA 2020 for supporting this research as part of Agatha Project SI & IDT number 18022 (Intelligent analysis system of open of sources information for surveillance/crime control) made in collaboration with the University ofÉvora. The colleagues Madhu Agrawal, Silvia Bottura Scardina and Roy Bayot provided insight and expertise that greatly assisted the research.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
pouliquen-etal-2011-statistical
https://aclanthology.org/2011.eamt-1.3
Statistical Machine Translation
This paper presents a study conducted in the course of implementing a project in the World Intellectual Property Organization (WIPO) on assisted translation of patent abstracts and titles from English to French. The tool (called 'Tapta') is trained on an extensive corpus of manually translated patents. These patents are classified, each class belonging to one of the 32 predefined domains. The trained Statistical Machine Translation (SMT) tool uses this additional information to propose more accurate translations according to the context. The performance of the SMT system was shown to be above the current state of the art, but, in order to produce an acceptable translation, a human has to supervise the process. Therefore, a graphical user interface was built in which the translator drives the automatic translation process. A significant experiment with human operators was conducted within WIPO, the output was judged to be successful and a project to use Tapta in production is now under discussion.
false
[]
[]
null
null
null
This work would not have been possible without the help of WIPO translators, namely Cécile Copet, Sophie Maire, Yann Wipraechtiger, Peter Smith and Nicolas Potapov. Special thanks to the 15 persons who participated in the two tests of Tapta and to Paul Halfpenny for his valuable proof-reading.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-palmer-1994-verb
https://aclanthology.org/P94-1019
Verb Semantics and Lexical Selection
This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
libovicky-helcl-2017-attention
https://aclanthology.org/P17-2031
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
false
[]
[]
null
null
null
We would like to thank Ondřej Dušek, Rudolf Rosa, Pavel Pecina, and Ondřej Bojar for a fruitful discussions and comments on the draft of the paper.This research has been funded by the Czech Science Foundation grant no. P103/12/G084, the EU grant no. H2020-ICT-2014-1-645452 (QT21), and Charles University grant no. 52315/2014 and SVV project no. 260 453. This work has been using language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bogantes-etal-2016-towards
https://aclanthology.org/L16-1358
Towards Lexical Encoding of Multi-Word Expressions in Spanish Dialects
This paper describes a pilot study in lexical encoding of multi-word expressions (MWEs) in 4 Latin American dialects of Spanish: Costa Rican, Colombian, Mexican and Peruvian. We describe the variability of MWE usage across dialects. We adapt an existing data model to a dialect-aware encoding, so as to represent dialect-related specificities, while avoiding redundancy of the data common for all dialects. A dozen of linguistic properties of MWEs can be expressed in this model, both on the level of a whole MWE and of its individual components. We describe the resulting lexical resource containing several dozens of MWEs in four dialects and we propose a method for constructing a web corpus as a support for crowdsourcing examples of MWE occurrences. The resource is available under an open license and paves the way towards a large-scale dialect-aware language resource construction, which should prove useful in both traditional and novel NLP applications.
false
[]
[]
null
null
null
This work is an outcome of a student project carried out within the Erasmus Mundus Master's program "Information Technologies for Business Intelligence" 7 . It was supported by the IC1207 COST action PARSEME 8 . We are grateful to prof. Shuly Wintner for his valuable insights into lexical encoding of MWEs.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
le-etal-2020-dual
https://aclanthology.org/2020.coling-main.314
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.
false
[]
[]
null
null
null
This work was supported by a Facebook AI SRA grant, and was granted access to the HPC resources of IDRIS under the allocations 2020-AD011011695 and 2020-AP011011765 made by GENCI. It was also done as part of the Multidisciplinary Institute in Artificial Intelligence MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). We thank the anonymous reviewers for their insightful feedback.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kiyota-etal-2003-dialog
https://aclanthology.org/P03-2027
Dialog Navigator : A Spoken Dialog Q-A System based on Large Text Knowledge Base
This paper describes a spoken dialog Q-A system as a substitute for call centers. The system is capable of making dialogs both for fixing speech recognition errors and for clarifying vague questions, based only on a large text knowledge base. We introduce two measures to make dialogs for fixing recognition errors. An experimental evaluation shows the advantages of these measures.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
amble-2000-bustuc-natural
https://aclanthology.org/W99-1001
BusTUC--A natural language bus route adviser in Prolog
The paper describes a natural language based expert system route adviser for the public bus transport in Trondheim, Norway. The system is available on the Internet, and has been installed at the bus company's web server since the beginning of 1999. The system is bilingual, relying on an internal language independent logic representation.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
johansson-etal-2012-semantic
http://www.lrec-conf.org/proceedings/lrec2012/pdf/455_Paper.pdf
Semantic Role Labeling with the Swedish FrameNet
We present the first results on semantic role labeling using the Swedish FrameNet, which is a lexical resource currently in development. Several aspects of the task are investigated, including the selection of machine learning features, the effect of choice of syntactic parser, and the ability of the system to generalize to new frames and new genres. In addition, we evaluate two methods to make the role label classifier more robust: cross-frame generalization and cluster-based features. Although the small amount of training data limits the performance achievable at the moment, we reach promising results. In particular, the classifier that extracts the boundaries of arguments works well for new frames, which suggests that it already at this stage can be useful in a semi-automatic setting.
false
[]
[]
null
null
null
We are grateful to Percy Liang for the implementation of the Brown clustering software. This work was partly funded by the Centre for Language Technology at Gothenburg University.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
olabiyi-etal-2019-multi
https://aclanthology.org/W19-4114
Multi-turn Dialogue Response Generation in an Adversarial Learning Framework
We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
su-etal-2022-comparison
https://aclanthology.org/2022.acl-long.572
A Comparison of Strategies for Source-Free Domain Adaptation
Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation. We take algorithms that traditionally assume access to the source-domain training data-active learning, self-training, and data augmentation-and adapt them for source-free domain adaptation. Then we systematically compare these different strategies across multiple tasks and domains. We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.
false
[]
[]
null
null
null
Research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers R01LM012918 and R01LM010090. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nissim-etal-2004-annotation
http://www.lrec-conf.org/proceedings/lrec2004/pdf/638.pdf
An Annotation Scheme for Information Status in Dialogue
We present an annotation scheme for information status (IS) in dialogue, and validate it on three Switchboard dialogues. We show that our scheme has good reproducibility, and compare it with previous attempts to code IS and related features. We eventually apply the scheme to 147 dialogues, thus producing a corpus that contains nearly 70,000 NPs annotated for IS and over 15,000 coreference links.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klementiev-etal-2012-inducing
https://aclanthology.org/C12-1089
Inducing Crosslingual Distributed Representations of Words
Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.
false
[]
[]
null
null
null
The work was supported by the MMCI Cluster of Excellence and a Google research award.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-etal-2018-deep
https://aclanthology.org/D18-1107
A Deep Neural Network Sentence Level Classification Method with Context Information
In the sentence classification task, context formed from sentences adjacent to the sentence being classified can provide important information for classification. This context is, however, often ignored. Where methods do make use of context, only small amounts are considered, making it difficult to scale. We present a new method for sentence classification, Context-LSTM-CNN, that makes use of potentially large contexts. The method also utilizes long-range dependencies within the sentence being classified, using an LSTM, and short-span features, using a stacked CNN. Our experiments demonstrate that this approach consistently improves over previous methods on two different datasets.
false
[]
[]
null
null
null
This work was partially supported by the European Union under grant agreement No. 654024 SoBigData.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hassanali-liu-2011-measuring
https://aclanthology.org/W11-1411
Measuring Language Development in Early Childhood Education: A Case Study of Grammar Checking in Child Language Transcripts
Language sample analysis is an important technique used in measuring language development. At present, measures of grammatical complexity such as the Index of Productive Syntax (Scarborough, 1990) are used to measure language development in early childhood. Although these measures depict the overall competence in the usage of language, they do not provide for an analysis of the grammatical mistakes made by the child. In this paper, we explore the use of existing Natural Language Processing (NLP) techniques to provide an insight into the processing of child language transcripts and challenges in automatic grammar checking. We explore the automatic detection of 6 types of verb related grammatical errors. We compare rule based systems to statistical systems and investigate the use of different features. We found the statistical systems performed better than the rule based systems for most of the error categories.
true
[]
[]
Quality Education
null
null
The authors thank Chris Dollaghan for sharing the Paradise data, and Thamar Solorio for discussions. This research is partly supported by an NSF award IIS-1017190.
2011
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
dirkson-etal-2021-fuzzybio
https://aclanthology.org/2021.louhi-1.9
FuzzyBIO: A Proposal for Fuzzy Representation of Discontinuous Entities
Discontinuous entities pose a challenge to named entity recognition (NER). These phenomena occur commonly in the biomedical domain. As a solution, expansions of the BIO representation scheme that can handle these entity types are commonly used (i.e. BIOHD). However, the extra tag types make the NER task more difficult to learn. In this paper we propose an alternative; a fuzzy continuous BIO scheme (FuzzyBIO). We focus on the task of Adverse Drug Response extraction and normalization to compare FuzzyBIO to BIOHD. We find that FuzzyBIO improves recall of NER for two of three data sets and results in a higher percentage of correctly identified disjoint and composite entities for all data sets. Using FuzzyBIO also improves end-to-end performance for continuous and composite entities in two of three data sets. Since FuzzyBIO improves performance for some data sets and the conversion from BIOHD to FuzzyBIO is straightforward, we recommend investigating which is more effective for any data set containing discontinuous entities.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank the SIDN fonds for funding this research and our reviewers for their valuable feedback.
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
huang-etal-2021-adast
https://aclanthology.org/2021.findings-acl.224
AdaST: Dynamically Adapting Encoder States in the Decoder for End-to-End Speech-to-Text Translation
In end-to-end speech translation, acoustic representations learned by the encoder are usually fixed and static, from the perspective of the decoder, which is not desirable for dealing with the cross-modal and cross-lingual challenge in speech translation. In this paper, we show the benefits of varying acoustic states according to decoder hidden states and propose an adaptive speech-to-text translation model that is able to dynamically adapt acoustic states in the decoder. We concatenate the acoustic state and target word embedding sequence and feed the concatenated sequence into subsequent blocks in the decoder. In order to model the deep interaction between acoustic states and target hidden states, a speech-text mixed attention sublayer is introduced to replace the conventional cross-attention network. Experiment results on two widely-used datasets show that the proposed method significantly outperforms state-of-the-art neural speech translation models.
false
[]
[]
null
null
null
The present research was partially supported by the National Key Research and Development Program of China (Grant No. 2019QY1802). We would like to thank the anonymous reviewers for their insightful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
okada-1980-conceptual
https://aclanthology.org/C80-1019
Conceptual Taxonomy of Japanese Verbs for Uderstanding Natural Language and Picture Patterns
This paper presents a taxonomy of "matter concepts" or concepts of verbs that play roles of governors in understanding natural language and picture patterns. For this taxonomy we associate natural language with real world picture patterns and analyze the meanings common to them. The analysis shows that matter concepts are divided into two large classes: "simple matter concepts" and "non-simple matter concepts." Furthermore, the latter is divided into "complex concepts" and "derivative concepts." About 4,700 matter concepts used in daily Japanese were actually classified according to the analysis. As a result of the classification about 1,200 basic matter concepts which cover the concepts of real world matter at a minimum were obtained. This classification was applied to a translation of picture pattern sequences into natural language.
false
[]
[]
null
null
null
null
1980
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2021-identify
https://aclanthology.org/2021.rocling-1.43
Identify Bilingual Patterns and Phrases from a Bilingual Sentence Pair
This paper presents a method for automatically identifying bilingual grammar patterns and extracting bilingual phrase instances from a given English-Chinese sentence pair. In our approach, the English-Chinese sentence pair is parsed to identify English grammar patterns and Chinese counterparts. The method involves generating translations of each English grammar pattern and calculating translation probability of words from a word-aligned parallel corpora. The results allow us to extract the most probable English-Chinese phrase pairs in the sentence pair. We present a prototype system that applies the method to extract grammar patterns and phrases in parallel sentences. An evaluation on randomly selected examples from a dictionary shows that our approach has reasonably good performance. We use human judges to assess the bilingual phrases generated by our approach. The results have potential to assist language learning and machine translation research.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
beigman-klebanov-etal-2018-corpus
https://aclanthology.org/N18-2014
A Corpus of Non-Native Written English Annotated for Metaphor
We present a corpus of 240 argumentative essays written by non-native speakers of English annotated for metaphor. The corpus is made publicly available. We provide benchmark performance of state-of-the-art systems on this new corpus, and explore the relationship between writing proficiency and metaphor use.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
amnueypornsakul-etal-2014-predicting
https://aclanthology.org/W14-4110
Predicting Attrition Along the Way: The UIUC Model
Discussion forum and clickstream are two primary data streams that enable mining of student behavior in a massively open online course. A student's participation in the discussion forum gives direct access to the opinions and concerns of the student. However, the low participation (5-10%) in discussion forums, prompts the modeling of user behavior based on clickstream information. Here we study a predictive model for learner attrition on a given week using information mined just from the clickstream. Features that are related to the quiz attempt/submission and those that capture interaction with various course components are found to be reasonable predictors of attrition in a given week.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
graff-etal-2019-ingeotec
https://aclanthology.org/S19-2114
INGEOTEC at SemEval-2019 Task 5 and Task 6: A Genetic Programming Approach for Text Classification
This paper describes our participation in HatEval and OffensEval challenges for English and Spanish languages. We used several approaches, B4MSA, FastText, and EvoMSA. Best results were achieved with EvoMSA, which is a multilingual and domain-independent architecture that combines the prediction from different knowledge sources to solve text classification problems.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lonngren-1988-lexika
https://aclanthology.org/W87-0115
Lexika, baserade på semantiska relationer (Lexica, based on semantic relations) [In Swedish]
[Translated from Swedish] The first question one must address if one wants to build a thesaurus, that is, a lexicon based on semantic relations, is whether some form of hierarchization should be applied and, if so, what it should look like. In principle I reject the idea that concepts as such can be ordered hierarchically; I thus do not believe in any universal or language-specific semantic primitives à la Wierzbicka (1972). Normally it is enough to establish that an associative connection exists between two concepts, e.g. tand ('tooth') and bita ('bite'), and to determine the strength and nature of this connection without postulating any directional relationship. It is, however, practical to organize a thesaurus lexicon hierarchically. This amounts to a simplification in that a multitude of relations is replaced, in principle, by a single one: dependency. I envisage here a deeper and more thoroughgoing hierarchization than the one we find in Roget's thesaurus (1962, first published in 1852) and its Swedish counterpart, Bring (1930), where a limited number of "concept classes" have been defined and each word assigned to one of them. The question is whether this is possible, that is, whether such a hierarchization accords with the inner nature of the vocabulary. To quote Kassabov (1987, 51), the point here is to avoid the common mistake of "attempt[ing] to prove the systematic character of vocabulary not by establishing the inherent principles of its inner organization, but by forcing upon the lexical items the networks of pre-formulated systems".
false
[]
[]
null
null
null
null
1988
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maccartney-manning-2007-natural
https://aclanthology.org/W07-1431
Natural Logic for Textual Inference
This paper presents the first use of a computational model of natural logic-a system of logical inference which operates over natural language-for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.
false
[]
[]
null
null
null
The authors wish to thank Marie-Catherine de Marneffe and the anonymous reviewers for their helpful comments on an earlier draft of this paper. This work was supported in part by ARDA's Advanced Question Answering for Intelligence (AQUAINT) Program.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
han-etal-2020-dyernie
https://aclanthology.org/2020.emnlp-main.593
DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion
There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.
false
[]
[]
null
null
null
The authors acknowledge support by the German Federal Ministry for Education and Research (BMBF), funding project MLWin (grant 01IS18050).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mazziotta-2019-evolution
https://aclanthology.org/W19-7709
The evolution of spatial rationales in Tesnière's stemmas
This paper investigates the evolution of the spatial rationales of Tesnière's syntactic diagrams (stemma). I show that the conventions change from his first attempts to model complete sentences up to the classical stemma he uses in his Elements of structural syntax (1959). From mostly symbolic representations of hierarchy (directed arrows from the dependent to the governor), he shifts to a more configurational one (connected dependents are placed below the governor).
false
[]
[]
null
null
null
I would like to thank Sylvain Kahane, Jean-Christophe Vanhalle and anonymous reviewers of the Depling comitee for their suggestions. I would also like to thank Jacques François and Lene Schøsler, who discussed preliminary versions of this paper.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mao-etal-2007-using
https://aclanthology.org/Y07-1031
Using Non-Local Features to Improve Named Entity Recognition Recall
Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chaudhary-etal-2021-wall
https://aclanthology.org/2021.emnlp-main.553
When is Wall a Pared and when a Muro?: Extracting Rules Governing Lexical Selection
Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language. For example, the noun "wall" has different lexical manifestations in Spanish-"pared" refers to an indoor wall while "muro" refers to an outside wall. However, this variety of lexical distinction may not be obvious to non-native learners unless the distinction is explained in such a way. In this work, we present a method for automatically identifying fine-grained lexical distinctions, and extracting concise descriptions explaining these distinctions in a human- and machine-readable format. We confirm the quality of these extracted descriptions in a language learning setup for two languages, Spanish and Greek, where we use them to teach non-native speakers when to translate a given ambiguous word into its different possible translations. Code and data are publicly released here.
false
[]
[]
null
null
null
The authors are grateful to the anonymous reviewers who took the time to provide many interesting comments that made the paper significantly better. We would also like to thank Nikolai Vogler for the original interface for data annotation, and all the learners for their participation in our study, and without whom this study would not have been possible or meaningful. This work is sponsored by the Waibel Presidential Fellowship and by the National Science Foundation under grant 1761548.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
regier-1991-learning
https://aclanthology.org/P91-1018
Learning Perceptually-Grounded Semantics in the L₀ Project
A method is presented for acquiring perceptually-grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisition project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative evidence. Solutions to these two problems are presented, and the results discussed.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kountz-etal-2008-laf
http://www.lrec-conf.org/proceedings/lrec2008/pdf/569_paper.pdf
A LAF/GrAF based Encoding Scheme for underspecified Representations of syntactic Annotations.
Data models and encoding formats for syntactically annotated text corpora need to deal with syntactic ambiguity; underspecified representations are particularly well suited for the representation of ambiguous data because they allow for high informational efficiency. We discuss the issue of being informationally efficient, and the trade-off between efficient encoding of linguistic annotations and complete documentation of linguistic analyses. The main topic of this article is a data model and an encoding scheme based on LAF/GrAF (Ide and Romary, 2006; Ide and Suderman, 2007) which provides a flexible framework for encoding underspecified representations. We show how a set of dependency structures and a set of TiGer graphs (Brants et al., 2002) representing the readings of an ambiguous sentence can be encoded, and we discuss basic issues in querying corpora which are encoded using the framework presented here.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
misawa-etal-2017-character
https://aclanthology.org/W17-4114
Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition
Recently, neural models have shown superior performance over conventional models in NER tasks. These models use CNN to extract sub-word information along with RNN to predict a tag for each word. However, these models have been tested almost entirely on English texts. It remains unclear whether they perform similarly in other languages. We worked on Japanese NER using neural models and discovered two obstacles of the state-of-the-art model. First, CNN is unsuitable for extracting Japanese sub-word information. Secondly, a model predicting a tag for each word cannot extract an entity when a part of a word composes an entity. The contributions of this work are (i) verifying the effectiveness of the state-of-the-art NER model for Japanese, (ii) proposing a neural model for predicting a tag for each character using word and character information. Experimentally obtained results demonstrate that our model outperforms the state-of-the-art neural English NER model in Japanese.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2022-modeling
https://aclanthology.org/2022.acl-long.84
Modeling Temporal-Modal Entity Graph for Procedural Multimodal Machine Comprehension
Procedural Multimodal Documents (PMDs) organize textual instructions and corresponding images step by step. Comprehending PMDs and inducing their representations for the downstream reasoning tasks is designated as Procedural MultiModal Machine Comprehension (M³C). In this study, we approach Procedural M³C at a fine-grained level (compared with existing explorations at a document or sentence level), that is, entity. With delicate consideration, we model entity both in its temporal and cross-modal relation and propose a novel Temporal-Modal Entity Graph (TMEG). Specifically, a heterogeneous graph structure is formulated to capture textual and visual entities and trace their temporal-modal evolution. In addition, a graph aggregation module is introduced to conduct graph encoding and reasoning. Comprehensive experiments across three Procedural M³C tasks are conducted on a traditional dataset RecipeQA and our new dataset CraftQA, which can better evaluate the generalization of TMEG.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vogel-etal-2013-emergence
https://aclanthology.org/N13-1127
Emergence of Gricean Maxims from Multi-Agent Decision Theory
Grice characterized communication in terms of the cooperative principle, which enjoins speakers to make only contributions that serve the evolving conversational goals. We show that the cooperative principle and the associated maxims of relevance, quality, and quantity emerge from multi-agent decision theory. We utilize the Decentralized Partially Observable Markov Decision Process (Dec-POMDP) model of multi-agent decision making which relies only on basic definitions of rationality and the ability of agents to reason about each other's beliefs in maximizing joint utility. Our model uses cognitively-inspired heuristics to simplify the otherwise intractable task of reasoning jointly about actions, the environment, and the nested beliefs of other actors. Our experiments on a cooperative language task show that reasoning about others' belief states, and the resulting emergent Gricean communicative behavior, leads to significantly improved task performance.
false
[]
[]
null
null
null
This research was supported in part by ONR grants N00014-10-1-0109 and N00014-13-1-0287 and ARO grant W911NF-07-1-0216.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reitter-etal-2006-priming
https://aclanthology.org/W06-1637
Priming Effects in Combinatory Categorial Grammar
This paper presents a corpus-based account of structural priming in human sentence processing, focusing on the role that syntactic representations play in such an account. We estimate the strength of structural priming effects from a corpus of spontaneous spoken dialogue, annotated syntactically with Combinatory Categorial Grammar (CCG) derivations. This methodology allows us to test a range of predictions that CCG makes about priming. In particular, we present evidence for priming between lexical and syntactic categories encoding partially satisfied subcategorization frames, and we show that priming effects exist both for incremental and normal-form CCG derivations.
false
[]
[]
null
null
null
We would like to thank Mark Steedman, Roger Levy, Johanna Moore and three anonymous reviewers for their comments. The authors are grateful for being supported by the
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sinha-etal-2012-new-semantic
https://aclanthology.org/W12-5114
A New Semantic Lexicon and Similarity Measure in Bangla
The Mental Lexicon (ML) refers to the organization of lexical entries of a language in the human mind. A clear knowledge of the structure of ML will help us to understand how the human brain processes language. The knowledge of semantic association among the words in ML is essential to many applications. Although there are works on the representation of lexical entries based on their semantic association in the form of a lexicon in English and other languages, such work on Bangla is in a nascent stage. In this paper, we have proposed a distinct lexical organization based on semantic association between Bangla words which can be accessed efficiently by different applications. We have developed a novel approach of measuring the semantic similarity between words and verified it against user study. Further, a GUI has been designed for easy and efficient access.
false
[]
[]
null
null
null
We are thankful to Society for Natural Language Technology Research Kolkata for helping us to develop the lexical resource. We are also thankful to those subjects who spend their time to manually evaluate our semantic similarity measure.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shazal-etal-2020-unified
https://aclanthology.org/2020.wanlp-1.15
A Unified Model for Arabizi Detection and Transliteration using Sequence-to-Sequence Models
While online Arabic is primarily written using the Arabic script, a Roman-script variety called Arabizi is often seen on social media. Although this representation captures the phonology of the language, it is not a one-to-one mapping with the Arabic script version. This issue is exacerbated by the fact that Arabizi on social media is Dialectal Arabic which does not have a standard orthography. Furthermore, Arabizi tends to include a lot of code mixing between Arabic and English (or French). To map Arabizi text to Arabic script in the context of complete utterances, previously published efforts have split Arabizi detection and Arabic script target in two separate tasks. In this paper, we present the first effort on a unified model for Arabizi detection and transliteration into a code-mixed output with consistent Arabic spelling conventions, using a sequence-to-sequence deep learning model. Our best system achieves 80.6% word accuracy and 58.7% BLEU on a blind test set.
false
[]
[]
null
null
null
This research was carried out on the High Performance Computing resources at New York University Abu Dhabi (NYUAD). We would like to thank Daniel Watson, Ossama Obeid, Nasser Zalmout and Salam Khalifa from the Computational Approaches to Modeling Language Lab at NYUAD for their help and suggestions throughout this project. We thank Owen Rambow, and the paper reviewers for helpful suggestions.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mihalcea-2004-co
https://aclanthology.org/W04-2405
Co-training and Self-training for Word Sense Disambiguation
This paper investigates the application of co-training and self-training to word sense disambiguation. Optimal and empirical parameter selection methods for co-training and self-training are investigated, with various degrees of error reduction. A new method that combines co-training with majority voting is introduced, with the effect of smoothing the bootstrapping learning curves, and improving the average performance.
false
[]
[]
null
null
null
Many thanks to Carlo Strapparava and the three anonymous reviewers for useful comments and suggestions. This work was partially supported by a National Science Foundation grant IIS-0336793.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schwartz-gomez-2009-acquiring
https://aclanthology.org/W09-1701
Acquiring Applicable Common Sense Knowledge from the Web
In this paper, a framework for acquiring common sense knowledge from the Web is presented. Common sense knowledge includes information about the world that humans use in their everyday lives. To acquire this knowledge, relationships between nouns are retrieved by using search phrases with automatically filled constituents. Through empirical analysis of the acquired nouns over WordNet, probabilities are produced for relationships between a concept and a word rather than between two words. A specific goal of our acquisition method is to acquire knowledge that can be successfully applied to NLP problems. We test the validity of the acquired knowledge by means of an application to the problem of word sense disambiguation. Results show that the knowledge can be used to improve the accuracy of a state of the art unsupervised disambiguation system.
false
[]
[]
null
null
null
This research was supported by the NASA Engineering and Safety Center under Grant/Cooperative Agreement NNX08AJ98A.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
turian-melamed-2005-constituent
https://aclanthology.org/W05-1515
Constituent Parsing by Classification
Ordinary classification techniques can drive a conceptually simple constituent parser that achieves near state-of-the-art accuracy on standard test sets. Here we present such a parser, which avoids some of the limitations of other discriminative parsers. In particular, it does not place any restrictions upon which types of features are allowed. We also present several innovations for faster training of discriminative parsers: we show how training can be parallelized, how examples can be generated prior to training without a working parser, and how independently trained sub-classifiers that have never done any parsing can be effectively combined into a working parser. Finally, we propose a new figure-of-merit for best-first parsing with confidence-rated inferences. Our implementation
false
[]
[]
null
null
null
The authors would like to thank Dan Bikel, Mike Collins, Ralph Grishman, Adam Meyers, Mehryar Mohri, Satoshi Sekine, and Wei Wang, as well as the anonymous reviewers, for their helpful comments
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false