{
"paper_id": "O03-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:01:34.099251Z"
},
"title": "Automatic Pronominal Anaphora Resolution in English Texts",
"authors": [
{
"first": "Tyne",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chiao Tung University Hsinchu",
"location": {
"country": "Taiwan"
}
},
"email": "tliang@cis.nctu.edu.tw"
},
{
"first": "Dian-Song",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chiao Tung University Hsinchu",
"location": {
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Anaphora is a common phenomenon in discourses as well as an important research issue in the applications of natural language processing. In this paper, the anaphora resolution is achieved by employing WordNet ontology and heuristic rules. The proposed system identifies both intra-sentential and inter-sentential antecedents of anaphors. Information about animacy is obtained by analyzing the hierarchical relation of nouns and verbs in the surrounding context. The identification of animacy entities and pleonastic-it usage in English discourses are employed to promote the resolution accuracy.",
"pdf_parse": {
"paper_id": "O03-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Anaphora is a common phenomenon in discourses as well as an important research issue in the applications of natural language processing. In this paper, the anaphora resolution is achieved by employing WordNet ontology and heuristic rules. The proposed system identifies both intra-sentential and inter-sentential antecedents of anaphors. Information about animacy is obtained by analyzing the hierarchical relation of nouns and verbs in the surrounding context. The identification of animacy entities and pleonastic-it usage in English discourses are employed to promote the resolution accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Anaphora resolution is vital for areas such as machine translation, summarization, question-answering system and so on. In machine translating, anaphora must be resolved for languages that mark the gender of pronouns. One main drawback with most current machine translation systems is that the translation usually does not go beyond sentence level, and so does not deal with discourse understanding successfully.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Inter-sentential anaphora resolution would thus be a great assistance to the development of machine translation systems. On the other hand, many of automatic text summarization systems apply a scoring mechanism to identify the most salient sentences. However, the task result is not always guaranteed to be coherent with each other. It could lead to errors if the selected sentence contains anaphoric expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "To improve the accuracy of extracting important sentences, it is essential to solve the problem of anaphoric references in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Pronominal anaphora is the most common phenomenon which the pronouns are substituted with previous mentioned entities. This type of anaphora can be further divided into four subclasses, namely, Nominative: {he, she, it, they} Reflexive: {himself, herself, itself, themselves} Possessive: {his, her, its, their}",
"cite_spans": [
{
"start": 226,
"end": 275,
"text": "Reflexive: {himself, herself, itself, themselves}",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Objective: {him, her, it, them} However, the usage of \"it\" can also be a non-anaphoric expression which does not refer to any items mentioned before and is called expletive or pleonastic-it [Lappin and Leass, 94] . Although pleonastic pronouns are not considered anaphoric since they do not have an antecedent to refer to, yet recognizing such occurrences is essential during anaphora resolution. In [Mitkov, 01] , the non-anaphoric pronouns are in average of 14.2% from a corpus of 28,272 words.",
"cite_spans": [
{
"start": 190,
"end": 208,
"text": "[Lappin and Leass,",
"ref_id": null
},
{
"start": 209,
"end": 212,
"text": "94]",
"ref_id": null
},
{
"start": 400,
"end": 408,
"text": "[Mitkov,",
"ref_id": null
},
{
"start": 409,
"end": 412,
"text": "01]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Definite noun phrase anaphora occurs in the situation that the antecedent is referred by a general concept entity. The general concept entity can be a semantically close phrase such as synonyms or superordinates of the antecedent [Mitkov, 99] . The word one has a number of different uses apart from counting. One of the important functions is as an anaphoric form. For example:",
"cite_spans": [
{
"start": 230,
"end": 238,
"text": "[Mitkov,",
"ref_id": null
},
{
"start": 239,
"end": 242,
"text": "99]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Intra-sentential anaphora means that the anaphor and the corresponding antecedent occur in the same sentence. Inter-sentential anaphora is where the antecedent occurs in a sentence prior to the sentence with the anaphor. In [Lappin and Leass, 94] , there are 15.9% of Inter-sentential cases and 84.1% Intra-sentential cases in their testing result. In the report of [Mitkov, 01] , there are 33.4% of Inter-sentential cases and 66.6% Intra-sentential cases.",
"cite_spans": [
{
"start": 224,
"end": 242,
"text": "[Lappin and Leass,",
"ref_id": null
},
{
"start": 243,
"end": 246,
"text": "94]",
"ref_id": null
},
{
"start": 366,
"end": 374,
"text": "[Mitkov,",
"ref_id": null
},
{
"start": 375,
"end": 378,
"text": "01]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem description",
"sec_num": "1.1"
},
{
"text": "Traditionally, anaphora resolution systems rely on syntactic, semantic or pragmatic clues to identify the antecedent of an anaphor. Hobbs' algorithm [Hobbs, 76] is the first syntax-oriented method presented in this research domain. From the result of syntactic tree, they check the number and gender agreement between antecedent candidates and a specified pronoun. In RAP (Resolution of Anaphora Procedure)",
"cite_spans": [
{
"start": 149,
"end": 156,
"text": "[Hobbs,",
"ref_id": null
},
{
"start": 157,
"end": 160,
"text": "76]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "proposed by Lappin and Leass [94] , the algorithm applies to the syntactic representations generated by McCord's Slot Grammar parser, and relies on salience measures derived from syntactic structure. It does not make use of semantic information or real world knowledge in choosing among the candidates. A modified version of RAP system is proposed by [Kennedy and Boguraev, 96] . It depends only on part-of-speech tagging with a shallow syntactic parse indicating grammatical role of NPs and containment in an adjunct or noun phrase.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Lappin and Leass [94]",
"ref_id": null
},
{
"start": 351,
"end": 373,
"text": "[Kennedy and Boguraev,",
"ref_id": null
},
{
"start": 374,
"end": 377,
"text": "96]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "In [Cardie et al., 99] , they treated coreference as a clustering task. Then a distance metric function was used to decide whether these two noun phrases are similar or not. In [Denber, 98] , an algorithm called Anaphora Matcher (AM) is implemented to handle inter-sentential anaphora over a two-sentence context. It uses information about the sentence as well as real world semantic knowledge obtained from outer sources. The lexical database system WordNet is utilized to acquire the semantic clues about the words in the input sentences. He declared that most anaphora does not refer back more than one sentence in any case. Thus a two-sentence \"window size\" is sufficient for anaphora resolution in the domain of image queries.",
"cite_spans": [
{
"start": 3,
"end": 18,
"text": "[Cardie et al.,",
"ref_id": null
},
{
"start": 19,
"end": 22,
"text": "99]",
"ref_id": null
},
{
"start": 177,
"end": 185,
"text": "[Denber,",
"ref_id": null
},
{
"start": 186,
"end": 189,
"text": "98]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "A statistical approach was introduced by [Dagan and Itai, 90] , in which the corpus information was used to disambiguate pronouns. It is an alternative solution to the syntactical dependent constraints knowledge. Their experiment makes an attempt to resolve references of the pronoun \"it\" in sentences randomly selected from the corpus. The model uses a statistical feature of the co-occurence patterns obtained from the corpus to find out the antecedent. The antecedent candidate with the highest frequency in the co-occurence patterns are selected to match the anaphor.",
"cite_spans": [
{
"start": 41,
"end": 57,
"text": "[Dagan and Itai,",
"ref_id": null
},
{
"start": 58,
"end": 61,
"text": "90]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "A knowledge-poor approach is proposed by [Mitkov, 98] , it can also be applied to different languages (English, Polish, and Arabic). The main components of this method are so-called \"antecedent indicators\" which are used for assigning scores (2 The procedure to identify antecedents is described as follows:",
"cite_spans": [
{
"start": 41,
"end": 49,
"text": "[Mitkov,",
"ref_id": null
},
{
"start": 50,
"end": 53,
"text": "98]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "1. Each text is parsed into sentences and tagged by POS tagger. An internal representation data structure with essential information (such as sentence offset, word offset, word POS, base form, etc.) is stored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "2. Base noun phrases in each sentence will be identified by NP finder module and stored in a global data structure. Then the number agreement is implemented on the head noun. Testing capitalized nouns in the name gazetteer to find out the person names. The gender feature is attached to the name if it can be found uniquely in male or female class. In this phase,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "WordNet is also used to find out possible gender clues to increase resolution performance. The gender attribute is ignored to avoid the ambiguity while the noun can be masculine or feminine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "3. Anaphors are checked sequentially from the beginning of the first sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "They are stored in the list with information of sentence offset and word offset in order. Then pleonastic-it is checked so that no further attempt for resolution is made. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
{
"text": "where can: each candidate noun phrase for the specified anaphor ana: anaphor to be resolved rule_pre i : the ith preference rule rule_con i : the ith constraint rule agreement k : denotes number agreement, gender agreement and animacy agreement 2.2 Main Components",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "1.2"
},
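The scoring scheme of equation (1) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the example rule functions, the candidate/anaphor dictionaries and the weights are hypothetical placeholders.

```python
# Sketch of the candidate-scoring scheme of equation (1):
# score(can, ana) = (sum of preference scores - sum of constraint penalties)
#                   * product of agreement checks.
# Rule functions and weights below are illustrative placeholders only.

def score(candidate, anaphor, preference_rules, constraint_rules, agreements):
    """Return a relevance score for `candidate` as antecedent of `anaphor`."""
    preference = sum(rule(candidate, anaphor) for rule in preference_rules)
    constraint = sum(rule(candidate, anaphor) for rule in constraint_rules)

    # Agreement checks (number, gender, animacy) act as hard filters:
    # any failed agreement multiplies the score by zero.
    agreement = 1
    for check in agreements:
        agreement *= 1 if check(candidate, anaphor) else 0

    return (preference - constraint) * agreement


# Hypothetical example rules, for illustration only.
def definiteness_preference(candidate, anaphor):
    return 1 if candidate.get("definite") else 0

def conjunction_constraint(candidate, anaphor):
    return 1 if candidate.get("conjoined_with_anaphor") else 0

def number_agreement(candidate, anaphor):
    return candidate.get("number") == anaphor.get("number")


if __name__ == "__main__":
    cand = {"text": "the uniform", "definite": True, "number": "sg"}
    ana = {"text": "it", "number": "sg"}
    print(score(cand, ana,
                [definiteness_preference],
                [conjunction_constraint],
                [number_agreement]))
```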
{
"text": "The TOSCA-ICLE tagger [Aarts et al., 97] was used for the lemmatization and tagging of English learner corpora. The TOSCA-ICLE tagset consists of 16 major wordclasses. These major wordclasses may further be specified by features for subclasses as well as for a variety of syntactic, semantic and morphological characteristics.",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "[Aarts et al.,",
"ref_id": null
},
{
"start": 37,
"end": 40,
"text": "97]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging",
"sec_num": "2.2.1"
},
{
"text": "According to part-of-speech result, the basic noun phrase patterns are found as follows: base NP \u2192 modifier\uff0bhead noun modifier \u2192 <article| number| present participle| past participle |adjective| noun>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP Finder",
"sec_num": "2.2.2"
},
{
"text": "In this paper, the proposed base noun phrase finder is implemented on the basis of a finite state machine (figure 2). Each state indicates a particular part-of-speech of a word. The arcs between states mean a word input from the sentence sequentially. If a word sequence can be recognized from the initial state and ends in a final state, it is accepted as a base noun phrase with no recursion, otherwise rejected. An example of base noun phrase output is illustrated in figure 3. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP Finder",
"sec_num": "2.2.2"
},
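A minimal sketch of such a finite-state recognizer over POS tags is given below; the tag names and the transition logic are simplified assumptions and may differ from the FSM of figure 2.

```python
# Minimal finite-state recognizer for base noun phrases over POS tags,
# in the spirit of figure 2. The tag set and the transition logic are
# simplified assumptions, not the paper's exact machine.

# modifier -> article | number | present participle | past participle
#             | adjective | noun ; a base NP is modifier* followed by a noun.
MODIFIER_TAGS = {"ART", "NUM", "VBG", "VBN", "ADJ", "NOUN"}
HEAD_TAGS = {"NOUN"}

def is_base_np(pos_tags):
    """Accept a tag sequence iff it is modifier* followed by a head noun."""
    state = "START"
    for tag in pos_tags:
        if state in ("START", "MOD") and tag in MODIFIER_TAGS:
            # Stay in the modifier loop; a noun here may still be a modifier
            # (e.g. "computer" in "computer science department").
            state = "MOD"
        else:
            return False
    # Accept only if the last consumed tag can serve as the head noun.
    return state == "MOD" and bool(pos_tags) and pos_tags[-1] in HEAD_TAGS

if __name__ == "__main__":
    print(is_base_np(["ART", "ADJ", "NOUN"]))   # "the young lady" -> True
    print(is_base_np(["ART", "ADJ"]))           # "the young"      -> False
```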
{
"text": "The pleonastic-it module is used to filter out those semantic empty usage conditions which is essential for pronominal anaphora resolution. A pronoun it is said to be pleonastic when it is used in a discourse where the pronoun has no antecedent. The usage of \"pleonastic-it\" can be classified into state reference and passive reference [Denber, 98] . State references are usually used for assertions about the weather or the time, and it is furtherly divided into meteorological references and temporal references.",
"cite_spans": [
{
"start": 336,
"end": 344,
"text": "[Denber,",
"ref_id": null
},
{
"start": 345,
"end": 348,
"text": "98]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pleonastic-it Module",
"sec_num": "2.2.3"
},
{
"text": "Passive references consist of modal adjectives and cognitive verbs. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pleonastic-it Module",
"sec_num": "2.2.3"
},
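A rough pattern-based detector along these lines is sketched below; the regular expressions and word lists (weather/time terms, modal adjectives, cognitive verbs) are illustrative assumptions, not the module's actual lexicon.

```python
import re

# Rough pattern-based detector for pleonastic "it", following the split into
# state references (weather/time) and passive references (modal adjectives,
# cognitive verbs). The word lists below are illustrative only.

WEATHER_TIME = r"(rain|snow|sunny|cloudy|cold|hot|late|early|noon|midnight)"
MODAL_ADJ = r"(necessary|possible|certain|likely|important|good|useful)"

PLEONASTIC_PATTERNS = [
    # state references: "it is cold", "it was raining", "it is getting late"
    re.compile(r"\bit\s+(is|was|will be|gets?)\s+\w*\s*" + WEATHER_TIME, re.I),
    # passive references with modal adjectives: "it is necessary to/that ..."
    re.compile(r"\bit\s+(is|was)\s+" + MODAL_ADJ + r"\s+(that|to)\b", re.I),
    # passive references with cognitive verbs: "it is believed that ..."
    re.compile(r"\bit\s+(is|was)\s+\w+ed\s+that\b", re.I),
    re.compile(r"\bit\s+(seems|appears|turns out)\s+that\b", re.I),
]

def is_pleonastic(clause):
    """Return True if the clause matches a pleonastic-it pattern."""
    return any(p.search(clause) for p in PLEONASTIC_PATTERNS)

if __name__ == "__main__":
    print(is_pleonastic("It is necessary to resolve anaphora first."))  # True
    print(is_pleonastic("It was raining all day."))                     # True
    print(is_pleonastic("The dog barked because it was hungry."))       # False
```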
{
"text": "Number is the quantity that distinguishes between singular (one entity) and plural (numerous entities). It makes the process of deciding candidates easier since they must be consistent in number. With the output of tagger, all the noun phrases and pronouns are annotated with number (single or plural). For a specified pronoun, we can discard those noun phrases whose numbers differ from the pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number Agreement",
"sec_num": "2.2.4"
},
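As a small illustration, the number-agreement filter might look like the sketch below; the pronoun table and the candidate representation are assumptions for the example only.

```python
# Simple number-agreement filter: candidates whose number differs from the
# pronoun's are discarded before any further scoring. The pronoun table is a
# small illustrative subset.

PRONOUN_NUMBER = {
    "he": "sg", "she": "sg", "it": "sg", "him": "sg", "her": "sg",
    "they": "pl", "them": "pl", "their": "pl", "themselves": "pl",
}

def filter_by_number(candidates, pronoun):
    """Keep only candidate NPs whose 'number' matches the pronoun's number."""
    wanted = PRONOUN_NUMBER.get(pronoun.lower())
    return [np for np in candidates if wanted is None or np["number"] == wanted]

if __name__ == "__main__":
    cands = [{"text": "the guards", "number": "pl"},
             {"text": "the uniform", "number": "sg"}]
    print(filter_by_number(cands, "it"))   # only "the uniform" survives
```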
{
"text": "Gender recognition process can deal with words that have gender features. To distinguish the gender information of a person, we collect an English first name list from (http://www.behindthename.com/) covering 5,661 male first name entries and 5,087 female ones. Besides, we employ some useful clues from WordNet result by using keyword search around the query result. These keywords can be divided into two classes\uff1a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Agreement",
"sec_num": "2.2.5"
},
{
"text": "Class_Female= {feminine, female, woman, women} Class_Male= {masculine, male, man, men}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Agreement",
"sec_num": "2.2.5"
},
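A sketch of this gender assignment is shown below, using NLTK's WordNet interface. The name sets are tiny placeholders for the first-name lists mentioned above, and searching sense definitions for the Class_Female / Class_Male keywords is one plausible reading of "keyword search around the query result", not necessarily the exact mechanism used.

```python
from nltk.corpus import wordnet as wn

# Sketch of gender assignment: first consult (placeholder) first-name lists,
# then fall back to keyword search over WordNet sense definitions using the
# Class_Female / Class_Male keyword sets.

MALE_NAMES = {"john", "david", "michael"}       # placeholder name list
FEMALE_NAMES = {"mary", "jane", "susan"}        # placeholder name list
CLASS_FEMALE = {"feminine", "female", "woman", "women"}
CLASS_MALE = {"masculine", "male", "man", "men"}

def gender_of(noun):
    """Return 'male', 'female', or None when no reliable clue is found."""
    word = noun.lower()
    if word in MALE_NAMES and word not in FEMALE_NAMES:
        return "male"
    if word in FEMALE_NAMES and word not in MALE_NAMES:
        return "female"

    # Fall back to WordNet: look for gender keywords in the sense definitions.
    female = male = 0
    for synset in wn.synsets(word, pos=wn.NOUN):
        tokens = set(synset.definition().lower().split())
        female += bool(tokens & CLASS_FEMALE)
        male += bool(tokens & CLASS_MALE)
    if female and not male:
        return "female"
    if male and not female:
        return "male"
    return None  # ambiguous or unknown: the gender attribute is ignored

if __name__ == "__main__":
    print(gender_of("Jane"))      # 'female' via the name list
    print(gender_of("actress"))   # likely 'female' via WordNet definitions
```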
{
"text": "Animacy denotes the living entities which can be referred by some gender-marked pronouns (he, she, him, her, his, hers, himself, herself) in texts. Conventionally, animate entities include people and animals. Since we can hardly obtain the property of animacy with respect to a noun phrase by its surface morphology, we make use of WordNet [Miller, 93] for the recognition of animate entities. In which a noun can only have a hypernym but many hyponyms (an opposite relation to hypernym). In the light of twenty-five unique beginners, we can observe that two of them can be taken as the representation of animacy. These two unique beginners are {animal, fauna} and {person, human being}. Since all the hyponyms inherit the properties from their hypernyms, the animacy of a noun can be achieved by making use of this hierarchical relation. However, a noun may have several senses with the change of different contexts. The output result with respect to a noun must be employed to resolve this problem. First of all, a threshold value t_noun is defined (equation 2) as the ratio of the number of senses in animacy files to the number of total senses. This threshold value can be obtained by training on a corpus and the value is selected when the accuracy rate reaches the maximum. ",
"cite_spans": [
{
"start": 340,
"end": 348,
"text": "[Miller,",
"ref_id": null
},
{
"start": 349,
"end": 352,
"text": "93]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Animacy Agreement",
"sec_num": "2.2.6"
},
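The noun-side test can be sketched with NLTK's WordNet interface as below. Mapping the two animate unique beginners to the lexicographer files noun.animal and noun.person, and using the threshold value 0.8 reported later in the text, are assumptions of this sketch.

```python
from nltk.corpus import wordnet as wn

# Sketch of the noun-side animacy test: the ratio of a noun's senses that fall
# under the animate unique beginners ({animal, fauna}, {person, human being})
# to its total number of senses, compared against the threshold t_noun. In
# WordNet's lexicographer files these correspond to noun.animal and noun.person.

ANIMATE_LEXNAMES = {"noun.animal", "noun.person"}
T_NOUN = 0.8   # threshold value reported in the text

def noun_animacy_ratio(noun):
    """Fraction of the noun's WordNet senses filed under animate categories."""
    senses = wn.synsets(noun, pos=wn.NOUN)
    if not senses:
        return 0.0
    animate = sum(1 for s in senses if s.lexname() in ANIMATE_LEXNAMES)
    return animate / len(senses)

def noun_is_animate(noun, threshold=T_NOUN):
    return noun_animacy_ratio(noun) >= threshold

if __name__ == "__main__":
    for word in ("teacher", "dog", "table"):
        print(word, round(noun_animacy_ratio(word), 2), noun_is_animate(word))
```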
{
"text": "Besides the utilization of noun hypernym relation, unique beginners of verbs are taken into consideration as well. These lexicographer files with respect to verb synsets are {cognition}, {communication}, {emotion}, and {social} (table 1). The sense of a verb, for example \"read\", varies from context to context as well. We can also define a threshold value t_verb as the ratio of the number of senses in animacy files (table 1) to the number of total senses. The training data from the Brown corpus consists of 10,134 words, 2,155 noun phrases, and 517 animacy entities. It shows that 24% of the noun phrases in the corpus refer to animacy entities whereas 76% of them refer to inanimacy ones. Threshold values can be obtained by training on the corpus and select the value when the accuracy rate (equation 4) reaches the maximum. Therefore t_noun and t_verb are achieved to be 0.8 and 0.9 respectively according to the observation in figure 4. The process of determining whether a noun phrase belong to animacy or not is described below\uff1a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Animacy Agreement",
"sec_num": "2.2.6"
},
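A verb-side counterpart, together with one plausible way of combining noun and verb evidence, is sketched below. The "either ratio exceeds its threshold" decision rule is an assumption of this sketch, since the exact combination procedure is not spelled out in this excerpt.

```python
from nltk.corpus import wordnet as wn

# Verb-side counterpart of the animacy test, plus a combined decision.
# The animate verb files follow table 1; t_noun = 0.8 and t_verb = 0.9 are the
# thresholds reported in the text. Combining noun and verb evidence with a
# simple "either exceeds its threshold" rule is an assumption.

ANIMATE_NOUN_FILES = {"noun.animal", "noun.person"}
ANIMATE_VERB_FILES = {"verb.cognition", "verb.communication",
                      "verb.emotion", "verb.social"}

def animate_sense_ratio(word, pos, animate_files):
    """Fraction of the word's senses filed under animate lexicographer files."""
    senses = wn.synsets(word, pos=pos)
    if not senses:
        return 0.0
    return sum(s.lexname() in animate_files for s in senses) / len(senses)

def np_is_animate(head_noun, governing_verb, t_noun=0.8, t_verb=0.9):
    """Mark an NP animate if its head noun or its governing verb exceeds
    the corresponding threshold."""
    return (animate_sense_ratio(head_noun, wn.NOUN, ANIMATE_NOUN_FILES) >= t_noun
            or animate_sense_ratio(governing_verb, wn.VERB, ANIMATE_VERB_FILES) >= t_verb)

if __name__ == "__main__":
    print(np_is_animate("teacher", "teach"))   # expected True
    print(np_is_animate("table", "stand"))     # expected False
```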
{
"text": "The syntactic parallelism could be an important clue while other constraints or preferences could not be employed to identify an unique unambiguous antecedent. It denotes the preference that correct antecedent has the same part-of-speech and grammatical function as the anaphor. The grammatical function of nouns can be subject, object or subject complement. The subject is the person, thing, concept or idea that is the topic of the sentence. The object is directly or indirectly affected by the nature of the verb. Words which follow verbs are not always direct or indirect objects. After a particular kind of verb, nouns remain in the subjective case. We call these subjective completions or subject complements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. Syntactic parallelism rule",
"sec_num": null
},
{
"text": "For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. Syntactic parallelism rule",
"sec_num": null
},
{
"text": "The security guard took off the uniform after getting off duty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. Syntactic parallelism rule",
"sec_num": null
},
{
"text": "He put it in the bottom of the closet. The \"He\" (the subject) in the second sentence refers to \"The security guard\" which is also the subject of the first sentence. In the same way, the \"it\" refers to \"the uniform\" which is the object of the first sentence as well. Empirical evidence also shows that anaphors usually match their antecedents in their syntactic functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. Syntactic parallelism rule",
"sec_num": null
},
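A tiny sketch of how this preference could contribute to a candidate's score is given below; the role labels and the bonus value are illustrative assumptions, not the system's actual weights.

```python
# Tiny sketch of the syntactic-parallelism preference: a candidate gets a bonus
# when its grammatical function (subject / object / subject complement) matches
# that of the anaphor. The role labels and the bonus value are illustrative.

PARALLELISM_BONUS = 1

def syntactic_parallelism(candidate_role, anaphor_role):
    """Return the preference score contributed by syntactic parallelism."""
    return PARALLELISM_BONUS if candidate_role == anaphor_role else 0

if __name__ == "__main__":
    # "The security guard took off the uniform ... He put it in the closet."
    print(syntactic_parallelism("subject", "subject"))  # "the security guard" vs "He" -> 1
    print(syntactic_parallelism("object", "subject"))   # "the uniform" vs "He" -> 0
```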
{
"text": "This preference works with identifying collocation patterns in which anaphora took place. In this way, system can automatically identify semantic roles and employ them to select the most appropriate candidate. Collocation relations specify the relation between words that tend to co-occur in the same lexical contexts. It emphasizes that noun phrases which have the same semantic role as the anaphor are favored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "II. Semantic parallelism rule",
"sec_num": null
},
{
"text": "Definiteness is a category concerned with the grammaticalization of identifiability and nonidentifiability of referents. A definite noun phrase is a noun phrase that starts with the word \"the\", for example, \"the young lady\" is a definite noun phrase. Definite noun phrases which can be identified uniquely are more likely to be the antecedent of anaphors than indefinite ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "III. Definiteness rule",
"sec_num": null
},
{
"text": "Iterated items in the context are regarded as the likely candidates for the antecedent of an anaphor. Generally, the high frequent mentioned items denote the focus of the topic as well as the most likely candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IV. Mention Frequency rule",
"sec_num": null
},
{
"text": "Recency information is employed by most of the implementations for anaphora resolution. In [Lappin, 94] the recency factor is the one with highest weight among a set of factors that influence the choice of antecedent. The recency factor states that if there are two (or more) candidate antecedents for an anaphor and all of these candidates satisfy the consistency restrictions for the anaphor (i.e. they are qualified candidates) then the most recent one (the one closest to the anaphor) is chosen. In [Mitkov et al., 01] , the average distance (in sentences) between the anaphor and the antecedent is 1.3, and the average distance in noun phrases is 4.3 NPs.",
"cite_spans": [
{
"start": 91,
"end": 99,
"text": "[Lappin,",
"ref_id": null
},
{
"start": 100,
"end": 103,
"text": "94]",
"ref_id": null
},
{
"start": 503,
"end": 518,
"text": "[Mitkov et al.,",
"ref_id": null
},
{
"start": 519,
"end": 522,
"text": "01]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "V. Sentence recency rule",
"sec_num": null
},
{
"text": "A noun phrase not contained in another noun phrase is favored as the possible candidate. This condition can be explained from the perspective of functional ranking: subject > direct object > indirect object. A noun phrase embedded in a prepositional noun phrase is usually an indirect object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VI. Non-prepositional noun phrase rule",
"sec_num": null
},
{
"text": "Conjunctions are usually used to link words, phrases and clauses. If the candidate is connected with the anaphor by a conjunction, they can hardly have anaphora relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VII. Conjunction constraint rule",
"sec_num": null
},
{
"text": "For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VII. Conjunction constraint rule",
"sec_num": null
},
{
"text": "Mr. Brown teaches in a high school. Both Jane and he enjoy watching the movies in the weekend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VII. Conjunction constraint rule",
"sec_num": null
},
{
"text": "The training and testing text are selected randomly from the Brown corpus. The Corpus is divided into 500 samples of about 2000 words each. The samples represent a wide range of styles and varieties of prose. The main categories are listed in figure 5. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Brown Corpus",
"sec_num": "2.3"
},
{
"text": "The main system window is shown in figure 6 . The text editor is used to input raw text without any annotations and shows the analyzed result. The POS tagger component takes the input text and outputs tokens, lemmas, most likely tags and the number of alternative tags. NP chunker makes use of finite state machine (FSM) to recognize strings belong to a specified regular set. After performing the selection procedure, the most appropriate antecedent is chosen to match each anaphor in the text. Figure 7 illustrates the result of anaphora pairs in each line in which sentence number and word number are attached to the end of the entities. For example, the \"it\" in the first word of the first sentence denotes a pleonastic-it and the other \"it\" in the 57 th word of the second sentence refers to \"the heart\". Figure 8 shows the original text input with antecedent annotation followed each anaphor in the text. All the annotations are highlighted to make it easy to carry out the subsequent testing purposes. ",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "figure 6",
"ref_id": "FIGREF6"
},
{
"start": 496,
"end": 504,
"text": "Figure 7",
"ref_id": "FIGREF7"
},
{
"start": 810,
"end": 818,
"text": "Figure 8",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "System functions",
"sec_num": "2.4"
},
{
"text": "The proposed system is developed in the following environment (table 2) . The evaluation task is based on random texts selected from the Brown corpus of different genres. There are 14,124 words, 2,970 noun phrases and 530 anaphors in the testing data. Two baseline models are set up to compare the effectiveness with our proposed anaphora resolution (AR) system. The first baseline model (called baseline subject) performs the number and gender agreement between candidates and anaphors, and then chooses the most recent subject as the antecedent from the candidate set. The second baseline model (called baseline recent) performs a similar procedure but it selects the most recent noun phrase as the antecedent which matches the number and gender agreement with the anaphor. The measurement can be calculated as follows: ",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 71,
"text": "(table 2)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "3."
},
{
"text": "anaphors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "3."
},
{
"text": "In the result of our experiment baseline subject (table 3) , there are 41% of antecedents can be identified by finding the most recent subject, however, only 17% of antecedents can be resolved by means of selecting the most recent noun phrase with the same gender and number agreement to anaphors. Table 3 : Success rate of baseline models. Figure 9 presents the distribution of sentence distance between antecedents and anaphors. The value 0 denotes intra-sentential anaphora and other values mean inter-sentential anaphora. Figure 10 shows the average word distance distribution with respect to each genre. The identification of pleonastic-it can be achieved to 89% accuracy (table 4) . The evaluation result of our system which applies animacy agreement and heuristic rules for resolution is listed in table 5. It also contains the results for each individual genre of testing data and the overall success rate reaches 77%. ",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 58,
"text": "(table 3)",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 3",
"ref_id": null
},
{
"start": 341,
"end": 349,
"text": "Figure 9",
"ref_id": null
},
{
"start": 526,
"end": 535,
"text": "Figure 10",
"ref_id": "FIGREF0"
},
{
"start": 677,
"end": 686,
"text": "(table 4)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "3."
},
{
"text": "In this paper, the WordNet ontology and heuristic rules are adopted to the anaphora resolution. The recognition of animacy entities and gender features in the discourses is helpful to the promotion of resolution accuracy. The proposed system is able to deal with intra-sentential and inter-sentential anaphora in English text and includes an appropriate treatment of pleonastic pronouns. From experiment results, our proposed method is comparable with prior works using fully parsing of the text. In contrast to most anaphora resolution approaches, our system benefits from the recognition of animacy occurrence and operates in fully automatic mode to achieve optimal performance. With the growing interest in natural language processing and its various applications, anaphora resolution is worth considering for further message understanding and the consistency of discourses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4."
},
{
"text": "Our future work will be directed into following studies: 1. Extending the set of anaphor being processed: This analysis aims at identifying instances (such as definite anaphor) that could be useful in anaphora resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4."
},
{
"text": "The language resource WordNet can be utilized to identify the coreference relation on the basis of synonymy/hypernym/hyponym relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolving nominal coreference:",
"sec_num": "2."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The TOSCA-ICLE Tagset: Tagging Manual",
"authors": [
{
"first": "Aarts",
"middle": [],
"last": "Jan",
"suffix": ""
},
{
"first": "Henk",
"middle": [],
"last": "Barkema",
"suffix": ""
},
{
"first": "Nelleke",
"middle": [],
"last": "Oostdijk",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aarts Jan, Henk Barkema and Nelleke Oostdijk (1997), \"The TOSCA-ICLE Tagset: Tagging Manual\", TOSCA Research Group for Corpus Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "CogNIAC: high precision coreference with limited knowledge and linguistic resources",
"authors": [
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL'97/EACL'97 workshop on Operational factors in practical, robust anaphora resolution",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baldwin, Breck (1997), \"CogNIAC: high precision coreference with limited knowledge and linguistic resources\", Proceedings of the ACL'97/EACL'97 workshop on Operational factors in practical, robust anaphora resolution, pp. 38-45.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Shallow Methods for Named Entity Coreference Resolution",
"authors": [
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
},
{
"first": "Marin",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Tablan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of TRAITEMENT AUTOMATIQUE DES LANGUES NATURELLES (TALN)",
"volume": "",
"issue": "",
"pages": "24--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bontcheva, Kalina, Marin Dimitrov, Diana Maynard and Valentin Tablan (2002), \"Shallow Methods for Named Entity Coreference Resolution\", Proceedings of TRAITEMENT AUTOMATIQUE DES LANGUES NATURELLES (TALN), pp. 24-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Noun Phrase Coreference as Clustering",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kiri",
"middle": [],
"last": "Wagstaff",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cardie, Claire and Kiri Wagstaff (1999), \"Noun Phrase Coreference as Clustering\", Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation",
"authors": [
{
"first": "Kuang-Hua",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd ACL Annual Meeting",
"volume": "",
"issue": "",
"pages": "234--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Kuang-hua and Hsin-Hsi Chen (1994), \"Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation\", Proceedings of the 32nd ACL Annual Meeting, 1994, pp. 234-241.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic processing of large corpora for the resolution of anaphora references",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Itai",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING'90)",
"volume": "III",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, Ido and Alon Itai (1990), \"Automatic processing of large corpora for the resolution of anaphora references\", Proceedings of the 13th International Conference on Computational Linguistics (COLING'90), Vol. III, 1-3, Helsinki, Finland.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic resolution of anaphora in English",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Denber",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denber, Michel (1998), \"Automatic resolution of anaphora in English\", Technical report, Eastman Kodak Co.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving anaphora resolution by identifying animate entities in texts",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of DAARC-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, Richard and Constantin Orasan (2000), \"Improving anaphora resolution by identifying animate entities in texts\", In Proceedings of DAARC-2000.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Statistical Approach to Anaphora Resolution",
"authors": [
{
"first": "Niyu",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Sixth Workshop on Very Large Corpora (COLING-ACL98)",
"volume": "",
"issue": "",
"pages": "161--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ge, Niyu, John Hale and Eugene Charniak (1998), \"A Statistical Approach to Anaphora Resolution\", Proceedings of the Sixth Workshop on Very Large Corpora (COLING-ACL98), pp.161-170.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Anaphora for everyone: Pronominal anaphora resolution without a parser",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Branimir",
"middle": [],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "113--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kennedy, Christopher and Branimir Boguraev (1996), \"Anaphora for everyone: Pronominal anaphora resolution without a parser\", Proceedings of the 16 th International Conference on Computational Linguistics, pp.113-118.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Algorithm for Pronominal Anaphora Resolution",
"authors": [
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "535--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lappin, Shalom and Herbert Leass (1994), \"An Algorithm for Pronominal Anaphora Resolution\", Computational Linguistics, Volume 20, Part 4, pp. 535-561.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nouns in WordNet: A Lexical Inheritance System",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "245--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, George (1993), \"Nouns in WordNet: A Lexical Inheritance System\", Journal of Lexicography, pp. 245-264.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Robust pronoun resolution with limited knowledge",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING'98)/ACL'98 Conference",
"volume": "",
"issue": "",
"pages": "869--875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitkov, Ruslan (1998), \"Robust pronoun resolution with limited knowledge\", Proceedings of the 18th International Conference on Computational Linguistics (COLING'98)/ACL'98 Conference Montreal, Canada. pp. 869-875.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Anaphora Resolution: The State of the Art",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitkov, Ruslan (1999), \"Anaphora Resolution: The State of the Art\", Working paper (Based on the COLING'98/ACL'98 tutorial on anaphora resolution)",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluation tool for rule-based anaphora resolution methods",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Catalina",
"middle": [],
"last": "Barbu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ACL'01",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitkov, Ruslan and Catalina Barbu (2001), \"Evaluation tool for rule-based anaphora resolution methods\", Proceedings of ACL'01, Toulouse, 2001.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A new, fully automatic version of Mitkov's knowledge-poor pronoun resolution method",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of CICLing-2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitkov, Ruslan, Richard Evans and Constantin Orasan (2002), \"A new, fully automatic version of Mitkov's knowledge-poor pronoun resolution method\", In Proceedings of CICLing-2000, Mexico City, Mexico.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Anaphora Resolution in Chinese Financial News for Information Extraction",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunfa",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 4th World Congress on Intelligent Control and Automation",
"volume": "",
"issue": "",
"pages": "2422--2426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Ning, Chunfa Yuan, K.F. Wang and Wenjie Li (2002), \"Anaphora Resolution in Chinese Financial News for Information Extraction\", Proceedings of 4th World Congress on Intelligent Control and Automation, June 2002, Shanghai, pp.2422-2426.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Architecture overview.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Finite state machine for a noun phrase.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "An Example output of base noun phrase.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Thresholds of Animacy Entities.",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "Categories of the Brown corpus.",
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"num": null,
"text": "The main system window.",
"type_str": "figure",
"uris": null
},
"FIGREF7": {
"num": null,
"text": "Anaphora pairs.",
"type_str": "figure",
"uris": null
},
"FIGREF8": {
"num": null,
"text": "Anaphor with antecedent annotation.",
"type_str": "figure",
"uris": null
},
"FIGREF9": {
"num": null,
"text": "Referential word distance distribution.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>1,</td></tr><tr><td>0, -1) against each candidate noun phrases. They play a decisive role in tracking down</td></tr><tr><td>the antecedent from a set of possible candidates. CogNIAC (COGnition eNIAC)</td></tr><tr><td>[Baldwin, 97] is a system developed at the University of Pennsylvania to resolve</td></tr><tr><td>pronouns with limited knowledge and linguistic resources. It presents a high precision</td></tr><tr><td>pronoun resolution system that is capable of greater than 90% precision with 60%</td></tr><tr><td>recall for some pronouns. [Mitkov, 02] presented a new, advanced and completely</td></tr><tr><td>revamped version of Mitkov's knowledge-poor approach to pronoun resolution. In</td></tr><tr><td>contrast to most anaphora resolution approaches, the system MARS, operates in fully</td></tr><tr><td>automatic mode. The three new indicators that were included in MARS are Boost</td></tr><tr><td>Pronoun, Syntactic Parallelism and Frequent Candidates.</td></tr><tr><td>In [Mitkov, 01], they proposed an evaluation environment for comparing</td></tr><tr><td>anaphora resolution algorithms which is illustrated by presenting the results of the</td></tr><tr><td>comparative evaluation on</td></tr></table>",
"text": "the basis of several evaluation measures. Their testing corpus contains 28,272 words, with 19,305 noun phrases and 422 pronouns, out of which 362 are anaphoric expressions. The overall success rate calculated for the 422",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td colspan=\"3\">Graphic User Interface window size are \u220f \u2211 k i j \u2211 \u00d7 \u2212 = j i con rule pre rule ana can score ) _ _ ( ) , (</td><td>agreement</td><td>k</td></tr><tr><td/><td>Text Input</td><td/></tr><tr><td/><td>POS Tagging</td><td>Preference</td></tr><tr><td>Pleonastic It</td><td>NP Finder</td><td>Constraint</td></tr><tr><td/><td>Candidate Set</td><td>Animacy Agreement</td><td>WordNet</td></tr><tr><td/><td>Number Agreement</td><td>Gender Agreement</td><td>Name Data</td></tr></table>",
"text": "collected as antecedent candidates. Then the candidate set is furtherly filtered by the gender and animacy agreement.5. The remaining candidates are evaluated by heuristic rules afterward. These rules can be classified into preference rules and constraint rules. A scoring equation(equation 1)is made to evaluate how likely a candidate will be selected as the antecedent.",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>Unique beginners</td><td>Example of verb</td></tr><tr><td>{cognition}</td><td>Think, analyze, judge \u2026</td></tr><tr><td>{communication}</td><td>Tell, ask, teach \u2026</td></tr><tr><td>{emotion}</td><td>Feel, love, fear \u2026</td></tr><tr><td>{social}</td><td>Participate, make, establish \u2026</td></tr></table>",
"text": "Example of animate verb.",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Operating System</td><td>Microsoft Windows 2000 Advanced Server</td></tr><tr><td>Main Processor</td><td>AMD Athlon K7 866MHZ</td></tr><tr><td>Main Memory</td><td>256 MB SDRAM</td></tr><tr><td>Graphic Card</td><td>NVIDIA Geforce2 Mx 32M</td></tr><tr><td>Programming language</td><td>Borland C++ Builder 5.0</td></tr></table>",
"text": "System environment.",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table><tr><td/><td>Number of</td><td>Anaphoric</td><td>Number of</td><td>Ratio of</td><td>Accuracy of</td></tr><tr><td/><td>Anaphora</td><td>expression</td><td>Pleonastic-it</td><td>Pleonastic-it</td><td>identification</td></tr><tr><td>Total</td><td>530</td><td>483</td><td>47</td><td>9%</td><td>89%</td></tr></table>",
"text": "Pleonastic-it identification.",
"type_str": "table",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table><tr><td>Genre</td><td colspan=\"6\">Words Lines NPs Anims Anaphors Success Rate</td></tr><tr><td>Reportage</td><td>1972</td><td>90</td><td>488</td><td>110</td><td>52</td><td>80%</td></tr><tr><td>Editorial</td><td>1967</td><td>95</td><td>458</td><td>54</td><td>54</td><td>80%</td></tr><tr><td>Reviews</td><td>2104</td><td>113</td><td>480</td><td>121</td><td>92</td><td>79%</td></tr><tr><td>Religion</td><td>2002</td><td>80</td><td>395</td><td>75</td><td>68</td><td>76%</td></tr><tr><td>Skills</td><td>2027</td><td>89</td><td>391</td><td>67</td><td>89</td><td>78%</td></tr><tr><td>Lore</td><td>2018</td><td>75</td><td>434</td><td>51</td><td>69</td><td>69%</td></tr><tr><td>Fiction</td><td>2034</td><td>120</td><td>324</td><td>53</td><td>106</td><td>79%</td></tr><tr><td>Total</td><td>14124</td><td>662</td><td>2970</td><td>531</td><td>530</td><td>77%</td></tr></table>",
"text": "Success rate of AR system.",
"type_str": "table",
"num": null
}
}
}
}