{ "paper_id": "O04-2002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:51.744833Z" }, "title": "Automatic Pronominal Anaphora Resolution in English Texts", "authors": [ { "first": "Tyne", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "tliang@cis.nctu.edu.tw" }, { "first": "Dian-Song", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Anaphora is a common phenomenon in discourses as well as an important research issue in the applications of natural language processing. In this paper, anaphora resolution is achieved by employing WordNet ontology and heuristic rules. The proposed system identifies both intra-sentential and inter-sentential antecedents of anaphors. Information about animacy is obtained by analyzing the hierarchical relations of nouns and verbs in the surrounding context. The identification of animacy entities and pleonastic-it usage in English discourses are employed to promote resolution accuracy.", "pdf_parse": { "paper_id": "O04-2002", "_pdf_hash": "", "abstract": [ { "text": "Anaphora is a common phenomenon in discourses as well as an important research issue in the applications of natural language processing. In this paper, anaphora resolution is achieved by employing WordNet ontology and heuristic rules. The proposed system identifies both intra-sentential and inter-sentential antecedents of anaphors. Information about animacy is obtained by analyzing the hierarchical relations of nouns and verbs in the surrounding context. The identification of animacy entities and pleonastic-it usage in English discourses are employed to promote resolution accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Traditionally, anaphora resolution systems have relied on syntactic, semantic or pragmatic clues to identify the antecedent of an anaphor. Our proposed method makes use of WordNet ontology to identify animate entities as well as essential gender information. In the animacy agreement module, the property is identified by the hypernym relation between entities and their unique beginners defined in WordNet. In addition, the verb of the entity is also an important clue used to reduce the uncertainty. An experiment was conducted using a balanced corpus to resolve the pronominal anaphora phenomenon. The methods proposed in [Lappin and Leass, 94] and [Mitkov, 01] focus on the corpora with only inanimate pronouns such as \"it\" or \"its\". Thus the results of intra-sentential and inter-sentential anaphora distribution are different. In an experiment using Brown corpus, we found that the distribution proportion of intra-sentential anaphora is about 60%. Seven heuristic rules are applied in our system; five of them are preference rules, and two are constraint rules. They are derived from syntactic, semantic, pragmatic conventions and from the analysis of training data. 
A relative measurement indicates that about 30% of the errors can be eliminated by applying heuristic module.", "cite_spans": [ { "start": 625, "end": 643, "text": "[Lappin and Leass,", "ref_id": null }, { "start": 644, "end": 647, "text": "94]", "ref_id": null }, { "start": 652, "end": 660, "text": "[Mitkov,", "ref_id": null }, { "start": 661, "end": 664, "text": "01]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Anaphora resolution is vital in applications such as machine translation, summarization, question-answering systems and so on. In machine translation, anaphora must be resolved in the case of languages that mark the gender of pronouns. One main drawback with most current machine translation systems is that the translation produced usually does not go beyond the sentence level and, thus, does not successfully deal with discourse understanding. Inter-sentential anaphora resolution would, thus, be of great assistance in the development of machine translation systems. On the other hand, many automatic text summarization systems apply a scoring mechanism to identify the most salient sentences. However, the task results are not always guaranteed to be coherent with each other. This could lead to errors if a selected sentence contained anaphoric expressions. To improve accuracy in extracting important sentences, it is essential to solve the problem of anaphoric references beforehand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "Pronominal anaphora, where pronouns are substituted by previously mentioned entities, is a common phenomenon. This type of anaphora can be further divided into four subclasses, namely:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "nominative: {he, she, it, they}; reflexive: {himself, herself, itself, themselves}; possessive: {his, her, its, their}; objective: {him, her, it, them}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "However, \"it\" can also be a non-anaphoric expression which does not refer to any previously mentioned item, in which case it is called an expletive or the pleonastic-it [Lappin and Leass, 94] . Although pleonastic pronouns are not considered anaphoric since they do not have antecedents to refer to, recognizing such occurrences is, nevertheless, essential during anaphora resolution. In [Mitkov, 01] , non-anaphoric pronouns were found to constitute 14.2% of a corpus of 28,272 words.", "cite_spans": [ { "start": 169, "end": 187, "text": "[Lappin and Leass,", "ref_id": null }, { "start": 188, "end": 191, "text": "94]", "ref_id": null }, { "start": 388, "end": 396, "text": "[Mitkov,", "ref_id": null }, { "start": 397, "end": 400, "text": "01]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "Definite noun phrase anaphora occurs where the antecedent is referred by a general concept entity. The general concept entity can be a semantically close phrase, such as a synonym or super-ordinates of the antecedent [Mitkov, 99] . The word one has a number of different usages apart from counting. One of its important functions is as an anaphoric form. 
For example:", "cite_spans": [ { "start": 217, "end": 225, "text": "[Mitkov,", "ref_id": null }, { "start": 226, "end": 229, "text": "99]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "Intra-sentential anaphora means that the anaphor and the corresponding antecedent occur in the same sentence. Inter-sentential anaphora means the antecedent occurs in a sentence prior to the sentence with the anaphor. In [Lappin and Leass, 94] , there were 15.9% inter-sentential cases and 84.1% intra-sentential cases in the testing results. In [Mitkov, 01] , there were 33.4% inter-sentential cases and 66.6% intra-sentential cases.", "cite_spans": [ { "start": 221, "end": 239, "text": "[Lappin and Leass,", "ref_id": null }, { "start": 240, "end": 243, "text": "94]", "ref_id": null }, { "start": 346, "end": 354, "text": "[Mitkov,", "ref_id": null }, { "start": 355, "end": 358, "text": "01]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "Traditionally, anaphora resolution systems have relied on syntactic, semantic or pragmatic clues to identify the antecedent of an anaphor. Hobbs' algorithm [Hobbs, 76] was the first syntax-oriented method presented in this research domain. From the result of a syntactic tree, they checked the number and gender agreement between antecedent candidates and a specified pronoun. In RAP (Resolution of Anaphora Procedure) proposed by Lappin and Leass [94] , an algorithm is applied to the syntactic representations generated by McCord's Slot Grammar parser, and salience measures are derived from the syntactic structure. It does not make use of semantic information or real world knowledge in choosing among the candidates. A modified version of RAP system was proposed by [Kennedy and Boguraev, 96] . It employed only part-of-speech tagging with a shallow syntactic parse indicating the grammatical roles of NPs and their containment in adjuncts or noun phrases. Cardie et al. [99] treated coreferencing as a clustering task. Then a distance metric function was used to decide whether two noun phrases were similar or not. In [Denber, 98] , an algorithm called Anaphora Matcher (AM) was implemented to handle inter-sentential anaphora in a two-sentence context. This method uses information about the sentence as well as real world semantic knowledge obtained from other sources. The lexical database system WordNet is utilized to acquire semantic clues about the words in the input sentences. It is noted that anaphora do not refer back more than one sentence in most cases. Thus, a two-sentence \"window size\" is sufficient for anaphora resolution in the domain of image queries.", "cite_spans": [ { "start": 156, "end": 163, "text": "[Hobbs,", "ref_id": null }, { "start": 164, "end": 167, "text": "76]", "ref_id": null }, { "start": 431, "end": 452, "text": "Lappin and Leass [94]", "ref_id": null }, { "start": 771, "end": 793, "text": "[Kennedy and Boguraev,", "ref_id": null }, { "start": 794, "end": 797, "text": "96]", "ref_id": null }, { "start": 962, "end": 980, "text": "Cardie et al. [99]", "ref_id": null }, { "start": 1125, "end": 1133, "text": "[Denber,", "ref_id": null }, { "start": 1134, "end": 1137, "text": "98]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "A statistical approach to disambiguate pronoun \"it\" in sentences was introduced in [Dagan and Itai, 90] . 
The disambiguation is based on the co-occurring patterns obtained from a corpus to find the antecedent. The antecedent candidate with the highest frequency in the co-occurring patterns is selected as a match for the anaphor.", "cite_spans": [ { "start": 83, "end": 99, "text": "[Dagan and Itai,", "ref_id": null }, { "start": 100, "end": 103, "text": "90]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "A knowledge-poor approach was proposed in [Mitkov, 98] ; it can be applied to different languages (English, Polish, and Arabic). The main components of this method are the so-called \"antecedent indicators\" which are used to assign a score (2, 1, 0, -1) for each candidate noun phrase. The scores play a decisive role in tracking down the antecedent from a set of possible candidates. CogNIAC (COGnition eNIAC) [Baldwin, 97] is a system developed at the University of Pennsylvania to resolve pronouns using limited knowledge and linguistic resources. It is a high precision pronoun resolution system that is capable of achieving more than 90% precision with 60% recall for some pronouns. Mitkov [02] presented a new, advanced and completely revamped version of his own knowledge-poor approach to pronoun resolution. In contrast to most anaphora resolution approaches, the system called MARS operates in the fully automatic mode. Three new indicators included in MARS are Boost Pronoun, Syntactic Parallelism and Frequent Candidates.", "cite_spans": [ { "start": 42, "end": 50, "text": "[Mitkov,", "ref_id": null }, { "start": 51, "end": 54, "text": "98]", "ref_id": null }, { "start": 410, "end": 419, "text": "[Baldwin,", "ref_id": null }, { "start": 420, "end": 423, "text": "97]", "ref_id": null }, { "start": 687, "end": 698, "text": "Mitkov [02]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "In [Mitkov, 01] , the authors proposed an evaluation environment for comparing anaphora resolution algorithms. Performances are illustrated by presenting the results of a comparative evaluation conducted on the basis of several evaluation measures. Their testing corpus contained 28,272 words, with 19,305 noun phrases and 422 pronouns, of which 362 were anaphoric expressions. The overall success rate calculated for the 422 pronouns found in the texts was 56.9% for Mitkov's method, 49.72% for Cogniac and 61.6% for Kennedy and Boguraev's method. The procedure used to identify antecedents is described as follows:", "cite_spans": [ { "start": 3, "end": 11, "text": "[Mitkov,", "ref_id": null }, { "start": 12, "end": 15, "text": "01]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem description", "sec_num": "1.1" }, { "text": "Candidate Set Animacy Agreement Number Agreement Gender Agreement Name Data WordNet 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed System Overview", "sec_num": "2.1" }, { "text": ". Each text is parsed into sentences and tagged by POS tagger. An internal representation data structure with essential information (such as sentence offset, word offset, word POS, base form, etc.) is stored. 2. Base noun phrases in each sentence are identified by NP finder module and stored in a global data structure. Then the number agreement is applied to the head noun. Capitalized nouns in the name gazetteer are tested to find personal names. 
A name will be tagged with the gender feature if it can be found uniquely in male or female class defined in gender agreement module. In this phase, WordNet is also used to find possible gender clues for improving resolution performance. The gender attribute is ignored to avoid ambiguity when the person name can be masculine or feminine. 3. Anaphors are checked sequentially from the beginning of the first sentence. They are stored in a list with sentence offset and word offset information. Then pleonastic-it is checked so that no further attempts at resolution are made. 4. The remaining noun phrases preceding the anaphor within a predefined window size are collected as antecedent candidates. Then the candidate set is further filtered by means of gender and animacy agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed System Overview", "sec_num": "2.1" }, { "text": "The remaining candidates are then evaluated by means of heuristic rules. These rules can be classified as preference rules or constraint rules. A scoring equation equation 1is used to evaluate how likely it is that a candidate will be selected as the antecedent. The scoring equation calculates the accumulated score of each possible candidate. The parameter agreement k denotes number agreement, gender agreement and animacy agreement output. If one of these three outputs indicates disagreement, the score will be set to zero. The parameter value enclosed in parentheses is the accumulated number of rules that fit our predefined heuristic rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) \u220f \u2211 \u2211 \u00d7 \u239f \u239f \u23a0 \u239e \u239c \u239c \u239d \u239b \u2212 = k k j j i i agreement con rule pre rule ana can score , _ _ ,", "eq_num": "(1)" } ], "section": "5.", "sec_num": null }, { "text": "where can: each candidate noun phrase for the specified anaphor; ana: anaphor to be resolved; rule_pre i : the ith preference rule; rule_con i : the ith constraint rule; agreement k : denotes number agreement, gender agreement and animacy agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "The TOSCA-ICLE tagger [Aarts et al., 97] has been used to lemmatize and tag English learner corpora. The TOSCA-ICLE tag set consists of 16 major word classes. These major word classes may be further specified by means of features of subclasses as well as a variety of syntactic, semantic and morphological characteristics.", "cite_spans": [ { "start": 22, "end": 36, "text": "[Aarts et al.,", "ref_id": null }, { "start": 37, "end": 40, "text": "97]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "2.2.1" }, { "text": "According to the part-of-speech result, the basic noun phrase patterns are found to be as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP Finder", "sec_num": "2.2.2" }, { "text": "base NP \u2192 modifier\uff0bhead noun modifier \u2192 At the beginning, our system identifies base noun phrases that contain no other smaller noun phrases within them. For example, the chief executive officer of a financial company is divided into the chief executive officer and a financial company for the convenience of judging whether the noun phrase is a prepositional noun phrase or not. 
This could be of help in selecting a correct candidate for a specific anaphor. Once the final candidate is selected, the entire modifier is combined together again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NP Finder", "sec_num": "2.2.2" }, { "text": "The proposed base noun phrase finder is implemented based on a finite state machine ( Figure 2 ). Each state indicates a particular part-of-speech of a word. The arcs between states indicate a word input from the first word of the sentence. If a word sequence can be recognized from the initial state and ends in a final state, it is accepted as a base noun phrase with no recursion; otherwise, it is rejected. An example of base noun phrase output is illustrated in Figure 3 . ", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 467, "end": 475, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "NP Finder", "sec_num": "2.2.2" }, { "text": "The pleonastic-it module is used to filter out those semantic empty usage conditions which are essential for pronominal anaphora resolution. A word \"it\" is said to be pleonastic when it is used in a discourse where the word does not refer to any antecedent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "References of \"pleonastic-it\" can be classified as state references or passive references [Denber, 98] . State references are usually used for assertions about the weather or the time, and this category is further divided into meteorological references and temporal references.", "cite_spans": [ { "start": 90, "end": 98, "text": "[Denber,", "ref_id": null }, { "start": 99, "end": 102, "text": "98]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "Passive references consist of modal adjectives and cognitive verbs. Modal adjectives (Modaladj) like advisable, convenient, desirable, difficult, easy, economical, certain, etc. are specified. The set of modal adjectives is extended by adding their comparative and superlative forms. Cognitive verbs (Cogv), on the other hand, are words like anticipate, assume, believe, expect, know, recommend, think, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "Most instances of \"pleonastic-it\" can be described by the following patterns:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "1. It is Modaladj that S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "2. It is Modaladj (for NP) to VP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "3. It is Cogv-ed that S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pleonastic-it Module", "sec_num": "2.2.3" }, { "text": "It seems/appears/means/follows (that) S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": ". NP makes/finds it Modaladj (for NP) to VP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "6. It is time to VP. 7. 
It is thanks to NP that S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "The quantity of a countable noun can be singular (one entity) or plural (numerous entities). It makes the process of deciding on candidates easier since they must be consistent in number. With the output of the specific tagger, all the noun phrases and pronouns are annotated with number (single or plural). For a specified pronoun, we can discard those noun phrases that differ in number from the pronoun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Number Agreement", "sec_num": "2.2.4" }, { "text": "The gender recognition process can deal with words that have gender features. To distinguish the gender information of a person, we use an English first name list collected from (http://www.behindthename.com/) covering 5,661 male first name entries and 5,087 female ones. In addition, we employ some useful clues from WordNet results by conducting keyword search around the query result. These keywords can be divided into two classes\uff1a Class_Female= {feminine, female, woman, women} Class_Male= {masculine, male, man, men}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gender Agreement", "sec_num": "2.2.5" }, { "text": "Animacy denotes the living entities which can be referred to by some gender-marked pronouns (he, she, him, her, his, hers, himself, herself) in texts. Conventionally, animate entities include people and animals. Since it is hard to obtain the property of animacy with respect to a noun phrase by its surface morphology, we use WordNet [Miller, 93] to recognize animate entities in which a noun can only have one hypernym but can have many hyponyms. With twenty-five unique beginners, we observe that two of them can be taken as representations of animacy. These two unique beginners are {animal, fauna} and {person, human being}. Since all the hyponyms inherit properties from their hypernyms, the animacy of a noun can be determined by making use of this hierarchical relation. However, a noun may have several senses, depending on the context. The output result with respect to a noun must be employed to resolve this problem. First of all, a threshold value t_noun is defined (equation 2) as the ratio of the number of senses in animacy files to the number of total senses. This threshold value can be obtained by training a corpus, and the value is selected when the accuracy rate reaches its maximum: Besides the noun hypernym relation, unique beginners of verbs are also taken into consideration. These lexicographical files with respect to verb synsets are {cognition}, {communication}, {emotion}, and {social} ( Table 1) . The sense of a verb, for example \"read,\" varies from context to context as well. We can also define a threshold value t_verb as the ratio of the number of senses in animacy files (Table 1) to the number of total senses. The training data that we obtained from the Brown corpus consisted of 10,134 words, 2,155 noun phrases, and 517 animacy entities. We found that 24% of the noun phrases in the corpus referred to animate entities, whereas 76% of them referred to inanimate ones. We utilized the ratio of senses from the WordNet output to decide whether the entity was an animate entity or not. Therefore, the ratio of senses in the noun and its verb is obtained in the training phase to achieve the highest possible accuracy. 
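To make the threshold test concrete, the sketch below shows one way the noun and verb sense ratios could be computed and compared against t_noun and t_verb. It is an illustration rather than the authors' implementation: it assumes NLTK's WordNet interface, the helper names are invented for the sketch, and the way the two ratios are combined (accepting an entity when either ratio clears its threshold) is an assumption.

    # Sketch of the animacy test described above.  Assumes NLTK's WordNet
    # interface; the either-threshold combination is an assumption of this sketch.
    from nltk.corpus import wordnet as wn

    ANIMATE_NOUN_FILES = {'noun.person', 'noun.animal'}         # {person}, {animal, fauna}
    ANIMATE_VERB_FILES = {'verb.cognition', 'verb.communication',
                          'verb.emotion', 'verb.social'}        # Table 1

    def animacy_ratio(word, pos, animate_files):
        # Fraction of the word's WordNet senses that fall in the animacy files.
        senses = wn.synsets(word, pos=pos)
        if not senses:
            return 0.0
        return sum(s.lexname() in animate_files for s in senses) / len(senses)

    def is_animate(head_noun, governing_verb=None, t_noun=0.8, t_verb=0.9):
        # Default thresholds are the trained values reported below.
        if animacy_ratio(head_noun, wn.NOUN, ANIMATE_NOUN_FILES) >= t_noun:
            return True
        if governing_verb is not None:
            return animacy_ratio(governing_verb, wn.VERB, ANIMATE_VERB_FILES) >= t_verb
        return False
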
Afterwards, the testing phase makes use of these two threshold values to decide on the animate feature. Threshold values can be obtained by training on the corpus and selecting the value when the accuracy rate (equation 4) reaches its maximum. Therefore, t_noun and t_verb were found to be 0.8 and 0.9, respectively, according to the distribution in Figure 4 . The process of determining whether a noun phrase is animate or inanimate is described below\uff1a", "cite_spans": [ { "start": 335, "end": 343, "text": "[Miller,", "ref_id": null }, { "start": 344, "end": 347, "text": "93]", "ref_id": null } ], "ref_spans": [ { "start": 1420, "end": 1428, "text": "Table 1)", "ref_id": "TABREF1" }, { "start": 1610, "end": 1619, "text": "(Table 1)", "ref_id": "TABREF1" }, { "start": 2508, "end": 2516, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Animacy Agreement", "sec_num": "2.2.6" }, { "text": "The syntactic parallelism of an anaphor and an antecedent could be an important clue when other constraints or preferences can not be employed to identify a unique unambiguous antecedent. The rule reflects the preference that the correct antecedent has the same part-of-speech and grammatical function as the anaphor. Nouns can function grammatically as subjects, objects or subject complements. The subject is the person, thing, concept or idea that is the topic of the sentence. The object is directly or indirectly affected by the nature of the verb. Words which follow verbs are not always direct or indirect objects. After a particular kind of verb, such as verb \"be\", nouns remain in the subjective case. We call these subjective completions or subject complements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I. Syntactic parallelism rule", "sec_num": null }, { "text": "The security guard took off the uniform after getting off duty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example:", "sec_num": null }, { "text": "He put it in the bottom of the closet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example:", "sec_num": null }, { "text": "\"He\" (the subject) in the second sentence refers to \"The security guard,\" which is also the subject of the first sentence. In the same way, \"it\" refers to \"the uniform,\" which is the object of the first sentence. Empirical evidence also shows that anaphors usually match their antecedents in terms of their syntactic functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example:", "sec_num": null }, { "text": "This preference works by identifying collocation patterns in which anaphora appear. In this way, the system can automatically identify semantic roles and employ them to select the most appropriate candidate. Collocation relations specify the relations between words that tend to co-occur in the same lexical contexts. The rule emphasizes that those noun phrases with the same semantic roles as the anaphor are preferred answer candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II. Semantic parallelism rule", "sec_num": null }, { "text": "Definiteness is a category concerned with the grammaticalization of the identifiability and non-identifiability of referents. A definite noun phrase is a noun phrase that starts with the word \"the\"; for example, \"the young lady\" is a definite noun phrase. 
Definite noun phrases which can be identified uniquely are more likely to be antecedents of anaphors than indefinite noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "III. Definiteness rule", "sec_num": null }, { "text": "Recurring items in a context are regarded as likely candidates for the antecedent of an anaphor. Generally, high frequency items indicate the topic as well as the most likely candidate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IV. Mention Frequency rule", "sec_num": null }, { "text": "Recency information is employed in most of the implementations of anaphora resolution. In [Lappin, 94] , the recency factor is the one with the highest weight among a set of factors that influence the choice of antecedent. The recency factor states that if there are two (or more) candidate antecedents for an anaphor, and that all of these candidates satisfy the consistency restrictions for the anaphor (i.e., they are qualified candidates), then the most recent one (the one closest to the anaphor) is chosen. In [Mitkov et al., 01] , the average distance (within a sentence) between the anaphor and the antecedent was found to be 1.3, and the average distance for noun phrases was found to be 4.3 NPs.", "cite_spans": [ { "start": 90, "end": 98, "text": "[Lappin,", "ref_id": null }, { "start": 99, "end": 102, "text": "94]", "ref_id": null }, { "start": 516, "end": 531, "text": "[Mitkov et al.,", "ref_id": null }, { "start": 532, "end": 535, "text": "01]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "V. Sentence recency rule", "sec_num": null }, { "text": "A noun phrase not contained in another noun phrase is considered a possible candidate. This condition can be explained from the perspective of functional ranking: subject > direct object > indirect object. A noun phrase embedded in a prepositional noun phrase is usually an indirect object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VI. Non-prepositional noun phrase rule", "sec_num": null }, { "text": "Conjunctions are usually used to link words, phrases and clauses. If a candidate is connected with an anaphor by a conjunction, the anaphora relation is hard to be constructed between these two entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VII. Conjunction constraint rule", "sec_num": null }, { "text": "For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VII. Conjunction constraint rule", "sec_num": null }, { "text": "Mr. Brown teaches in a high school. Both Jane and he enjoy watching movies on weekends.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VII. Conjunction constraint rule", "sec_num": null }, { "text": "The training and testing texts were selected randomly from the Brown corpus. The Corpus is divided into 500 samples of about 2000 words each. The samples represent a wide range of styles and varieties of prose. The main categories are listed in Figure 5 . ", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 253, "text": "Figure 5", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "The Brown Corpus", "sec_num": "2.3" }, { "text": "The main system window is shown in Figure 6 . The text editor is used to input raw text without any annotations and to show the analysis result. The POS tagger component takes the input text and outputs tokens, lemmas, most likely tags and the number of alternative tags. 
The NP chunker makes use of a finite state machine (FSM) to recognize strings which belong to a specified regular set. After the selection procedure is performed, the most appropriate antecedent is chosen to match each anaphor in the text. Figure 7 illustrates the result of anaphora pairs in each line, in which sentence number and word number are attached at the end of each entity. For example, \"it\" as the first word of the first sentence denotes a pleonastic-it, and the other \"it,\" the 57th word of the second sentence refers to \"the heart.\" Figure 8 shows the original text input with antecedent annotation following each anaphor in the text. All the annotations are highlighted to facilitate subsequent testing. ", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 6", "ref_id": "FIGREF7" }, { "start": 512, "end": 520, "text": "Figure 7", "ref_id": "FIGREF8" }, { "start": 820, "end": 828, "text": "Figure 8", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "System functions", "sec_num": "2.4" }, { "text": "The evaluation experiment employed random texts of different genres selected from the Brown corpus. There were 14,124 words, 2,970 noun phrases and 530 anaphors in the testing data. Two baseline models were established to compare the progress of performance with our proposed anaphora resolution (AR) system. The first baseline model (called the baseline subject) determined the number and gender agreement between candidates and anaphors, and then chose the most recent subject as the antecedent from the candidate set. The second baseline model (called baseline recent) performed a similar procedure, but it selected the most recent noun phrase as the antecedent which matched the anaphor in terms of number and gender agreement. The success rate was calculated as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "3." }, { "text": "The results obtained (Table 3) showed that there are 41% of the antecedents could be identified by finding the most recent subject; however, only 17% of the antecedents could be resolved by selecting the most recent noun phrase with the same gender and number agreement as the anaphor. Figure 9 presents the distribution of the sentence distance between antecedents and anaphors. The value 0 denotes intra-sentential anaphora and other values indicate inter-sentential anaphora. In the experiment, a balanced corpus was used to resolve the pronominal anaphora phenomenon. The methods proposed in [Lappin and Leass, 94] and [Mitkov, 01] employ corpora with only inanimate pronouns, such as \"it\" or \"its.\" Thus, the results for intra-sentential and inter-sentential anaphora distribution obtained using those methods are different. In our experiment on the Brown corpus, the distribution proportion of intra-sentential anaphora was about 60%. Figure 10 shows the average word distance distribution for each genre. The pleonastic-it could be identified with 89% accuracy (Table 4) . The next experiment provided empirical evidence showing that the enforcement of agreement constraints increases the system's chances of selecting a correct antecedent from an initial candidate set. To access the effectiveness of each module, the total number of candidates in each genre was determined after applying the following four phases which include number agreement, gender agreement, animacy agreement, and heuristic rules ( Figure 11 ). 
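Equation (1), read together with the variable definitions given for it, corresponds to score(can, ana) = (sum_i rule_pre_i - sum_j rule_con_j) * prod_k agreement_k, with the whole score zeroed when any of the number, gender or animacy agreements fails. The sketch below illustrates this selection step over an already-collected candidate set; it is a reading of the description above rather than the authors' code, the rule and agreement callables are placeholders, and the tie handling is an assumption.

    # Illustrative reading of equation (1): agreement filtering times the
    # difference between satisfied preference rules and constraint rules.
    # The rule/agreement functions are placeholders taking (candidate, anaphor).

    def score(candidate, anaphor, pre_rules, con_rules, agreements):
        agreement = 1
        for agree in agreements:            # number, gender, animacy
            agreement *= 1 if agree(candidate, anaphor) else 0
        pre = sum(1 for rule in pre_rules if rule(candidate, anaphor))
        con = sum(1 for rule in con_rules if rule(candidate, anaphor))
        return (pre - con) * agreement

    def resolve(anaphor, candidates, pre_rules, con_rules, agreements):
        # Candidates are assumed ordered from oldest to most recent; ties go to
        # the more recent candidate (an assumption, not stated in the text).
        best, best_score = None, float('-inf')
        for cand in candidates:
            s = score(cand, anaphor, pre_rules, con_rules, agreements)
            if s >= best_score:
                best, best_score = cand, s
        return best
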
As shown in Figure 12 , the error rates for two genres of testing data indicated the improvement in choosing correct antecedents following each resolution phase. Apparently, the animate module achieved more significant error rate reduction in the reportage domain than the other one. The final evaluation results obtained using our system, which applied animacy agreement and heuristic rules to resolution, are listed in Table 6 . It also shows the results for each individual genre of testing data and the overall success rate, which reached 77%. Our proposed method makes use of the WordNet ontology to identify animate entities as well as essential gender information. In the animacy agreement module, each property is identified by the hypernym relation between entities and their unique beginners defined in WordNet. In addition, the verb of the entity is also an important clue for reducing the uncertainty. An overall comparison is shown below:", "cite_spans": [ { "start": 596, "end": 614, "text": "[Lappin and Leass,", "ref_id": null }, { "start": 615, "end": 618, "text": "94]", "ref_id": null }, { "start": 623, "end": 631, "text": "[Mitkov,", "ref_id": null }, { "start": 632, "end": 635, "text": "01]", "ref_id": null } ], "ref_spans": [ { "start": 21, "end": 30, "text": "(Table 3)", "ref_id": "TABREF2" }, { "start": 286, "end": 294, "text": "Figure 9", "ref_id": null }, { "start": 941, "end": 950, "text": "Figure 10", "ref_id": "FIGREF1" }, { "start": 1068, "end": 1077, "text": "(Table 4)", "ref_id": "TABREF4" }, { "start": 1514, "end": 1524, "text": "Figure 11", "ref_id": "FIGREF1" }, { "start": 1540, "end": 1549, "text": "Figure 12", "ref_id": "FIGREF1" }, { "start": 1949, "end": 1956, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "3." }, { "text": "Our method [Kennedy and Boguraev, 96 In the preprocessing phase, the accuracy of the POS tagger was about 95%. If a noun is misclassified as another part-of -speech, for example, if the noun \"patient\" is tagged as an adjective, then there is no chance for it to be considered as a legal antecedent candidate of an anaphor. The other problems encountered in the system are multiple antecedents and unknown word phenomena. In the case of multiple antecedents, the correct answer is composed of more than one entity, such as \"Boys and girls are singing with pleasure.\" In this case, additional heuristic are needed to decide whether the entities should be combined into one entity or not. In the case of an unknown word, the tagger may fail to identify the part of speech of the word so that in WordNet, no unique beginner can be assigned. This can lead to a matching failure if the entity turns out to be the correct anaphoric reference.", "cite_spans": [ { "start": 11, "end": 33, "text": "[Kennedy and Boguraev,", "ref_id": null }, { "start": 34, "end": 36, "text": "96", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "3." }, { "text": "In this paper, the WordNet ontology and heuristic rules have been adopted to perform anaphora resolution. The recognition of animacy entities and gender features in discourses is helpful for improving resolution accuracy. The proposed system is able to deal with intra-sentential and inter-sentential anaphora in English texts and deals appropriately with pleonastic pronouns. From the experiment results, our proposed method is comparable in performance with prior works that fully parse the text. 
In contrast to most anaphora resolution approaches, our system benefits from the recognition of animacy agreement and operates in a fully automatic mode to achieve optimal performance. With the growing interest in natural language processing and its various applications, anaphora resolution is essential for further message understanding and the coherence of discourses during text processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." }, { "text": "Our future works will be as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." }, { "text": "1. Extending the set of anaphors to be processed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." }, { "text": "This analysis aims at identifying instances (such as definite anaphors) that could be useful in anaphora resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." }, { "text": "The language resource WordNet can be utilized to identify coreference entities by their synonymy/hypernym/hyponym relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resolving nominal coreferences:", "sec_num": "2." } ], "back_matter": [ { "text": "This research is partially supported by National Science Council, R.O.C., under NSC contract 91-2213-E-009-082 and by MediaTek Research Center, National Chiao Tung University, Taiwan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The TOSCA-ICLE Tagset: Tagging Manual", "authors": [ { "first": "Aarts", "middle": [], "last": "Jan", "suffix": "" }, { "first": "Henk", "middle": [], "last": "Barkema", "suffix": "" }, { "first": "Nelleke", "middle": [], "last": "Oostdijk", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan, Aarts, Henk Barkema and Nelleke Oostdijk, \"The TOSCA-ICLE Tagset: Tagging Manual,\" TOSCA Research Group for Corpus Linguistics, 1997.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "CogNIAC: high precision coreference with limited knowledge and linguistic resources", "authors": [ { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL'97/EACL'97 workshop on Operational factors in practical, robust anaphora resolution", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baldwin, Breck, \"CogNIAC: high precision coreference with limited knowledge and linguistic resources,\" In Proceedings of the ACL'97/EACL'97 workshop on Operational factors in practical, robust anaphora resolution, 1997, pp. 
38-45.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Shallow Methods for Named Entity Coreference Resolution", "authors": [ { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Marin", "middle": [], "last": "Dimitrov", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Tablan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of TRAITEMENT AUTOMATIQUE DES LANGUES NATURELLES (TALN)", "volume": "", "issue": "", "pages": "24--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bontcheva, Kalina, Marin Dimitrov, Diana Maynard and Valentin Tablan, \"Shallow Methods for Named Entity Coreference Resolution,\" In Proceedings of TRAITEMENT AUTOMATIQUE DES LANGUES NATURELLES (TALN), 2002, pp. 24-32.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Noun Phrase Coreference as Clustering", "authors": [ { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Kiri", "middle": [], "last": "Wagstaff", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cardie, Claire and Kiri Wagstaff, \"Noun Phrase Coreference as Clustering,\" In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation", "authors": [ { "first": "Kuang-Hua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd ACL Annual Meeting", "volume": "", "issue": "", "pages": "234--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Kuang-hua and Hsin-Hsi Chen, \"Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation,\" In Proceedings of the 32nd ACL Annual Meeting, 1994, pp. 234-241.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic processing of large corpora for the resolution of anaphora references", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Itai", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING'90)", "volume": "III", "issue": "", "pages": "1--3", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan, Ido and Alon Itai, \"Automatic processing of large corpora for the resolution of anaphora references,\" In Proceedings of the 13th International Conference on Computational Linguistics (COLING'90), Vol. III, 1-3, 1990.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic resolution of anaphora in English", "authors": [ { "first": "Michel", "middle": [], "last": "Denber", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denber, Michel, \"Automatic resolution of anaphora in English,\" Technical report, Eastman Kodak Co. 
, 1998.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving anaphora resolution by identifying animate entities in texts", "authors": [ { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of DAARC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evans, Richard and Constantin Orasan, \"Improving anaphora resolution by identifying animate entities in texts,\" In Proceedings of DAARC, 2000.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Statistical Approach to Anaphora Resolution", "authors": [ { "first": "Niyu", "middle": [], "last": "Ge", "suffix": "" }, { "first": "John", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Sixth Workshop on Very Large Corpora (COLING-ACL98)", "volume": "", "issue": "", "pages": "161--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ge, Niyu, John Hale and Eugene Charniak, \"A Statistical Approach to Anaphora Resolution,\" In Proceedings of the Sixth Workshop on Very Large Corpora (COLING-ACL98), 1998, pp.161-170.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Anaphora for everyone: Pronominal anaphora resolution without a parser", "authors": [ { "first": "Christopher", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Branimir", "middle": [], "last": "Boguraev", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16 th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "113--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kennedy, Christopher and Branimir Boguraev, \"Anaphora for everyone: Pronominal anaphora resolution without a parser,\" In Proceedings of the 16 th International Conference on Computational Linguistics, 1996, pp.113-118.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An Algorithm for Pronominal Anaphora Resolution", "authors": [ { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" }, { "first": "Herbert", "middle": [], "last": "Leass", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "535--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lappin, Shalom and Herbert Leass, \"An Algorithm for Pronominal Anaphora Resolution,\" Computational Linguistics, Volume 20, Part 4, 1994, pp. 535-561.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Nouns in WordNet: A Lexical Inheritance System", "authors": [ { "first": "George", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1993, "venue": "Journal of Lexicography", "volume": "", "issue": "", "pages": "245--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, George, \"Nouns in WordNet: A Lexical Inheritance System,\" Journal of Lexicography, 1993, pp. 
245-264.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Robust pronoun resolution with limited knowledge", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": null, "venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING'98)/ACL'98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitkov, Ruslan, \"Robust pronoun resolution with limited knowledge, \" In Proceedings of the 18th International Conference on Computational Linguistics (COLING'98)/ACL'98", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Anaphora Resolution: The State of the Art", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 1999, "venue": "Working paper (Based on the COLING'98/ACL'98 tutorial on anaphora resolution)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitkov, Ruslan, \"Anaphora Resolution: The State of the Art,\" Working paper (Based on the COLING'98/ACL'98 tutorial on anaphora resolution), 1999.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Evaluation tool for rule-based anaphora resolution methods", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" }, { "first": "Catalina", "middle": [], "last": "Barbu", "suffix": "" } ], "year": 2001, "venue": "Proeedings of ACL'01", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitkov, Ruslan and Catalina Barbu, \"Evaluation tool for rule-based anaphora resolution methods,\" In Proeedings of ACL'01, Toulouse, 2001.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A new fully automatic version of Mitkov's knowledge-poor pronoun resolution method", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" } ], "year": null, "venue": "Proceedings of CICLing-2000", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitkov, Ruslan, Richard Evans and Constantin Orasan, \"A new fully automatic version of Mitkov's knowledge-poor pronoun resolution method,\" In Proceedings of CICLing- 2000, Mexico City, Mexico.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Anaphora Resolution in Chinese Financial News for Information Extraction", "authors": [ { "first": "Ning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "", "middle": [], "last": "Chunfa", "suffix": "" }, { "first": "K", "middle": [ "F" ], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 4th World Congress on Intelligent Control and Automation", "volume": "", "issue": "", "pages": "2422--2426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Ning Yuan, Chunfa, Wang, K.F. and Li, Wenjie \"Anaphora Resolution in Chinese Financial News for Information Extraction,\" In Proceedings of 4th World Congress on Intelligent Control and Automation, June 2002, pp.2422-2426.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Architecture overview." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Finite state machine for a noun phrase." 
}, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "An example output of a base noun phrase." }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "Thresholds of Animacy Entities." }, "FIGREF6": { "num": null, "type_str": "figure", "uris": null, "text": "Categories of the Brown corpus." }, "FIGREF7": { "num": null, "type_str": "figure", "uris": null, "text": "The main system window." }, "FIGREF8": { "num": null, "type_str": "figure", "uris": null, "text": "Anaphora pairs." }, "FIGREF9": { "num": null, "type_str": "figure", "uris": null, "text": "Anaphor with antecedent annotation." }, "FIGREF12": { "num": null, "type_str": "figure", "uris": null, "text": "Referential word distance distribution." }, "FIGREF14": { "num": null, "type_str": "figure", "uris": null, "text": "Candidate distribution after applying resolution modules. Error rate after applying resolution modules." }, "TABREF1": { "content": "
Unique beginners | Example of verb
{cognition} | Think, analyze, judge \u2026
{communication} | Tell, ask, teach \u2026
{emotion} | Feel, love, fear \u2026
{social} | Participate, make, establish \u2026
", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF2": { "content": "", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF3": { "content": "
", "type_str": "table", "num": null, "text": "Success rate of baseline models.", "html": null }, "TABREF4": { "content": "
Number of anaphora expressions | Number of anaphoric pronouns | Number of pleonastic-its | Ratio of pleonastic-it to pronouns | Accuracy of identification
Total | 530 | 483 | 47 | 9% | 89%
", "type_str": "table", "num": null, "text": "", "html": null }, "TABREF5": { "content": "
Genre | Words | Lines | NPs | Anims | Anaphors | Success Rate
Reportage | 1972 | 90 | 488 | 110 | 52 | 80%
Editorial | 1967 | 95 | 458 | 54 | 54 | 80%
Reviews | 2104 | 113 | 480 | 121 | 92 | 79%
Religion | 2002 | 80 | 395 | 75 | 68 | 76%
Skills | 2027 | 89 | 391 | 67 | 89 | 78%
Lore | 2018 | 75 | 434 | 51 | 69 | 69%
Fiction | 2034 | 120 | 324 | 53 | 106 | 79%
Total | 14124 | 662 | 2970 | 531 | 530 | 77%
", "type_str": "table", "num": null, "text": "", "html": null } } } }