{ "paper_id": "I05-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:06.102788Z" }, "title": "Finding Taxonomical Relation from an MRD for Thesaurus Extension", "authors": [ { "first": "Seonhwa", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chonnam National University", "location": { "addrLine": "300 Youngbong-Dong, Puk-Ku Gwangju", "postCode": "500-757", "country": "Korea" } }, "email": "" }, { "first": "Hyukro", "middle": [], "last": "Park", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chonnam National University", "location": { "addrLine": "300 Youngbong-Dong, Puk-Ku Gwangju", "postCode": "500-757", "country": "Korea" } }, "email": "hyukro@chonnam.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Building a thesaurus is a very costly and time-consuming task. To alleviate this problem, this paper proposes a new method for extending a thesaurus by adding taxonomic information automatically extracted from an MRD. The proposed method adopts a machine learning algorithm for acquiring rules to identify a taxonomic relationship, in order to minimize human intervention. The accuracy of our method in identifying hypernyms of a noun is 89.7%, which shows that the proposed method can be successfully applied to the problem of extending a thesaurus.", "pdf_parse": { "paper_id": "I05-1032", "_pdf_hash": "", "abstract": [ { "text": "Building a thesaurus is a very costly and time-consuming task. To alleviate this problem, this paper proposes a new method for extending a thesaurus by adding taxonomic information automatically extracted from an MRD. The proposed method adopts a machine learning algorithm for acquiring rules to identify a taxonomic relationship, in order to minimize human intervention. 
The accuracy of our method in identifying hypernyms of a noun is 89.7%, which shows that the proposed method can be successfully applied to the problem of extending a thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As natural language processing (NLP) systems have grown larger and been applied to a wide variety of application domains, the need for broad-coverage lexical knowledge-bases has increased more than ever before. A thesaurus, as one of these lexical knowledge-bases, mainly represents taxonomic relationships between nouns. However, because building broad-coverage thesauri is a very costly and time-consuming job, they are not readily available and are often too general to be applied to a specific domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The work presented here is an attempt to alleviate this problem by devising a new method for extending a thesaurus automatically using taxonomic information extracted from a machine-readable dictionary (MRD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most previous approaches for extracting the hypernyms of a noun from its definition in an MRD rely on lexico-syntactic patterns compiled by human experts. Not only do these methods incur a high cost for compiling lexico-syntactic patterns, but it is also very difficult for human experts to compile a set of lexico-syntactic patterns with broad coverage because, in natural languages, there are many different expressions that represent the same concept. 
Accordingly, the applicable scope of a set of human-compiled lexico-syntactic patterns is very limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To overcome the drawbacks of human-compiled lexico-syntactic patterns, we use part-of-speech (POS) patterns only and try to induce these patterns automatically using a small bootstrapping thesaurus and machine learning methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 deals with the problem of feature selection. Section 4 formally defines our problem as a machine learning task and discusses implementation details. Section 5 is devoted to experimental results. Finally, Section 6 concludes the paper. [3] introduced a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text, and gave several examples of lexico-syntactic patterns for hyponymy that can be used to detect these relationships, including those used here, along with an algorithm for identifying new patterns. Hearst's approach is complementary to statistically based approaches that find semantic relations between terms, in that hers requires a single specially expressed instance of a relation while the others require a statistically significant number of generally expressed relations. The hyponym-hypernym pairs found by Hearst's algorithm include some that she describes as \"context and point-of-view dependent\", such as \"Washington/nationalist\" and \"aircraft/target\". [4] was somewhat less sensitive to this kind of problem since only the most common hypernym of an entire cluster of nouns is reported, so much of the noise is filtered. [3] tried to discover new patterns for hyponymy by hand; nevertheless, this is a costly and time-consuming job. 
In the case of [3] and [4] , since the hierarchy was learned from text, it became domain-specific, unlike a general-purpose resource such as WordNet.", "cite_spans": [ { "start": 366, "end": 369, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 1139, "end": 1142, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 1308, "end": 1311, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 1432, "end": 1435, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 1440, "end": 1443, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[2] proposed a method that combines a set of unsupervised algorithms in order to accurately build large taxonomies from any MRD, and a system that 1) performs fully automatic extraction of taxonomic links from MRD entries and 2) ranks the extracted relations in a way that allows selective manual refinement. In this project, they introduced the idea that the hyponym-hypernym relationship appears between the entry word and the genus term. Thus, a dictionary definition is usually written to employ a genus term combined with differentia, which distinguish the word being defined from other words with the same genus term. They found the genus term by simple heuristics defined using several examples of lexico-syntactic patterns for hyponymy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "[1] presented a method to extract semantic information from standard dictionary definitions. Their automated mechanism for finding the genus terms is based on the observation that the genus term of verb and noun definitions is typically the head of the defining phrase. The syntax of the verb phrase used in verb definitions makes it possible to locate its head with a simple heuristic: the head is the single verb following the word to. 
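The verb-head heuristic just described can be sketched in a few lines (a minimal illustration for a whitespace-tokenized English definition; the function name and example are ours, not from [1]):

```python
# Minimal sketch of the head-finding heuristic of [1] for verb definitions:
# the genus (head) of a verb definition is taken to be the single verb
# immediately following the word "to".
def find_verb_head(definition_tokens):
    """Return the token right after the first 'to', or None if absent."""
    for i, tok in enumerate(definition_tokens[:-1]):
        if tok.lower() == "to":
            return definition_tokens[i + 1]
    return None

# e.g. a definition of "sprint": "to run at full speed"
print(find_verb_head("to run at full speed".split()))  # run
```

Real definitions need POS checks (the token after "to" must actually be a verb), which is exactly where such hand-built heuristics become brittle.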
They asserted that heads are bounded on the left and right by specific lexical items defined by human intuition, and the substring that remains after eliminating these boundary words from a definition is regarded as the head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Based on an idea similar to [2] , [10] introduced six kinds of rules for extracting a hypernym from a Korean MRD according to the structure of a dictionary definition. In this work, Moon proposed that only a subset of the possible instances of the hypernym relation will appear in a particular form, and she divided a definition sentence into a head term combined with differentia and a functional term. To extract a hypernym, Moon analyzed the definition of a noun by its word list and the positions of the words, searched the definition for a pattern coinciding with the human-compiled lexico-syntactic patterns, and then extracted a hypernym using the appropriate rule among the six rules. For example, rule 2 states that if a word X occurs in front of the lexical pattern \"leul bu-leu-deon i-leum (the name to call)\", then X is extracted as a hypernym of the entry word.", "cite_spans": [ { "start": 23, "end": 26, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 29, "end": 33, "text": "[10]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Machine learning approaches require an example to be represented as a feature vector. How an example is represented, or what features are used to represent it, has a profound impact on the performance of the machine learning algorithms. 
This section deals with the problem of feature selection with respect to the characteristics of Korean for the successful identification of hypernyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for Hypernym Identification", "sec_num": "3" }, { "text": "Location of a word. In Korean, a head word usually appears after its modifying words. Therefore a head word has a tendency to be located at the end of a sentence. In the definition sentences of a Korean MRD, this tendency becomes much stronger. In the training examples, we found that 11% of the hypernyms appeared at the start of a definition sentence, 81% at the end, and 7% in the middle. Thus, the location of a noun in a definition sentence is an important feature for determining whether the word is a hypernym or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features for Hypernym Identification", "sec_num": "3" }, { "text": "Korean is an agglutinative language in which a word-phrase is generally a composition of a content word and some number of function words. A function word denotes the grammatical relationship between word-phrases, while a content word carries the central meaning of the word-phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS of a function word attached to a noun.", "sec_num": null }, { "text": "In the definition sentences, the function words attached to hypernyms are confined to a small number of POSs. For example, nominalization endings and objective case postpositions frequently come after hypernyms, but dative or locative postpositions never appear after hypernyms. 
Thus a function word is an appropriate feature for identifying hypernyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS of a function word attached to a noun.", "sec_num": null }, { "text": "The context in which a word appears is valuable information, and a wide variety of applications such as word clustering or word sense disambiguation make use of it. As in many other applications, the context of a noun is important in identifying hypernyms too, because hypernyms mainly appear in some limited contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context of a noun.", "sec_num": null }, { "text": "Although lexico-syntactic patterns can represent more specific contexts, building a set of lexico-syntactic patterns requires an enormous amount of training data. So we confined ourselves to the syntactic patterns in which hypernyms appear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context of a noun.", "sec_num": null }, { "text": "We limited the context of a noun to the 4 word-phrases appearing around the noun. Because the relations between word-phrases are represented by the function words of these word-phrases, the context of a noun includes only the POSs of the function words of the neighboring word-phrases. When a word-phrase has more than one functional morpheme, a representative functional morpheme is selected by the algorithm proposed in [8] .", "cite_spans": [ { "start": 414, "end": 417, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Context of a noun.", "sec_num": null }, { "text": "When a noun appears at the start or at the end of a sentence, it does not have a left or right context, respectively. In this case, two treatments are possible. The simplest approach is to treat the missing context as don't-care terms. On the other hand, we could extend the range of the available context to compensate for the missing context. 
For example, the context of a noun at the start of a sentence includes the 4 POSs of the function words in its right-side neighboring word-phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context of a noun.", "sec_num": null }, { "text": "Decision tree learning is one of the most widely used and practical methods for inductive inference; well-known algorithms include ID3, ASSISTANT, and C4.5 [14] . Because decision tree learning is a method for approximating discrete-valued functions that is robust to noisy data, it has been applied successfully to various classification problems.", "cite_spans": [ { "start": 134, "end": 139, "text": "5[14]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "Our problem is to determine, for each noun in the definition sentences of a word, whether it is a hypernym of the word or not. Thus our problem can be modeled as a two-category classification problem. This observation leads us to use the decision tree learning algorithm C4.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "Our learning problem can be formally defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "\u2022 Task T : determining whether a noun is a hypernym of an entry word or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "\u2022 Performance measure P : percentage of nouns correctly classified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "\u2022 Training examples E : a set of nouns appearing in the definition sentences of the MRD with their feature vectors and target values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification 
Rules", "sec_num": "4" }, { "text": "To collect training examples, we used a Korean MRD provided by the Korean TermBank Project [15] and a Korean thesaurus compiled by the Electronic Communication Research Institute. The dictionary contains approximately 220,000 nouns with their definition sentences, while the thesaurus has approximately 120,000 nouns and taxonomic relations between them. The fact that 46% of the nouns in the dictionary are missing from the thesaurus shows that it is necessary to extend a thesaurus using an MRD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "Using the thesaurus and the MRD, we found that 107,000 nouns in the thesaurus have their hypernyms in the definition sentences in the MRD. We used 70% of these nouns as training data and the remaining 30% as evaluation data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "For each training pair of hypernym/hyponym nouns, we build a triple in the form (hyponym, definition sentences, hypernym) as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Classification Rules", "sec_num": "4" }, { "text": "(hyponym: ga-gyeong; definition sentence: a-leum-da-un gyeong-chi (a beautiful scene); hypernym: gyeong-chi). Morphological analysis and part-of-speech tagging are applied to the definition sentences. After that, each noun appearing in the definition sentences is converted into a feature vector using the features mentioned in Section 3, along with a target value (i.e., whether this noun is a hypernym of the entry word or not). Table 1 shows some of the training examples. In this table, the attribute IsHypernym, which can have the value Y or N, is the target value for a given noun. 
Hence the purpose of learning is to build a classifier which will predict this value for a noun unseen in the training examples.", "cite_spans": [], "ref_spans": [ { "start": 410, "end": 417, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "ga-gyeong", "sec_num": null }, { "text": "In Table 1 , Location denotes the location of a noun in a definition sentence: 0 indicates that the noun appears at the start of the sentence, 1 in the middle, and 2 at the end. FW of a hypernym is the POS of the function word attached to the noun, and context1,...,context4 denote the POSs of the function words appearing to the right/left of the noun. \"*\" denotes a don't-care condition. The meanings of the POS tags are listed in Appendix A. Fig. 1 shows a part of the decision tree learned by the C4.5 algorithm. From this tree, we can easily see that the most discriminating attribute is Location, while the least discriminating one is Context. Fig. 1 . A learned decision tree for task T", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 495, "end": 501, "text": "Fig. 1", "ref_id": null }, { "start": 677, "end": 683, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "ga-gyeong", "sec_num": null }, { "text": "To evaluate the proposed method, we measure classification accuracy as well as precision, recall, and F-measure, which are defined as follows. Table 3 shows the performance of the proposed approach. We conducted two suites of experiments. The purpose of the first suite of experiments is to measure the performance differences according to the different definitions of the context of a word. In the experiment denoted A in Table 3 , the context of a word is defined as the 4 POSs of the function words, 2 of them immediately preceding and 2 of them immediately following the word. 
In the experiment denoted B, when the word appears at the beginning or at the end of a sentence, we used only the right or the left context of the word, respectively. Our experiment shows that the performance of B is slightly better than that of A.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "In the second suite of experiments, we measure the performance of our system for nouns which do not appear in the thesaurus. This performance gives us an indication of how well our system can be applied to the problem of extending a thesaurus. The result is shown in Table 3 in the row labeled C. As we expected, the performance drops slightly, but the difference is very small. This fact convinces us that the proposed method can be successfully applied to the problem of extending a thesaurus. Table 4 compares the classification accuracy of the proposed method with those of the previous works. Our method outperforms the previous work reported in the literature [10] by 3.51%.", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 274, "text": "Table 3", "ref_id": null }, { "start": 504, "end": 511, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "Because the performance of the previous works was measured with small data sets in restricted domains, we reimplemented one of those previous works [10] to compare the performances using the same data. The result is shown in Table 4 under the column marked D. Column C is the performance of [10] as reported in the literature. 
This result shows that, because the heuristic rules in [10] depend on lexical information, the performance of the method degrades seriously when the document collection or the application domain is changed.", "cite_spans": [], "ref_spans": [ { "start": 220, "end": 227, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "To extend a thesaurus, it is necessary to identify the hypernyms of a noun. There have been several works that build a taxonomy of nouns from an MRD. However, most of them relied on lexico-syntactic patterns compiled by human experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This paper has proposed a new method for extending a thesaurus by adding taxonomic relationships extracted from an MRD. A taxonomic relationship is identified using the nouns appearing in the definition sentences of a noun in the MRD and syntactic pattern rules compiled by a machine learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Our experiment shows that the classification accuracy of the proposed method is 89.7% for nouns not appearing in the thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Throughout our research, we have found that machine learning approaches to the problem of identifying hypernyms from an MRD can be a competitive alternative to methods using human-compiled lexico-syntactic patterns, and that such a taxonomy automatically extracted from an MRD can effectively supplement an existing thesaurus. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extracting Semantic Hierarchies From A Large On-Line Dictionary", "authors": [ { "first": "Martin", "middle": [ "S" ], "last": "Chodorow", "suffix": "" }, { "first": "Roy", "middle": [ "J" ], "last": "Byrd", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Heidorn", "suffix": "" } ], "year": 1985, "venue": "Proceedings of the 23rd Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin S. Chodorow, Roy J. Byrd, George E. Heidorn. : Extracting Semantic Hierarchies From A Large On-Line Dictionary. In Proceedings of the 23rd Conference of the Association for Computational Linguistics (1985)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Building Accurate Semantic Taxonomies from Monolingual MRDs", "authors": [ { "first": "G", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "H", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "E", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rigau G., Rodriguez H., Agirre E. : Building Accurate Semantic Taxonomies from Monolingual MRDs. 
In Proceedings of the 36th Conference of the Association for Computational Linguistics (1998)", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic acquisition of hyponyms from large text corpora", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A. Hearst. : Automatic acquisition of hyponyms from large text corpora. In Proceedings of the Fourteenth International Conference on Computational Linguistics (1992)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic construction of a hypernym-labeled noun hierarchy from text", "authors": [ { "first": "Sharon", "middle": [ "A" ], "last": "Caraballo", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon A. Caraballo. : Automatic construction of a hypernym-labeled noun hierarchy from text. In Proceedings of the 37th Conference of the Association for Computational Linguistics (1999).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira, Naftali Tishby, Lillian Lee. 
: Distributional clustering of English words. In Proceedings of the 31st Conference of the Association for Computational Linguistics (1993)", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Conference of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark, Eugene Charniak. : Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction. In Proceedings of the 36th Conference of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (1998)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Machine Learning", "authors": [ { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom M. Mitchell.: Machine Learning. Carnegie Mellon University. McGraw-Hill (1997).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A New Method for Inducing Korean Dependency Grammars reflecting the Characteristics of Korean Dependency Relations", "authors": [ { "first": "Seonhwa", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hyukro", "middle": [], "last": "Park", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 3rd Conference on East-Asian Language Processing and Internet Information Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SeonHwa Choi, HyukRo Park. 
: A New Method for Inducing Korean Dependency Grammars reflecting the Characteristics of Korean Dependency Relations. In Proceedings of the 3rd Conference on East-Asian Language Processing and Internet Information Technology (2003)", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Automatic Extraction of Hypernym in Korean", "authors": [ { "first": "Yoojin", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Yeongtak", "middle": [], "last": "Kim", "suffix": "" } ], "year": 1994, "venue": "Proceedings of Korea Information Science Society", "volume": "21", "issue": "", "pages": "613--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "YooJin Moon, YeongTak Kim. : The Automatic Extraction of Hypernym in Korean. In Proceedings of Korea Information Science Society Vol. 21, No. 2 (1994) 613-616", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "content": "
Noun | Location | FW of a hypernym | context1 | context2 | context3 | context4 | IsHypernym
N1 | 1 | jc | ecx | exm | nq | * | Y
N2 | 2 | * | exm | ecx | jc | nq | Y
N3 | 2 | * | exm | jc | nca | exm | Y
N4 | 1 | exm | jc | jc | ecx | m | N
N5 | 1 | jc | jc | ecx | m | jca | N
N6 | 1 | jc | ecx | m | jca | exm | Y
N7 | 2 | * | exm | exm | jca | exm | Y
N8 | 1 | * | nc | jca | exm | jc | N
N9 | 1 | jca | nc | nc | nc | jc | Y
N10 | 2 | exn | a | nca | jc | nca | Y
... | ... | ... | ... | ... | ... | ... | ...
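Rows like the ones above can be fed directly to a decision tree learner. The paper uses C4.5; as a rough, self-contained sketch of the same idea, the following implements a tiny ID3-style learner (information gain over categorical attributes) on a few toy rows modeled on Table 1 (the rows and attribute names are illustrative, not the paper's actual training data):

```python
import math
from collections import Counter

# Toy training rows in the style of Table 1: categorical attributes
# (location, function-word POS, 4 context POSs) plus the IsHypernym target.
ATTRS = ["location", "fw", "c1", "c2", "c3", "c4"]
ROWS = [
    ({"location": "1", "fw": "jc",  "c1": "ecx", "c2": "exm", "c3": "nq",  "c4": "*"},   "Y"),
    ({"location": "2", "fw": "*",   "c1": "exm", "c2": "ecx", "c3": "jc",  "c4": "nq"},  "Y"),
    ({"location": "1", "fw": "exm", "c1": "jc",  "c2": "jc",  "c3": "ecx", "c4": "m"},   "N"),
    ({"location": "1", "fw": "jc",  "c1": "jc",  "c2": "ecx", "c3": "m",   "c4": "jca"}, "N"),
]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_attr(rows):
    """Pick the attribute with the highest information gain."""
    labels = [y for _, y in rows]
    def gain(a):
        split = Counter(x[a] for x, _ in rows)
        rem = sum(cnt / len(rows) * entropy([y for x, y in rows if x[a] == v])
                  for v, cnt in split.items())
        return entropy(labels) - rem
    return max(ATTRS, key=gain)

def build(rows):
    """Recursively grow the tree until each leaf is label-pure."""
    labels = [y for _, y in rows]
    if len(set(labels)) == 1:
        return labels[0]
    a = best_attr(rows)
    return (a, {v: build([(x, y) for x, y in rows if x[a] == v])
                for v in {x[a] for x, _ in rows}})

def classify(tree, x):
    while not isinstance(tree, str):
        a, branches = tree
        tree = branches.get(x[a], "N")   # unseen attribute value: default to N
    return tree

tree = build(ROWS)
print(classify(tree, ROWS[1][0]))  # prints "Y"
```

C4.5 additionally handles gain ratio, continuous attributes, and pruning; this stripped-down version only illustrates how "syntactic pattern rules" fall out of recursive attribute splits.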
", "type_str": "table", "text": "Some of training examples", "html": null }, "TABREF2": { "num": null, "content": "
 | Yes is correct | No is correct
Yes was assigned | a | b
No was assigned | c | d
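In terms of the contingency counts a, b, c, d above, the four measures can be computed as follows (a small sketch; the example counts are illustrative, not results from the paper):

```python
# Metrics from the contingency table: a = Yes assigned and Yes correct (true
# positive), b = Yes assigned but No correct (false positive), c = No assigned
# but Yes correct (false negative), d = No assigned and No correct (true negative).
def metrics(a, b, c, d):
    accuracy  = (a + d) / (a + b + c + d)
    precision = a / (a + b)
    recall    = a / (a + c)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Illustrative counts only.
acc, p, r, f = metrics(a=90, b=10, c=5, d=95)
print(f"acc={acc:.3f} P={p:.3f} R={r:.3f} F={f:.3f}")
```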
Table 3. Evaluation result
 | Classification accuracy | Precision | Recall | F-Measure
A | 91.91% | 95.62% | 92.55% | 94.06%
B | 92.37% | 93.67% | 95.23% | 94.44%
C | 89.75% | 83.83% | 89.92% | 86.20%
Table 4. Evaluation result
 | Proposed (A) | Proposed (B) | M.S.Kim 95[11] | Y.J.Moon 96[10] (C) | Y.J.Moon 96[10] (D) | Y.M.Choi 98[13]
Classification Accuracy | 91.91% | 92.37% | 88.40% | 88.40% | 68.81% | 89.40%
", "type_str": "table", "text": "Contingency table for evaluating a binary classifier", "html": null }, "TABREF3": { "num": null, "content": "
CATEGORY | TAG | DESCRIPTION
auxiliary | jx | auxiliary
predicative | jcp | predicative particle
ending / prefinal | efp | prefinal ending
conjunctive | ecq | coordinate conjunctive ending
 | ecs | subordinate conjunctive ending
 | ecx | auxiliary conjunctive ending
transform | exn | nominalizing ending
 | exm | adnominalizing ending
 | exa | adverbalizing ending
final | ef | final ending
affix / prefix | xf | prefix
suffix | xn | suffix
 | xpv | verb-derivational suffix
 | xpa | adjective-derivational suffix
CATEGORY | TAG | DESCRIPTION
noun / common | nn | common noun
 | nca | active common noun
 | ncs | stative common noun
 | nct | time common noun
proper | nq | proper noun
bound | nb | bound noun
 | nbu | unit bound noun
numeral | nn | numeral
pronoun | npp | personal pronoun
 | npd | demonstrative pronoun
predicate / verb | pv | verb
adjective | pa | adjective
 | pad | demonstrative adjective
auxiliary | px | auxiliary verb
modification / adnoun | m | adnoun
 | md | demonstrative adnoun
 | mn | numeral adnoun
adverb | a | general adverb
 | ajs | sentence conjunctive adverb
 | ajw | word conjunctive adverb
 | ad | demonstrative adverb
independence / interjection | ii | interjection
particle / case | jc | case
 | jca | adverbial case particle
 | jcm | adnominal case particle
 | jj | conjunctive case particle
 | jcv | vocative case particle
", "type_str": "table", "text": "10. YooJin Moon. : The Design and Implementation of WordNet for Korean Nouns. In Proceedings of Korea Information Science Society (1996) 11. MinSoo Kim, TaeYeon Kim, BongNam Noh. : The Automatic Extraction of Hypernyms and the Development of WordNet Prototype for Korean Nouns using Korean MRD. In Proceedings of Korea Information Processing Society (1995) 12. PyongOk Jo, MiJeong An, CheolYung Ock, SooDong Lee. : A Semantic Hierarchy of Korean Nouns using the Definitions of Words in a Dictionary. In Proceedings of Korea Cognition Society (1999) 13. YuMi Choi and SaKong Chul. : Development of the Algorithm for the Automatic Extraction of Broad Term. In Proceedings of Korea Information Management Society (1998) 227-230 14. Quinlan J. R.: C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann (1993) http://www.rulequest.com/Personal/ 15. KORTERM. : KAIST language resources http://www.korterm.or.kr/ POS tag set", "html": null } } } }