{ "paper_id": "O13-3000", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:34.334291Z" }, "title": "Computational Linguistics & Chinese Language Processing Aims and Scope", "authors": [ { "first": "Shu-Kai", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiayi University", "location": { "settlement": "Chiayi", "country": "Taiwan, R.O.C" } }, "email": "shukaihsieh@ntu.edu.tw" }, { "first": "Yu-Ta", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiayi University", "location": { "settlement": "Chiayi", "country": "Taiwan, R.O.C" } }, "email": "ychen@mail.ncyu.edu.tw" }, { "first": "Yaw-Huei", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiayi University", "location": { "settlement": "Chiayi", "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Yu-Chih", "middle": [], "last": "Cheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiayi University", "location": { "settlement": "Chiayi", "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Chan-Chia", "middle": [], "last": "Hsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiayi University", "location": { "settlement": "Chiayi", "country": "Taiwan, R.O.C" } }, "email": "chanchiah@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper investigates the appropriateness of using lexical cohesion analysis to assess Chinese readability. In addition to term frequency features, we derive features from the result of lexical chaining to capture the lexical cohesive information, where E-HowNet lexical database is used to compute semantic similarity between nouns with high word frequency. Classification models for assessing readability of Chinese text are learned from the features using support vector machines. We select articles from textbooks of elementary schools to train and test the classification models. The experiments compare the prediction results of different sets of features.", "pdf_parse": { "paper_id": "O13-3000", "_pdf_hash": "", "abstract": [ { "text": "This paper investigates the appropriateness of using lexical cohesion analysis to assess Chinese readability. In addition to term frequency features, we derive features from the result of lexical chaining to capture the lexical cohesive information, where E-HowNet lexical database is used to compute semantic similarity between nouns with high word frequency. Classification models for assessing readability of Chinese text are learned from the features using support vector machines. We select articles from textbooks of elementary schools to train and test the classification models.
The experiments compare the prediction results of different sets of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The role of lexical resources in the field of NLP has been well recognized in recent years, and great advances have been achieved in developing tools and databases, as well as techniques for the automatic acquisition, alignment, and enrichment of lexical resources. However, compared with the major European languages, the lack of comprehensive lexical resources available for Chinese, and the resulting underdetermination of lexical representation theory by empirical lexical data, have posed crucial theoretical issues and exacerbated many difficulties in Chinese processing application tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Foreword", "sec_num": null }, { "text": "The aim of this special issue is to solicit research papers addressing the aforementioned issues. It is pleasing to note that we have gathered a diverse range of papers in this issue, as reflected in their titles. The first paper, \"Assessing Chinese Readability using Term Frequency and Lexical Chain\", investigates the automatic assessment of Chinese readability by extracting information from the E-HowNet lexical database. The second paper, \"Cross-Strait Lexical Differences: A Comparative Study based on Chinese Gigaword Corpus\", conducts a contrastive study of the Chinese Concept Dictionary (CCD) and the Chinese Wordnet (CWN), examining their lexical usage in a large comparative corpus. The third paper, \"A Definition-based Shared-concept Extraction within Groups of Chinese Synonyms: A Study Utilizing the Extended Chinese Synonym Forest\", proposes a multi-layered gloss association method for synonym extraction by applying it to the CiLin Thesaurus.
The last paper, \"Back to the Basic: Exploring Base Concepts from the Wordnet Glosses\", conducts an empirical investigation of the glosses of the Chinese Wordnet as a resource for the task of base concept identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Foreword", "sec_num": null }, { "text": "I would like to thank all of the authors whose work features in this special issue, and all of the reviewers for their valuable contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Foreword", "sec_num": null }, { "text": "The readability of an article indicates its difficulty level in terms of the reading comprehension of children in general. Readability assessment is a process that measures the reading level of a piece of text, which can help in finding reading materials suitable for children. Automatic readability assessment can significantly facilitate this process. Automatic readability assessment also has other applications, such as building web search engines that can distinguish the reading levels of web pages (Eickhoff, Serdyukov, & de Vries, 2010; Miltsakaki & Troutt, 2008) and incorporating readability measures into text simplification systems (Aluisio, Specia, Gasperin, & Scarton, 2010).
Traditional measures of text readability focus on the vocabulary and syntactic aspects of text difficulty, but recent work tries to discover connections between text readability and the semantic or discourse structure of texts (Feng, Elhadad, & Huenerfauth, 2009; Pitler & Nenkova, 2008).", "cite_spans": [ { "start": 505, "end": 544, "text": "(Eickhoff, Serdyukov, & de Vries, 2010;", "ref_id": null }, { "start": 545, "end": 571, "text": "Miltsakaki & Troutt, 2008)", "ref_id": null }, { "start": 628, "end": 672, "text": "(Aluisio, Specia, Gasperin, & Scarton, 2010)", "ref_id": "BIBREF0" }, { "start": 902, "end": 938, "text": "(Feng, Elhadad, & Huenerfauth, 2009;", "ref_id": null }, { "start": 939, "end": 962, "text": "Pitler & Nenkova, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most of the existing work on automatic readability assessment is conducted for English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Two properties of texts are widely used to indicate the quality of a text: coherence and cohesion. According to Morris and Hirst (1991), coherence refers to the fact that there is sense in a text, while cohesion refers to the fact that elements in a text tend to hang together. The former is an implicit quality within the text, whereas the latter is an explicit quality that can be observed through the text itself. Observing the interaction between textual units in terms of these properties is a way of analyzing the discourse structure of texts (Stokes, 2004). The discourse structure of a text is sometimes subjective and may require knowledge of the real world in order to truly understand its coherence. However, according to Hasan (1984), analyzing the degree of interaction between cohesive chains in a text can help the reader indirectly measure its coherence.
Such cohesion analysis is more objective and less computationally expensive. Halliday and Hasan (1976) classify cohesion into five types: (1) conjunction, (2) reference, (3) lexical cohesion, (4) substitution, and (5) ellipsis. Among these types, lexical cohesion is the most useful one and is the easiest to identify automatically since it requires less implicit information behind the text to be discovered (Hasan, 1984) . Lexical cohesion is defined as the cohesion that arises from semantic relationships between words (Morris & Hirst, 1991) . Halliday and Hasan (1976) further define five types of lexical cohesive ties in text: (1) repetition, (2) repetition through synonymy, (3) word association through specialization/ generalization, (4) word association through part-whole relationships, and (5) word association through collocation. All of the semantic relationships mentioned above except for collocation can be obtained from lexicographic resources such as a thesaurus. The collocation information can be obtained by computing word co-occurrences from a corpus or be captured using an n-gram language model with n > 1.", "cite_spans": [ { "start": 112, "end": 135, "text": "Morris and Hirst (1991)", "ref_id": null }, { "start": 550, "end": 564, "text": "(Stokes, 2004)", "ref_id": null }, { "start": 738, "end": 750, "text": "Hasan (1984)", "ref_id": null }, { "start": 964, "end": 989, "text": "Halliday and Hasan (1976)", "ref_id": null }, { "start": 1042, "end": 1045, "text": "(2)", "ref_id": "BIBREF54" }, { "start": 1296, "end": 1309, "text": "(Hasan, 1984)", "ref_id": null }, { "start": 1410, "end": 1432, "text": "(Morris & Hirst, 1991)", "ref_id": null }, { "start": 1435, "end": 1460, "text": "Halliday and Hasan (1976)", "ref_id": null }, { "start": 1521, "end": 1524, "text": "(1)", "ref_id": "BIBREF53" }, { "start": 1537, "end": 1540, "text": "(2)", "ref_id": "BIBREF54" }, { "start": 1570, "end": 1573, "text": "(3)", "ref_id": "BIBREF55" }, { "start": 1631, "end": 
1634, "text": "(4)", "ref_id": "BIBREF56" }, { "start": 1690, "end": 1693, "text": "(5)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Cohesion Analysis", "sec_num": "2.2" }, { "text": "Lexical chaining is a technique widely used to represent the lexical cohesive structure of a text (Stokes, 2004). A lexical chain is a sequence of semantically related words in a passage, where the semantic relatedness between words is determined by the above-mentioned lexical cohesive ties, usually with the help of a lexicographic resource such as a thesaurus. Lexical chains have been used to support a wide range of natural language processing tasks, including word sense disambiguation, text segmentation, text summarization, topic detection, and malapropism detection.", "cite_spans": [ { "start": 114, "end": 128, "text": "(Stokes, 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Cohesion Analysis", "sec_num": "2.2" }, { "text": "Different lexicographic resources capture different subsets of the lexical cohesive ties in text. Morris and Hirst (1991) use Roget's thesaurus to find cohesive ties between words in order to build lexical chains. WordNet (Fellbaum, 1998) is an online lexical database that is widely used in information retrieval and natural language processing tasks, including lexical chaining. The major relationship between words in WordNet is synonymy, and other types of relationships, such as hypernymy and hyponymy, are defined among synsets (sets of synonymous words), forming a semantic network of concepts.", "cite_spans": [ { "start": 97, "end": 120, "text": "Morris and Hirst (1991)", "ref_id": null }, { "start": 221, "end": 237, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Cohesion Analysis", "sec_num": "2.2" }, { "text": "HowNet is a lexical database for Chinese words developed by Dong (n.d.).
The idea of HowNet is to use a finite set of primitives to express concepts or senses in the world. The whole set of primitives is defined in a hierarchical structure based on their hypernymy and hyponymy relationships. Each sense of a word is defined in a dictionary of HowNet using a subset of the primitives. HowNet so far has two major versions: the 2000 version and the 2002 version. The 2000 version defines a word sense by a flat set of primitives with some relational symbols that determine the relation between the primitive and the target word sense. On the other hand, the 2002 version of HowNet uses a nesting grammar to define a word sense. A definition consists of primitives and a framework. The framework organizes the primitives into a complete definition. Dai, Liu, Xia, & Wu (2008) propose a method to compute lexical semantic similarity between Chinese words using the 2002 version of HowNet. For traditional Chinese, E-HowNet (Extended HowNet) is a lexical semantic representation system developed by Academia Sinica in Taiwan (CKIP Group, 2009). It is similar to the 2002 version of HowNet with the following major differences: (1) word senses (concepts) are defined by not only primitives but also any well-defined concepts and conceptual relations, (2) content words, function words, and phrases are represented uniformly, and (3) functions are incorporated as a new type of primitive. An example of a word sense definition is shown in Figure 1. Due to the first major difference mentioned above, a word sense definition may contain another well-defined word sense, such as \"\u5927\u5b78\" (university, college) in the example. A bottom-level expansion of the definition can be obtained by expanding all well-defined concepts in the top-level definition, as shown in Figure 2.
", "cite_spans": [ { "start": 60, "end": 71, "text": "Dong (n.d.)", "ref_id": null }, { "start": 849, "end": 875, "text": "Dai, Liu, Xia, & Wu (2008)", "ref_id": null }, { "start": 1123, "end": 1141, "text": "(CKIP Group, 2009)", "ref_id": "BIBREF5" }, { "start": 1349, "end": 1352, "text": "(2)", "ref_id": "BIBREF54" } ], "ref_spans": [ { "start": 1535, "end": 1543, "text": "Figure 1", "ref_id": "FIGREF7" }, { "start": 1856, "end": 1864, "text": "Figure 2", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Lexical Cohesion Analysis", "sec_num": "2.2" }, { "text": "It has been suggested that coherent texts are easier to read (Feng et al., 2010), and some previous studies have used lexical-chain-based features to assist in readability assessment of English text (Feng et al., 2009; Feng et al., 2010). Some other ways of modeling text coherence have also been used for readability assessment, such as the entity-grid representation of discourse structure and coreference chains (Barzilay & Lapata, 2008; Feng et al., 2009; Pitler & Nenkova, 2008). However, none of these discourse-based factors have been tested on Chinese text for estimating readability. In this paper, we evaluate a combination of term frequency features and lexical chain features for building classification models of Chinese readability.", "cite_spans": [ { "start": 61, "end": 80, "text": "(Feng et al., 2010)", "ref_id": null }, { "start": 200, "end": 219, "text": "(Feng et al., 2009;", "ref_id": null }, { "start": 220, "end": 238, "text": "Feng et al., 2010)", "ref_id": null }, { "start": 411, "end": 436, "text": "(Barzilay & Lapata, 2008;", "ref_id": "BIBREF1" }, { "start": 437, "end": 455, "text": "Feng et al., 2009;", "ref_id": null }, { "start": 456, "end": 479, "text": "Pitler & Nenkova, 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2.
Bottom level expansion of the definition of a word sense in E-HowNet.", "sec_num": null }, { "text": "This section presents the methodology adopted for assessing the readability of Chinese text using SVM. We first explain the problem of readability assessment, the basic concepts of SVM classification, and the system design. Then we describe how we conduct the text processing step, followed by the features we use for representing each article in the corpus. Finally, we discuss the performance measures used in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assessing Readability using SVM", "sec_num": "3." }, { "text": "Various types of prediction models have been tested on the task of readability assessment in previous research (Aluisio et al., 2010; Heilman, Collins-Thompson, & Eskenazi, 2008), including classification and regression models. Since several studies obtain better results when using SVM classification than when using regression models (Feng et al., 2010; Petersen & Ostendorf, 2009; Schwarm & Ostendorf, 2005), in this paper we treat the problem of Chinese readability assessment as a classification task where SVM is used to build classifiers that predict the reading levels of given texts.", "cite_spans": [ { "start": 111, "end": 133, "text": "(Aluisio et al., 2010;", "ref_id": "BIBREF0" }, { "start": 134, "end": 178, "text": "Heilman, Collins-Thompson, & Eskenazi, 2008)", "ref_id": null }, { "start": 326, "end": 345, "text": "(Feng et al., 2010;", "ref_id": null }, { "start": 346, "end": 373, "text": "Petersen & Ostendorf, 2009;", "ref_id": null }, { "start": 374, "end": 400, "text": "Schwarm & Ostendorf, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "Readability can be classified according to grade levels, but the difference between adjacent grades may be insignificant, which
makes the classification result less accurate. More importantly, grade-level readability is too fine-grained for many applications, and a broader readability scale is more practical. For example, the U.S. government surveyed over 26,000 individuals aged 16 and older and reported data with only five levels of literacy skills (National Center for Education Statistics, 2002). Therefore, we divide the reading skills of elementary school students into three levels: lower grade, middle grade, and higher grade, where lower grade corresponds to the first and second grade levels, middle grade corresponds to the third and fourth grade levels, and higher grade corresponds to the fifth and sixth grade levels.", "cite_spans": [ { "start": 485, "end": 502, "text": "Statistics, 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "In this paper, we evaluate different combinations of features for predicting the reading level of a text written in traditional Chinese as suitable for lower grade or middle grade. We build one prediction model for the lower grade level and another for the middle grade level. These binary SVM classifiers can be combined to solve the multiclass problem of predicting the reading level of an article (Duan & Keerthi, 2005; Hsu & Lin, 2002).", "cite_spans": [ { "start": 421, "end": 443, "text": "(Duan & Keerthi, 2005;", "ref_id": null }, { "start": 444, "end": 460, "text": "Hsu & Lin, 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "While most studies on readability assessment view reading levels as discrete classes, we think readability is continuous. That is, an article that is suitable for students of a certain level must also be comprehensible for students of higher levels.
Similarly, if a student can understand an article of a certain reading level, he/she must also be able to understand any article of a lower reading level. Therefore, when building classifiers for lower grade, we use articles of grades 1 and 2 as positive data, while the others are negative data. When building classifiers for middle grade, articles of grade 1 through grade 4 are used altogether as positive data, while those of higher grade levels are used as negative data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Definition", "sec_num": "3.1" }, { "text": "After the data set is collected, each article undergoes a word segmentation process as a pre-processing step before features are derived from the texts. Word segmentation is done using a word segmentation system provided by Academia Sinica (CKIP Group, n.d.). The segmentation result is stored in XML format, where POS tags are attached to all words and sentence boundaries are marked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Processing", "sec_num": "3.2" }, { "text": "It is reported by Yang and Pedersen (1997) that the chi-square test (\u03c7\u00b2) performs better than other feature selection methods, such as mutual information and information gain, in automatic text classification. Therefore, we use the chi-square test to evaluate the importance of terms in the corpus with respect to their discriminative power among reading levels. The chi-square test tests the independence of two events, which, in feature selection, are the occurrence of the term and the occurrence of the class. A higher chi-square value indicates higher discriminative power of the term with respect to the classes. For each prediction model, we compute the chi-square value for each term in the corpus.
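For illustration, the chi-square scoring step can be sketched as follows. This is a generic implementation of the standard 2x2 chi-square statistic, not the authors' code, and the document counts in the demo are invented.

```python
# Sketch of chi-square term scoring for one term/class pair, computed
# from a 2x2 contingency table of document counts (illustrative only).

def chi_square(n11, n10, n01, n00):
    """n11: in-class docs containing the term; n10: out-of-class docs
    containing it; n01: in-class docs without it; n00: out-of-class
    docs without it."""
    n = n11 + n10 + n01 + n00
    numerator = n * (n11 * n00 - n10 * n01) ** 2
    denominator = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return numerator / denominator if denominator else 0.0

# A term spread evenly across classes has no discriminative power:
assert chi_square(10, 10, 10, 10) == 0.0
# A term concentrated in one reading level scores high:
score = chi_square(30, 5, 10, 55)
```

Terms can then be ranked by this score per reading level, and the top-scoring terms kept as features.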
Such information will benefit our feature derivation process described below. We do not perform stop word removal or stemming because Collins-Thompson and Callan (2005) report that these processes may harm the performance of classifiers on lower grade levels.", "cite_spans": [ { "start": 835, "end": 869, "text": "Collins-Thompson and Callan (2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Processing", "sec_num": "3.2" }, { "text": "The use of term frequencies as the primary information for assessing Chinese readability has been investigated (Chen, Tsai, & Chen, 2011), where TF-IDF values of the terms with high discriminative power are used as features for SVM classification. This paper investigates the appropriateness of using lexical cohesion analysis to improve the performance of Chinese readability assessment. Therefore, we build lexical chains for both the training and testing documents and derive features from the lexical chains to capture the lexical cohesive aspect of the texts.", "cite_spans": [ { "start": 111, "end": 137, "text": "(Chen, Tsai, & Chen, 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "A general algorithm for generating lexical chains is shown in Figure 3, which is a simplified version of that proposed by Morris and Hirst (1991) as described in (Stokes, 2004). The chaining constraints in the algorithm are highly customizable and are the key to the quality of the generated lexical chains. The allowable word distance constraint is based on the assumption that relationships between words are best disambiguated with respect to the words that lie nearest to each other in the text. The semantic similarity is the most important factor that determines term relatedness and is generally based on any subset of the lexical cohesive ties mentioned above. Figure 4 shows an example of the lexical chaining result.
Derived Lexical Chains: lexical chain 1: (1)\u9280\u884c\u696d-3 (2)\u9280\u884c\u696d-25 (3)\u696d\u52d9-34 (4)\u9280\u884c\u696d-45 (5)\u91d1\u878d-59 lexical chain 2: (1)\u6574\u9ad4-4 (2)\u7269\u4ef6-14 (3)\u671f\u9593-61 (4)\u671f\u9593-62 (5)\u6642\u9593-74 (6)\u76ee\u524d-79 (7)\u6574\u9ad4-82 (8)\u524d\u8005-94 (9)\u5f8c\u8005-95 (10)\u76ee\u524d-104 lexical chain 3: (1)\u6210\u6578-73 (2)\u767e\u5206\u9ede-86 (3)\u5229\u7387-96 lexical chain 4: (1)\u7cfb\u7d71\u6027-26 (2)\u4f86\u6e90-28 (3)\u689d\u4ef6-50 (4)\u6c1b\u570d-51 (5)\u72c0\u6cc1-52 (6)\u6c23\u6c1b-56 (7)\u50f9\u683c-64 (8)\u80fd\u529b-65 (9)\u4fe1\u7528-68 (10)\u6a19\u7684-72 (11)\u4f86\u6e90-90 (12)\u58d3\u529b-118", "cite_spans": [ { "start": 123, "end": 146, "text": "Morris and Hirst (1991)", "ref_id": null }, { "start": 163, "end": 177, "text": "(Stokes, 2004)", "ref_id": null } ], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 671, "end": 679, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "Choose a set of highly informative terms for chaining, t1, t2, \u2026, tn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "The first candidate term in the text, t1, becomes the head of the first chain, c1. For each remaining term ti do: For each chain cm do: If the chain is most strongly related to ti with respect to allowable word distance and semantic similarity, Then ti becomes a member of cm; Else ti becomes the head of a new chain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "The algorithm is adopted in this paper for the construction of lexical chains. We select nouns in the balanced corpus created by Academia Sinica (CKIP Group, 2010) with word frequency higher than a given threshold as candidate terms for lexical chaining.
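The greedy chaining procedure of Figure 3 can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the relatedness test stands in for the real chaining constraints (E-HowNet semantic similarity plus the allowable word distance), and the toy terms and threshold are invented.

```python
# Greedy chaining sketch of the Figure 3 algorithm (illustrative only).
# `related(chain, term)` is a placeholder for the real chaining constraints.

def build_chains(terms, related):
    """terms: (word, position) pairs in text order."""
    chains = []
    for term in terms:
        for chain in chains:            # try attaching to an existing chain ...
            if related(chain, term):
                chain.append(term)
                break
        else:                           # ... otherwise head a new chain
            chains.append([term])
    return chains

# Toy relatedness: same first character and within 30 words of the chain tail.
def demo_related(chain, term):
    (word, pos), (last_word, last_pos) = term, chain[-1]
    return word[0] == last_word[0] and pos - last_pos <= 30

chains = build_chains([("銀行", 3), ("銀行業", 25), ("時間", 61), ("時刻", 62)], demo_related)
# chains: [[("銀行", 3), ("銀行業", 25)], [("時間", 61), ("時刻", 62)]]
```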
We apply the method proposed by Dai et al. (2008) to compute semantic similarity between words, using E-HowNet instead of HowNet as the lexical database. The difference is that primitives of the function type are treated as descriptors. Let P and Q be two word senses, where the number of modifying primitives of P is no greater than that of Q. The semantic similarity between P and Q is computed by Equation 1,", "cite_spans": [ { "start": 287, "end": 304, "text": "Dai et al. (2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sim(P,Q) = \\alpha \\times Sim(P',Q') + \\beta \\times \\frac{\\sum_{0 \\le i < |P|} \\max_{0 \\le j < |Q|} Sim(P_i, Q_j)}{|P|} + \\gamma \\times \\frac{|S \\cap T|}{|S| + |T|}", "eq_num": "(1)" } ], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "where P' and Q' are the primary primitives of P and Q, respectively, |P| and |Q| are the numbers of modifying primitives in their respective word senses, S and T are the sets of descriptors of the frameworks of P and Q, respectively, |S\u2229T| is the number of common descriptors of S and T, |S| and |T| are the numbers of descriptors in S and T, and \u03b1, \u03b2, and \u03b3 are the relative weights of the three parts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "After constructing lexical chains, we derive five features from the lexical chains for each article.
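The three weighted parts of Equation 1 can be illustrated with the following sketch. The weights, the primitive-matching test, and in particular the normalization of the descriptor term are assumptions made for this demo (the printed equation is partly garbled in the source), so this is not E-HowNet's or the authors' actual computation.

```python
# Illustrative sketch of a three-part weighted similarity in the spirit of
# Equation 1. Weights, inputs, and normalizations are invented for the demo.

ALPHA, BETA, GAMMA = 0.6, 0.25, 0.15  # relative weights of the three parts

def sim_primary(p, q):
    # placeholder similarity between two primitives: exact match or nothing
    return 1.0 if p == q else 0.0

def sim(P, Q):
    """P, Q: (primary, modifying_primitives, descriptors); P has no more
    modifying primitives than Q."""
    p0, p_mods, p_desc = P
    q0, q_mods, q_desc = Q
    part1 = sim_primary(p0, q0)
    # each modifying primitive of P matched to its best counterpart in Q
    part2 = (sum(max(sim_primary(pi, qj) for qj in q_mods) for pi in p_mods)
             / len(p_mods)) if p_mods else 0.0
    # descriptor overlap between the two frameworks
    s, t = set(p_desc), set(q_desc)
    part3 = len(s & t) / (len(s) + len(t)) if (s or t) else 0.0
    return ALPHA * part1 + BETA * part2 + GAMMA * part3

score = sim(("InstitutePlace", ["education"], ["domain=教育"]),
            ("InstitutePlace", ["education", "official"], ["domain=教育"]))
```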
The five features are the number of lexical chains, the average length of the lexical chains, the average span of the lexical chains, the number of lexical chains whose span is longer than half the length of the article, and the average number of active chains per word. The features are normalized by dividing them by the article length. Table 1 shows the lexical chain features and their representative codes used in this paper.", "cite_spans": [], "ref_spans": [ { "start": 420, "end": 427, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Feature Deriving", "sec_num": "3.3" }, { "text": "We apply support vector machines (SVM) as the modeling technique for our classification problem. The goal of an SVM, which is a vector-space-based large-margin classifier, is to find a decision surface that is maximally far away from any data point in the two classes. When the data in the input space (X) cannot be linearly separated, we transform the data into a high-dimensional space called the feature space (F) using a function \u03c8: X\u2192F so that the data become linearly separable. Then, in the feature space, we find a linear decision function that best separates the data into two classes. An SVM toolkit, LIBSVM (Chang & Lin, n.d.), is used for building the prediction models. When training the prediction model for each reading level, texts belonging to that reading level are used as positive data, while the rest of the texts are used as negative data. We follow the procedure suggested by Hsu, Chang, & Lin (2010), including the use of the radial basis function kernel, scaling, and cross-validation.", "cite_spans": [ { "start": 613, "end": 632, "text": "(Chang & Lin, n.d.)", "ref_id": null }, { "start": 891, "end": 915, "text": "Hsu, Chang, & Lin (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SVM Classification", "sec_num": "3.4" }, { "text": "In this paper, we use precision, recall, F-measure, and accuracy to evaluate the learned prediction models.
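The five chain-based features described in Section 3.3 can be computed along the following lines. The exact definitions of span, of "active chains per word", and of the normalization are our reading of the text, and the chains and article length below are invented, so this is a sketch rather than the authors' implementation.

```python
# Sketch of the five chain-derived features, for chains given as sorted
# lists of word positions in an article (illustrative interpretation only).

def chain_features(chains, article_len):
    spans = [c[-1] - c[0] for c in chains]
    n = len(chains)
    feats = {
        "num_chains": n,
        "avg_length": sum(len(c) for c in chains) / n,
        "avg_span": sum(spans) / n,
        "long_chains": sum(1 for s in spans if s > article_len / 2),
        # chains active at a word position, averaged over the article
        "avg_active": sum(spans) / article_len,
    }
    # normalize by article length, as described in the text
    return {k: v / article_len for k, v in feats.items()}

feats = chain_features([[3, 25, 45, 59], [4, 104]], article_len=120)
```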
For the test data, we use the same procedure for text processing and feature derivation. A correct prediction refers to agreement between the predicted reading level and the original reading level. We compute the following quantities: true positive (TP) is the number of articles correctly classified as positive, false negative (FN) is the number of positive articles incorrectly classified as negative, true negative (TN) is the number of articles correctly classified as negative, and false positive (FP) is the number of negative articles incorrectly classified as positive. Precision, recall, F-measure, and accuracy are defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Precision = \\frac{TP}{TP + FP}", "eq_num": "(2)" } ], "section": "Evaluation", "sec_num": "3.5" }, { "text": "We will test different sets of features to find the best feature combination for training the prediction models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.5" }, { "text": "In this section, we present our experiment setup and the results of the experiments on the textbooks corpus using different feature combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "The corpus used as empirical data consists of articles selected from the textbooks of elementary schools in Taiwan. We collect the digital versions of the textbooks of three subjects, Mandarin, Social Studies, and Life Science, for all six grade levels from the publishers Nan I and Han Lin, resulting in a total of 740 articles. Table 2 shows details of the collected data set.
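The four evaluation measures defined in Section 3.5 reduce to the standard formulas over the TP/FP/TN/FN counts; a minimal sketch, with invented counts, is:

```python
# Standard evaluation measures from the confusion-matrix counts of one
# binary classifier (the counts below are invented for illustration).

def evaluate(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, f_measure, accuracy

p, r, f, acc = evaluate(tp=40, fp=10, tn=85, fn=13)
```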
", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Empirical Data", "sec_num": "4.2" }, { "text": "In each experiment, we use one set of features with a fixed parameter setting and target a certain grade level. We equally divide the corpus into five data sets to support 5-fold cross validation, and we present the average precision, recall, F-measure, and accuracy of the five folds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4.3" }, { "text": "Since the textbooks corpus does not contain articles beyond elementary school levels, we only build prediction models for lower grade and middle grade. For convenience, we denote feature sets by a string with special syntax. Feature types are indicated in the string by the abbreviation of that feature type. For example, \"lc\" refers to the lexical chain feature type and \"tf\" refers to the TF-IDF feature type. Options of a feature type are indicated in the string by a dash followed by the code name for that option, attached to the end of the feature type indicator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4.3" }, { "text": "To test the capability of lexical chain features on Chinese readability assessment, the lexical chain features listed in Table 1 are used and the results are shown in Table 3 and Table 4 . ", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 167, "end": 174, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 179, "end": 186, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments on Lexical Chain Features", "sec_num": "4.4" }, { "text": "It is interesting to see whether incorporating a small number of TF-IDF features into lexical chain features can produce the same or even better results. 
We first use TF-IDF features generated from the top 50 to the top 500 terms to produce classifiers for the lower grade. The precision, recall, F-measure, and accuracy of the classifiers using different numbers of TF-IDF features are shown in Table 5. Then, we add the five lexical chain features to the TF-IDF feature sets and repeat the same experiments. The resulting precision, recall, F-measure, and accuracy values are shown in Table 6. Figure 5 plots the F-measure values from the two tables; it shows that the overall performance improves for the lower grade classifiers when a combination of TF-IDF and lexical chain features is used. The same set of experiments is conducted for the middle grade classifiers. Precision, recall, F-measure, and accuracy values of classifiers generated from TF-IDF features alone and from the combination of TF-IDF and lexical chain features are shown in Table 7 and Table 8, respectively. The line graphs of the F-measure values are shown in Figure 6, where the combined TF-IDF and lexical chain features yield the same or better F-measure in all cases. Therefore, incorporating a small number of TF-IDF features into the lexical chain features is recommended for the middle grade classifiers. 
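Combining the two feature families amounts to concatenating each article's TF-IDF weights over the top-k terms with its five lexical chain features. A self-contained sketch, with term selection by document frequency as our simplification rather than the paper's exact TF-IDF recipe:

```python
import math
from collections import Counter

def tfidf_top_terms(docs, k):
    """Pick the k most document-frequent terms as vocabulary and compute their IDF.

    docs is a list of tokenized articles; ranking by document frequency is a
    simplification of top-term selection for illustration.
    """
    df = Counter(t for doc in docs for t in set(doc))
    vocab = [t for t, _ in df.most_common(k)]
    n = len(docs)
    idf = {t: math.log(n / df[t]) for t in vocab}
    return vocab, idf

def doc_vector(doc, vocab, idf, chain_feats):
    """TF-IDF weights over the top-k vocabulary, concatenated with the
    (precomputed) lexical chain feature values for the same article."""
    tf = Counter(doc)
    tfidf = [tf[t] / len(doc) * idf[t] for t in vocab]
    return tfidf + list(chain_feats)
```

The concatenated vector can then be fed to any standard classifier; only the vocabulary size k changes between the "top 50" and "top 500" settings.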
", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 390, "text": "Table 5", "ref_id": "TABREF16" }, { "start": 567, "end": 574, "text": "Table 6", "ref_id": null }, { "start": 577, "end": 585, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 1062, "end": 1069, "text": "Table 7", "ref_id": null }, { "start": 1074, "end": 1081, "text": "Table 8", "ref_id": null }, { "start": 1147, "end": 1155, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Comparison with TF-IDF Features", "sec_num": "4.5" }, { "text": "This paper focuses on evaluating the effect of lexical cohesion analysis, more specifically, the effect of features based on lexical chains and term frequency, on the performance of readability assessment for Chinese text. The experiments produce satisfactory results on the textbooks corpus. Combining lexical chain and TF-IDF features usually produces better results, suggesting that both term frequency and lexical chain are useful features in Chinese readability assessment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Future work can be done to have more articles annotated with reading levels or resort to other types of corpora where reading levels are inherent. On the other hand, lexical cohesion is only one of several aspects of text cohesion, and other aspects of text cohesion may also have some impact on the task of readability assessment. Several existing models of text cohesion, such as Coh-metrix and entity grid representation, try to model other aspect of text cohesion and have been extensively used in other natural language processing tasks such as writing quality assessment. Future work can be done to verify whether these models can benefit the task of readability assessment for Chinese text. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." 
}, { "text": "Studies of cross-strait lexical differences in the use of Mandarin Chinese reveal that a divergence has become increasingly evident. This divergence is apparent in phonological, semantic, and pragmatic analyses and has become an obstacle to knowledge-sharing and information exchange. Given the wide range of divergences, it seems that Chinese character forms offer the most reliable regular mapping between cross-strait usage contrasts. In this study, we take general cross-strait lexical wordforms to discovery of cross-strait lexical differences and explore their contrasts and variations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "Based on Hong and Huang (2006) , we discuss the same conceptual words between cross-strait usages by WordNet, Chinese Concept Dictionary (CCD) and Chinese Wordnet (CWN). In this study, we take all words which appear in CCD and CWN to check their lexical contrasts of traditional Chinese character data and simplified Chinese character data in Gigaword Corpus, explore their appearances and distributions, and compare and demonstrate them via Google website.", "cite_spans": [ { "start": 9, "end": 30, "text": "Hong and Huang (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "\uf978\u5cb8\u4f7f\u7528\u8a5e\u5f59\u7684\u5dee\uf962\u554f\u984c\uff0c\u5728\u76ee\u524d\uf978\u5cb8\u4eba\u6c11\u7684\u5404\u7a2e\u4ea4\uf9ca\u4e2d\uff0c\u65e9\u5c31\u5df2\u7d93\u5448\u73fe\u51fa\u8a31\u591a\u7121\u6cd5\u6e9d \u901a\u3001\uf9e4\u89e3\u56f0\u96e3\uff0c\u6216\u662f\u5f35\u51a0\uf9e1\u6234\uff0c\u8868\u9054\uf967\u5408\u5b9c\u7684\u932f\u8aa4\u7a98\u5883\u3002\u63a2\u8a0e\uf978\u5cb8\u4f7f\u7528\u8a5e\u5f59\u7684\u5dee\uf962\u6027\u6642\uff0c \uf967\u50c5\u8b93\u5927\uf97e\u4f7f\u7528\u8a5e\u5f59\u7684\u8a18\u8005\u5011\uff0c\u611f\u53d7\u5230\uf978\u5cb8\u7684\u5dee\uf962 
(\u5982\uff1a\u83ef\u590f\u7d93\u7def\u7db2\uff0c2004\uff1b\u5357\u4eac\u8a9e\u8a00 \u6587\u5b57\u7db2\uff0c2004\uff1b\u5ec8\u9580\u65e5\u5831\uff0c2004)\uff0c\u751a\u81f3\uff0c\u8fd1\uf98e\uf92d\u672a\uf9ba\u56e0\u61c9\u5c0d\u5cb8\u4eba\u6c11\uf92d\u53f0\uf983\ufa08\uff0c\u53f0\u7063\u4ea4\u901a \u90e8\u89c0\u5149\u5c40\u4e5f\u6574\uf9e4\uf9ba\u300c\uf978\u5cb8\u5730\u5340\u5e38\u7528\u8a5e\u5f59\u5c0d\u7167\u8868\u300d(\u4e2d\u83ef\u6c11\u570b\u4ea4\u901a\u90e8\u89c0\u5149\u5c40\uff0c2011)\uff0c\u9019\u4e9b \u7686\u6210\u70ba\u6f22\u8a9e\u8a5e\u5f59\u5b78\u8207\u8a5e\u5f59\u8a9e\u7fa9\u5b78\u4e0a\u7814\u7a76\u7684\u91cd\u8981\u8ab2\u984c(\u5982\uff1a\u738b\u9435\u6606\u8207\uf9e1\ufa08\u5065\uff0c1996\uff1b\u59da\u69ae\u677e\uff0c 1997)\u3002 \u4ee5\u5f80\u5c0d\u65bc\u9019\u500b\u8b70\u984c\u7684\u7814\u7a76\uff0c\uf967\uf941\u8a9e\u8a00\u5b78\u5b78\u8005\u6216\u6587\u5b57\u5de5\u4f5c\u8005\u6ce8\u610f\u5230\u9019\u500b\u554f\u984c\u6642\uff0c\u50c5\u80fd \u5c31\u6240\u89c0\u5bdf\u5230\u7279\u5b9a\u8a5e\u5f59\u7684\u5c40\u90e8\u5c0d\u61c9\uff0c\uf92d\u63d0\u51fa\u5206\u6790\u8207\u89e3\u91cb\u800c\u7f3a\u4e4f\u5168\u9762\u7cfb\u7d71\u6027\u7684\u7814\u7a76\u3002\u672c\u6587\u7684 \u7814\u7a76\u65b9\u6cd5\uff0c\u7b2c\u4e00\u662f\u5ef6\u7e8c Hong \u548c Huang (2006)\u3001\u6d2a \u7b49\u4eba(\u6d2a\u5609\u99a1\u8207\u9ec3\u5c45\u4ec1\uff0c2008)\u7684\u7814\u7a76 \u65b9\u6cd5\uff0c\u5148\u4ee5 WordNet \u505a\u8a5e\u7fa9\u6982\uf9a3\u7684\u5224\u6e96\uff0c\u6bd4\u5c0d\u4e2d\u6587\u6982\uf9a3\u8fad\u5178\u8207\u4e2d\u6587\u8a5e\u7db2\uf9e8\uff0c\u6982\uf9a3\u76f8\u540c\u3001 \u8a9e\u7fa9\u76f8\u540c\u7684\u8a5e\u5f59\u4f7f\u7528\uf9fa\u6cc1\uff1b\u7b2c\u4e8c\u3001\u662f\u4ee5\u6709\u5927\uf97e\uf978\u5cb8\u5c0d\u6bd4\u8a9e\uf9be\u7684 Gigaword Corpus \u4f5c\u70ba\u5be6 
\u8b49\u7814\u7a76\u7684\u57fa\u790e\uff0c\u9a57\u8b49\u4e2d\u6587\u6982\uf9a3\u8fad\u5178\u8207\u4e2d\u6587\u8a5e\u7db2\u5c0d\u65bc\u76f8\u540c\u6982\uf9a3\u8a9e\u7fa9\u7684\u8a5e\u5f59\uff0c\u4f7f\u7528\u4e0a\uff0c\u78ba\u5be6 \u6709\u5176\u5dee\uf962\u6027\u7684\u5b58\u5728\u3002\u9019\u662f\u4e00\u500b\u4ee5\u5be6\u969b\u8a9e\uf9be\u3001\u5be6\u969b\uf969\u64da\u9032\ufa08\u6bd4\u5c0d\uff0c\u4e14\u5177\u6709\u5b8c\u6574\u6027\u3001\u5168\u9762\u6027\u3001 \u6982\u62ec\u6027\u7684\u7814\u7a76\u3002 \u53c8 Miller \u7b49\u4eba (Miller, Beckwith, Fellbaum, Gross & Miller, 1993)\u8a8d\u70ba\u4ed6\u5011\u53ef\u900f\u904e\u4f7f \u7528\u540c\u7fa9\u8a5e\u96c6\uf92d\u8868\u73fe\u8a5e\u5f59\u6982\uf9a3\u548c\u63cf\u8ff0\u8a5e\u5f59\u7684\u8a9e\u7fa9\u5167\u5bb9\uff0c\u6240\u4ee5\u4ed6\u5011\u5efa\uf9f7\uf9ba WordNet\uff0c\u8fd1\uf98e\uf92d\uff0c \u4ee5\u4e2d\u6587\u5341\u5104\u8a5e\u8a9e\uf9be\u5eab\u70ba\u57fa\u790e\u4e4b\uf978\u5cb8\u8a5e\u5f59\u5c0d\u6bd4\u7814\u7a76 21 \u4e5f\u6709\uf967\u5c11\u7814\u7a76\u5718\u968a\u5728\u8655\uf9e4\u4ee5 WordNet \u70ba\u51fa\u767c\u9ede\u7684\uf967\u540c\u8a9e\u8a00\u7ffb\u8b6f\u3002 \u503c\u5f97\u4e00\u63d0\u7684\u662f\uff0c\u540c\u5c6c\u65bc\u6f22\u8a9e\u8a5e\u5f59\u7cfb\u7d71\u7684\u7e41\u9ad4\u4e2d\u6587\u7cfb\u7d71\u8207\u7c21\u9ad4\u4e2d\u6587\u7cfb\u7d71\uff0c\u5728\u4e2d\u592e\u7814\u7a76 \u9662\u8a9e\u8a00\u6240\u8207\uf963\u4eac\u5927\u5b78\u8a08\u7b97\u8a9e\u8a00\u6240\u7684\u7814\u7a76\u5718\u968a\uf9e8\uff0c\u4e5f\u91dd\u5c0d\u6b64\u8b70\u984c\uff0c\u505a\uf9ba\uf967\u5c11\u76f8\u95dc\u7684\u7814\u7a76\uff0c \u56e0\u6b64\uff0c\u672c\u6587\u60f3\u8981\u63a2\u8a0e\u7684\u662f\uff0c\u76f8\u540c\u6982\uf9a3\u7684\u6f22\u8a9e\u8a5e\u5f59\u8a9e\u7fa9\uff0c\u5728\u7e41\u9ad4\u4e2d\u6587\u8207\u7c21\u9ad4\u4e2d\u6587\u7684\u4f7f\u7528\uf9fa \u6cc1\u3002 \u53e6\u5916\uff0c\u5c0d\u65bc\u7e41\u9ad4\u4e2d\u6587\u7cfb\u7d71\u8207\u7c21\u9ad4\u4e2d\u6587\u7cfb\u7d71\u7684\u5c0d\u61c9\uff0c\u6211\u5011\u4ee5 WordNet 
\u7576\u4f5c\u7814\u7a76\u8a9e\uf9be\u7684 \u57fa\u790e\uff0c\u662f\u70ba\uf9ba\u53ef\u4ee5\u5efa\uf9f7\u4e00\u5957\u7b26\u5408\u8a5e\u5f59\u77e5\uf9fc\u539f\u5247\u4e26\u80fd\u904b\u7528\u65bc\u82f1\u4e2d\u5c0d\u8b6f\u7684\u7cfb\u7d71\uff0c\u5982\u6b64\u4e00\uf92d\uff0c \u5373\u53ef\u6bd4\u5c0d\u51fa\uf978\u500b\u4e2d\u6587\u7cfb\u7d71\uff0c\u5728\u8a9e\u8a00\u4f7f\u7528\u4e0a\u7684\u5dee\uf962\u6027\u3002 2. \u7814\u7a76\u52d5\u6a5f\u8207\u76ee\u7684 \u81ea\uf978\u5cb8\u4ea4\uf9ca\u65e5\u8da8\u983b\u7e41\u4e4b\u5f8c\uff0c\u672c\u5c6c\u65bc\u540c\u6587\u540c\u7a2e\u7684\u6f22\u8a9e\u7cfb\u7d71\uff0c\u78ba\u6709\uf967\u5c11\u77e5\uf9fc\u8207\u4fe1\u606f\u4ea4\uf9ca\u7684\u969c \u7919\uff0c\u9020\u6210\u9019\u6a23\u7684\u539f\u56e0\uff0c\u83ab\u904e\u65bc\uf978\u5cb8\u8a5e\u5f59\u4f7f\u7528\u7684\u5dee\uf962\u3002\u76f8\u540c\u7684\u8a5e\u5f62\uff0c\u537b\u4ee3\u8868\uf967\u540c\u7684\u8a5e\u7fa9\uff1b \u6216\u76f8\u540c\u7684\u8a9e\u7fa9\uff0c\u537b\u6709\uf978\u7a2e\uf967\u540c\u7684\u8868\u9054\u8a5e\u5f59\u3002\u9019\u7a2e\u554f\u984c\uff0c\u5df2\u7d93\u8b93\u8a31\u591a\u6587\u5b57\u5de5\u4f5c\u8005\u8cbb\u76e1\u5fc3\u601d\uff0c \u8a66\u5716\uf92d\u89e3\u6c7a\u9019\u6a23\u7684\u7a98\u5883\uff1b\u800c\u8a9e\u8a00\u5b78\u8005\u5c0d\u65bc\u9019\u7a2e\u73fe\u8c61\uff0c\u4e5f\u8a66\u5716\u5f9e\u8a9e\u97f3\u3001\u8a9e\u7fa9\u3001\u8a9e\u7528\u7b49\u65b9\u9762 \u8457\u624b\uff0c\u5e0c\u671b\u5f9e\u5404\u7a2e\u8207\u8a9e\u8a00\u76f8\u95dc\u7684\u89d2\ufa01\uff0c\uf92d\u63a2\u7a76\uf978\u5cb8\u8a5e\u5f59\u7684\u5dee\uf962\u3002 \u5728\u7814\u7a76\u8b70\u984c\u4e0a\uff0c\u5149\u662f\u89c0\u5bdf\u5230\uf978\u5cb8\u9078\u64c7\u4ee5\uf967\u540c\u7684\u8a5e\u5f62\uf92d\u4ee3\u8868\u76f8\u540c\u7684\u8a9e\u7fa9\uff0c\u5982\u4e0b\u8ff0\uf9b5\u5b50 (1)\u3001(2)\uff0c\u9019\u6a23\u662f\uf967\u5920\u7684\u3002\u4ee5 Gigaword Corpus \u7684\u8a9e\uf9be\u5448\u73fe\u53f0\u7063/ \u5927\uf9d3\u4f7f\u7528\u7684\uf9fa\u6cc1\u53ca\u5176\u5728 Gigaword Corpus 
\u4e2d\u7684\u8a5e\u983b\uff0c\u5982\u4e0b\uff1a (1) \u53f0\u7063\u7684\u300c\u715e (155/ 65)\u300d \u3001\u5927\uf9d3\u7684\u300c\u975e\u5178 (354/ 33504)\u300d ( \u300cSars (SEVERE ACUTE RESPIRATORY SYNDROME)\u300d \u300c\u56b4\u91cd\u6025\u6027 \u547c\u5438\u9053\u7d9c\u5408\u75c7\u300d\u7684\u7ffb\u8b6f) (2) \u53f0\u7063\u7684\u300c\u8a08\u7a0b\uf902 (22670/ 68)\u300d \u3001\u5927\uf9d3\u7684\u300c\u51fa\u79df\uf902 (422/ 5935)\u300d \u5728\u8a5e\u5f59\u8a9e\u7fa9\u5b78\u7814\u7a76\u4e0a\uff0c\u6211\u5011\u5fc5\u9808\u9032\u4e00\u6b65\u8ffd\u7a76\uff0c\u9019\u4e9b\u5c0d\u6bd4\u7684\u52d5\u6a5f\uff0c\u8a9e\u8a00\u7684\u8a5e\u5f59\u8207\u8a5e\u7fa9 \u6f14\u8b8a\u7684\u52d5\uf98a\u662f\u5426\u76f8\u95dc\uff0c\u5c0d\u6bd4\u6709\u7121\u7cfb\u7d71\u6027\u7684\u89e3\u91cb\u7b49\u3002Chinese GigaWord Corpus \u5305\u542b\uf9ba\uf92d\u81ea \uf978\u5cb8\u7684\u5927\uf97e\u8a9e\uf9be\uff0c\u5176\u4e2d\uff0c\u6709\u7d04 5 \u5104\u5b57\u65b0\u83ef\u793e\u8cc7\uf9be(XIN)\u3001\u7d04 8 \u5104\u5b57\u4e2d\u592e\u793e\u8cc7\uf9be(CNA)\uff0c\u53ef \u4ee5\u770b\u51fa\u53f0\u7063\u548c\u5927\uf9d3\u5c0d\u65bc\u540c\u4e00\u6982\uf9a3\u800c\u4f7f\u7528\uf967\u540c\u8a5e\u5f59\u7684\u5be6\u969b\uf9fa\u6cc1\u8207\u5206\u4f48\u3002 \u5982\u8981\u8ffd\u7a76\u52d5\u6a5f\u8207\u89e3\u91cb\u7b49\uf9e4\uf941\u67b6\u69cb\u554f\u984c\uff0c\u7576\u7136\uf967\u80fd\u53ea\u9760\u5c11\uf969\u89c0\u5bdf\u5230\u7684\uf9b5\u5b50\uff0c\u800c\u5fc5\u9808\u5efa \uf9f7\u5728\uf969\uf97e\u8f03\u5927\u7684\u8a9e\uf9be\u5eab\u4e0a\uff0c\u4ee5\uf965\u505a\u5168\u9762\u6df1\u5165\u7684\u5206\u6790\u3002\u4ee5\u4e0a\uf978\u500b\u5c0d\u6bd4\u70ba\uf9b5\uff0c\u5176\u5be6\u5728\u5927\uf9d3\u8207 \u53f0\u7063\u7684\u8a9e\uf9be\u4e2d\uff0c\u90fd\u6709\u76f8\u7576\u591a\u7684\u8b8a\uf9b5\u51fa\u73fe\u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u7dd2\uf941", "sec_num": "1." 
}, { "text": "WordNet \u662f\u4e00\u500b\u96fb\u5b50\u8a5e\u5f59\u5eab\u7684\u8cc7\uf9be\u5eab\uff0c\u662f\u91cd\u8981\u8a9e\uf9be\uf92d\u6e90\u7684\u5176\u4e2d\u4e00\u500b\u8a9e\uf9be\u8cc7\uf9be\u5eab\uff0cWordNet \u7684\u8a2d\u8a08\uf9b3\u611f\u6e90\u81ea\u65bc\u8fd1\u4ee3\u5fc3\uf9e4\u8a9e\u8a00\u5b78\u548c\u4eba\uf9d0\u8a5e\u5f59\u8a18\u61b6\u7684\u8a08\u7b97\uf9e4\uf941\uff0c\u63d0\u4f9b\u7814\u7a76\u8005\u5728\u8a08\u7b97\u8a9e\u8a00 \u6d2a\u5609\u99a1\u3001\u9ec3\u5c45\u4ec1 \u5b78\uff0c\u6587\u672c\u5206\u6790\u548c\u8a31\u8a31\u591a\u591a\u76f8\u95dc\u7684\u7814\u7a76 (Miller et al., 1993; Fellaum, 1998) ", "cite_spans": [ { "start": 111, "end": 132, "text": "(Miller et al., 1993;", "ref_id": "BIBREF12" }, { "start": 133, "end": 147, "text": "Fellaum, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "\u7528 \u4ee5 \u8868 \u9054 \u8a72 \uf9d0 \u5225 \u7684 \u8a5e \u5f59 \u3002 \u6700 \u5f8c \uff0c \u6211 \u5011 \u5c07 \u6240 \u4f7f \u7528 \u7684 \u91cb \u7fa9 \u95dc \uf997 \u6bd4 \u8f03 \u539f \u5247 \u8207 Sketch", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "Engine (Kilgarriff et al., 2004) \u7684\u8a9e\uf9be\u5eab\u7d71\u8a08\u65b9\u6cd5\u9032\ufa08\u6bd4\u8f03\uff0c\u5c0d\u6bd4\uf978\u7a2e\u65b9\u6cd5\u5728\u8a5e\u5f59\u5171\u540c\u7fa9\u6db5 \u8a08\u7b97\u4e0a\uf962\u540c\u3002Chinese Sketch Engine \u8a9e\uf9be\u7d71\u8a08\u65b9\u6cd5\u57fa\u65bc\u5927\uf97e\u4e2d\u6587\u8a9e\uf9be\u9032\ufa08\u8a9e\u6cd5\u7d71\u8a08\u8207\u5171\u540c \u51fa\u73fe\u8a5e\u983b\u8a08\u7b97\u53d6\u51fa\u540c\u8fd1\u7fa9\u8a5e (Huang et al. 2004 ", "cite_spans": [ { "start": 7, "end": 32, "text": "(Kilgarriff et al., 2004)", "ref_id": null }, { "start": 125, "end": 143, "text": "(Huang et al. 
2004", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X 1 \u70ba\u689d\u76ee x \u6240\u6709\u7b26\u5408\u904e\uf984\u689d\u4ef6\u7684\u8fad\u5178\u91cb\u7fa9\u8a5e\u5f59\uff0c\u5373\u7b2c\u4e00\u968e(\u76f4\u63a5)\u91cb\u7fa9\u8a5e\u5f59 \u7d44\u5408\uff1bX 2 \u70ba X 1 \u91cb\u7fa9\u8a5e\u5f59\u518d\u7d93\u904e\u8fad\u5178\u91cb\u7fa9\u6587\u5b57\u64f4\u5145\u4e26\u7b26\u5408\u904e\uf984\u689d\u4ef6\u8a5e\u5f59\uff0c\u5373\u7b2c\u4e8c\u968e\u91cb\u7fa9 \u8a5e\u5f59\u7d44\u5408(\u91cb\u7fa9\u6587\u5b57\u7684\u91cb\u7fa9)\uff1b\u540c\uf9e4\uff0cY 2 \u70ba Y 1 \u7b26\u5408\u7684\u91cb\u7fa9\u8a5e\u5f59\uff0c\u518d\u7d93\u8fad\u5178\u91cb\u7fa9\u64f4\u5145\u5f8c\u4e14\u7b26 \u5408\u904e\uf984\u689d\u4ef6\u8a5e\u5f59\u96c6\u5408\u3002\u800c (1 2) X + \u70ba X 1 \u7b2c\u4e00\u968e(\u76f4\u63a5)\u8207 X 2 \u7b2c\u4e8c\u968e(\u91cb\u7fa9\u6587\u5b57\u7684\u91cb\u7fa9)\u91cb\u7fa9\u8a5e \u5f59\u7684\u96c6\u5408\u7e3d\u5408\uff1b\u540c\uf9e4\uff0c (1 2) Y + \u70ba Y 1 \u7b2c\u4e00\u968e\u8207 Y 2 \u7b2c\u4e8c\u968e\u91cb\u7fa9\u8a5e\u5f59\u7684\u96c6\u5408\u7e3d\u5408\u3002\u800c X 1 \u3001X 2 \u8207 Y 1 \u3001 Y 2 \u5171\u540c\u64c1\u6709\u7684\u91cb\u7fa9\u6587\u5b57\u7684\u96c6\u5408\u5247\u8868\u793a\u70ba (1 2) (1 2) X Y + + \u2229 \uff0c \u4f7f\u524d\u8ff0\u7684\u7b2c\u4e8c\u968e \u5171\u6709\u91cb\u7fa9\u95dc\uf997\u503c\u8a08\u7b97\u5247\u4fee\u6539\u6210 1 2 1 2 2 , 1 2 ( ) ( ) ( ) ( ) x y X X Y Y CoAP x X X + + = + \u2229 \u4ee5\u8868\u793a\u6240\u6709\u7528\uf92d\u91cb\u7fa9 x \u8a5e\u689d\u7684\u7b2c\u4e00\u968e\u8207\u7b2c\u4e8c\u968e\u6587\u5b57\u96c6\u5408\uff0c\u8207\u6240\u6709\u7528\uf92d\u91cb\u7fa9 y \u7684\u7b2c\u4e00\u968e\u8207\u7b2c\u4e8c\u968e\u6587\u5b57\u96c6\u5408\uff0c\uf978\u8005 
\u4e4b\u9593\u4ea4\u96c6\u5171\u540c\u51fa\u73fe\u7684\u64f4\u5145\u91cb\u7fa9\u5b57\u8a5e\u7684\u6bd4\uf9b5\u3002\u4ee5\u6b64\uf9d0\u63a8\uff0c\u5247\u7b2c n \u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u7684\u4e00\u822c\u5f0f\u5373 \u53ef\u5beb\u6210\uff1a , ( ) i i n i n i n x y i i n X Y CoAP x X = = = = \u2211 \u2211 \u2211 \u2229 (3) \u5f9e\u800c\u7528\uf92d\u8a08\u7b97\uf978\u8005\u4e4b\u9593\u7684\u7b2c n \u968e\u91cb\u7fa9\u95dc\uf997\uf969\u503c\u5247\u4fee\u6539\u6210\u70ba\u4e0b\uf99c\u4e00\u822c\u5f0f\uff1a , ,", "eq_num": ", , , , ( ) (" } ], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": ") 2 , 0 1 ( ) ( ) n n x y x y n n x y x y n n x y x y CoAP x CoAP y SRD SRD CoAP x CoAP y = \u2264 \u2264 + (4) \u900f\u904e\u524d\u8ff0\u591a\u968e\u5c64\u53cd\u8986\u91cb\u7fa9\u4e26\u8a08\u7b97\u5171\u540c\u51fa\u73fe\u7684\u91cb\u7fa9\u6587\u5b57\u6bd4\uf9b5\uff0c\u5728\u5148\u524d SRD \u7684\u6bd4\u8f03\u65b9\u6cd5 \u7814\u7a76\u4e2d\u5df2\u8a0e\uf941\u904e\uff0c\u9664\uf9ba\u53ef\u4ee5\u767c\u6398\u8a5e\u5f59\u6df1\u5c64\u7684\u91cb\u7fa9\u8a5e\u5f59\uff0c\u4ea6\u53ef\u4f7f\u91cb\u7fa9\u95dc\uf997\u503c\uf967\u53d7\u91cb\u7fa9\u6587\u5b57 \u8d99\u9022\u6bc5\u3001\u937e\u66c9\u82b3 \u4e9b\u5fae\u4fee\u6539\u800c\u5927\u5e45\u5f71\u97ff\u91cb\u7fa9\u95dc\uf997\u503c\u3002\u5728\u8a08\u7b97\u5171\u540c\u64c1\u6709\u7684\u91cb\u7fa9\u6587\u5b57\u5728\u53cd\u8986\u7684\u968e\u5c64\u91cb\u7fa9\u904e\u7a0b\u4e2d\uff0c \u4e5f\u53ef\uf9dd\u7528\u53cd\u8986\u91cb\u7fa9\u800c\u51fa\u73fe\u7684\u8a5e\u5f59\uf94f\u8a08\uff0c\u7a81\u986f\u51fa\u6b0a\u91cd\u9ad8\u7684\u4f54\u6709\u8a5e\u5f59\u4e26\u52a0\u5165\u8a08\u7b97\u3002\u5728\u591a\u968e\u5c64 \u53cd\u8986\u91cb\u7fa9\u7684\u904e\u7a0b\uff0c\u80fd\u627e\u51fa\uf978\u8a5e\u5f59\u4e4b\u9593\u6df1\u5c64\u7684\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59\uff0c\u4e26\u4e14\u56e0\u70ba\u80fd\u88ab\u64f4\u5145\u91cb\u7fa9\u7684\u8a5e 
\u5f59\u90fd\u5df2\u7d93\u900f\u904e\u8fad\u5178\u91cb\u7fa9\u904e\uff0c\u6240\u4ee5\u53cd\u8986\u91cb\u7fa9\u6709\u76ca\u65bc\u7a69\u5b9a\u8a08\u7b97\u95dc\uf997\u7684\u7d50\u679c(\u5373\u6240\u6709\u91cb\u7fa9\u6210\u4efd\u6240 \u4f54\u767e\u5206\u6bd4\u6703\u50be\u5411\u4e00\u5b9a\u7684\u7d44\u6210\u6bd4\uf961)\u3002\u6700\u5f8c\u5728\u524d\u5148\u7684\u7814\u7a76\u4e2d\u4ea6\u5efa\u8b70\u591a\u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u503c\uff0c\u53ef\u4ee5 \u53d6\u7528\u7b2c\u56db\u968e\u5c64\u7684\u8a9e\u7fa9\u95dc\uf997\u7a0b\ufa01\u9032\ufa08\u8a0e\uf941(\u5373\u53cd\u8986\u9032\ufa08\u91cb\u7fa9\u8655\uf9e4\u81f3 X 1 \u3001X 2 \u3001X 3 \u3001X 4 \uff0c\u4e26\u5c07\u6240 \u6709\u7d50\u679c\u52a0\u7e3d\u8a08\u6b21)\u3002 3. \u300a\u540c\u7fa9\u8a5e\u8a5e\uf9f4(\u64f4\u5c55\u7248) \u300b \u54c8\u723e\u6ff1\u5de5\u696d\u5927\u5b78\u4fe1\u606f\u6aa2\uf96a\u7814\u7a76\u5ba4 (HIT IR Lab) \u6240\u63d0\u4f9b\u7684\u516c\u958b\u7248\u672c\u300a\u64f4\u5c55\u7248\u300b\uff0c\u662f\u6574\uf9e4\u81ea \u300a\u8a5e\uf9f4\u300b(\u6885\u5bb6\u99d2\u7b49\uff0c1983)\uff0c\u9664\uf9ba\u522a\u9664\u820a\u8a5e\u8207\u7f55\u7528\u8a5e\u5916\uff0c\u4e26\u4f9d\u65b0\u805e\u8a9e\uf9be\u52a0\u5165\u5e38\u7528\u65b0\u8a5e\u3002 \u5728\u6885\u7248\u7684\u300a\u8a5e\uf9f4\u300b\u4e2d\uff0c\u6536\uf93f\uf92d\u81ea\u8a5e\u7d20\u3001\u8a5e\u7d44\u3001\u6210\u8a9e\u3001\u65b9\u8a00\u8a5e\u8207\u53e4\u8a9e\u7b49\u8a5e\u5171\u4e94\u842c\u4e09\u5343\u591a\u8a5e \u5f59\uf969\uff0c\u4e26\u4e14\u4f9d\u7167\u540c\u7fa9\u8a5e\u5206\uf9d0\u6db5\u7fa9\u6709\u7cfb\u7d71\u5730\u5340\u5206\u70ba\u4eba\u3001\u7269\u3001\u6642\u9593\u3001\u7a7a\u9593\u3001\u62bd\u8c61\u4e8b\u7269\u3001\u7279\u5fb5\u3001 \u52d5\u4f5c\u3001\u5fc3\uf9e4\u6d3b\u52d5\u3001\u73fe\u8c61\u8207\uf9fa\u614b\u3001\u95dc\uf997\u3001\u8a9e\u52a9\u3001\u656c\u8a9e\u7b49\u5341\u4e8c\u7d44\u5927\uf9d0(\u4ee5 A \u5230 L \u6a19\u8a18\uf9d0\u5225\u7684 
\u7b2c\u4e00\u4f4d\u82f1\u6587\u5b57)\u4ee5\u53ca\uf974\u5e72\u4e2d\uf9d0\u8207\u5c0f\uf9d0\u3002\u540c\uf9d0\u578b\u8a5e\u8a9e\u4f9d\u7167\u300c\u76f8\u5c0d\u3001\u6bd4\u8f03\u300d\u7684\u6392\u5e8f\u539f\u5247(\u6885\u5bb6 \u99d2\u7b49\uff0c1983)\u4f9d\u540c\u7fa9/\u8fd1\u7fa9\u7a0b\ufa01\u5728\u540c\uf9d0\u578b\u4e2d\uff0c\u55ae\ufa08\u8a5e\u8a9e\u7531\u5de6\u81ea\u53f3\u6392\uf99c\uff0c\u8a5e\u8a9e\u6240\u5c6c\uf9d0\u5225\u8207\uf99c \u8209\u4f4d\u7f6e\u5247\u96b1\u542b\u6709\u4f5c\u8005\u5011\u7684\u5de7\u601d\u3002\u300a\u64f4\u5c55\u7248\u300b\u5c0d\u539f\u59cb\u7684\u5206\uf9d0\u4e5f\u64f4\u5c55\u5230\u4e94\u5c64\uff0c\u5176\u4e2d\u52a0\u5165\u300c\u76f8 \u7b49\u3001\u540c\u7fa9\u300d(=)\u3001\u300c\uf967\u7b49\u3001\u540c\uf9d0\u300d(#)\u53ca\u300c\u81ea\u6211\u5c01\u9589\u3001\u7368\uf9f7\u300d(@)\u7b49\u76f8\u95dc\u6db5\u7fa9\u3002 \u8868 1.\u300a\u540c\u7fa9\u8a5e\u8a5e\uf9f4(\u64f4\u5c55\u7248)\u300b\uf9b5 Bp20B03=\u62db\u5b50\u5e4c\u5b50\u5e02\u62db Dd15A09=\u5e4c\u5b50\u62db\u724c\u724c\u5b50\u65d7\u865f\uf90a\u5b57\u62db\u724c Aa01B03#\uf97c\u6c11\u9806\u6c11 Bg02B07#\u8d85\u8072\u6ce2\u4f4e\u8072\u6ce2\u8072\u6ce2 Aa01C05@\u773e\u5b78\u751f Bg03A01@\u706b \u5728\u8868 1 \u4e2d\u53ef\u770b\u5f97\u51fa\uff0c\u300a\u64f4\u5c55\u7248\u300b\u90fd\u4fdd\uf9cd\uf9ba\u5206\uf9d0\uf9d0\u5225\u3001\u5b57\u5f59\u53ca\u540c\u7fa9\u8a5e\u5f59\uff0c\u4e26\u6c92\u6709\u91dd\u5c0d \u8a72\uf9d0\u5225\u7d66\u4e88\u660e\u78ba\u7684\uf9d0\u5225\u6db5\u7fa9\u5b9a\u7fa9\uff0c\u4ea6\u6c92\u6709\u5c0d\u6240\uf9d0\u5225\u4e2d\u7684\u8a5e\u5f59\u7d66\u4e88\u660e\u78ba\u5b9a\u7fa9\u3002\u4ee5 Bp20B03= \uf92d\uf96f\u660e\uff0c\u96d6\u4f9d\u300a\u8a5e\uf9f4\u300b\u7de8\u64b0\u539f\u5247\u2500\u540c\u7fa9\u8a5e\u5728\u524d\u3001\u8fd1\u7fa9\u8a5e\u5728\u5f8c\u2500\uf96f\u660e\"\u62db\u5b50\uff02\u8207 Bp20B03= 
\u61c9\u5c6c\u540c\u7fa9\uff0c\u5728\u300a\u570b\u8a9e\u8fad\u5178\u300b\u4e2d\u5206\u70ba\uf9d1\u500b\u91cb\u7fa9\uff0c\u5176\u5206\u5225\u70ba\u300c\u62db\u724c\u3001\u5ee3\u544a\u3001\u6d77\u5831\u300d \u3001 \u300c\u9580\u7968\u3002\u300d\u3001 \u300c\u4ea6\u7a31\u70ba\u82b1\u62db\u3001\u62db\u5152\u300d\u3001\u300c\u6b7b\u5211\u72af\u5c31\u5211\u6642\uff0c\u63d2\u65bc\u80cc\u5f8c\u7684\u7d19\u6a19\uff0c\u7528\uf92d\u63ed\u793a\u72af\u4eba\u7684\u7f6a\uf9fa\u3001\u59d3 \u540d\u3002\u300d\u3001\u300c\u96a8\u98a8\u62db\u5c55\u7684\u9577\u5e03\uf9a6\u5b50\u3002\u300d\u3001\u300c\u773c\u775b\u3002\u591a\u7528\u65bc\u6c5f\u6e56\u4eba\u7269\u9593\u3002\u300d\u3002\u800c\u6392\u5e8f\u5728\u7b2c\u4e8c \u500b\u8a5e\u5e4c\u5b50\uff0c\u5728\u300a\u570b\u8a9e\u8fad\u5178\u300b\u4e2d\u5206\u70ba\u4e8c\u500b\u91cb\u7fa9\uff0c\u5206\u5225\u70ba\u300c\u639b\u5728\u5e97\u92ea\u9580\u5916\uff0c\u7528\uf92d\u62db\u5fa0\u9867\u5ba2\u7684 \u62db\u724c\u3002\u300d\u8207\u300c\u8868\u73fe\u5728\u5916\u7528\u4ee5\u8499\u853d\u4ed6\u4eba\u7684\u8a00\ufa08\u3002\u300d\uff1b\u5e02\u62db\u7684\u91cb\u7fa9\u50c5\u53ea\u6709\u300c\u5546\u5e97\u9580\u5916\u6a19\u793a\u5176 \u540d\u865f\u53ca\u6240\u8ce3\u8ca8\u7269\u7684\u62db\u724c\u6216\u6a19\u8a8c\u3002\u300d\u3002\u96d6\u7136\u5206\uf9d0\uf9d0\u5225\u662f\u5c6c B \uf9d0(\u7269)\uff0c\u4f46\u5f9e\u5206\uf9d0\u67b6\u69cb\u6216\u6bd4\u8f03 \u540c\uf9d0\u578b\u8a5e\u7d44\uff0c\u4ea6\u7121\u6cd5\u5f9e\u300a\u64f4\u5c55\u7248\u300b\u7684\u5206\uf9d0\u4e4b\u4e2d\u5f97\u77e5\u5728 Bp20B03=\u662f\u5c6c\"\u62db\u5b50\uff02\u7684\uf9d1\u500b\u91cb \u7fa9\u4e2d\u90a3\u4e00\u500b\u5b9a\u7fa9\u3002\u6b64\u5916\uff0c\u4e00\u8a5e\u591a\u7fa9(Homographs)\u5728\u300a\u64f4\u5c55\u7248\u300b\u4e4b\u4e2d\u5247\u6703\u5206\uf99c\u5728\uf967\u540c\u7684\uf9d0 \u57fa\u65bc\u5b57\u5178\u91cb\u7fa9\u95dc\uf997\u65b9\u6cd5\u7684\u540c\u7fa9\u8a5e\u6982\uf9a3\u64f7\u53d6: 41 \u4ee5\u300a\u540c\u7fa9\u8a5e\u8a5e\uf9f4(\u64f4\u5c55\u7248)\u300b\u70ba\uf9b5 
\u5225\u4e4b\u4e2d\uff0c\u5982\u524d\u8ff0\u7684\"\u62db\u5b50\uff02\u5247\u5206\u5225\u88ab\u6b78\u5728 Bp20B03=(B \uf9d0\uff0c\u7269)\u8207 Dk15A03= (D \uf9d0\uff0c\u62bd\u8c61 \u4e8b\u7269)\u4e4b\u4e2d\u3002\u6b64\u5206\uf9d0\u7d50\u679c\uf967\u50c5\u9032\ufa08\u540c\u7fa9\u8a5e\u6b78\uf9d0\u6642\u6703\u9020\u6210\u6a21\u7cca\uff0c\u4ea6\u6709\u53ef\u80fd\u6703\u5728\u5206\u6790\u6587\u672c\u6642\u9020 \u6210\u6240\u8868\u9054\u7684\u7fa9\u6db5\u8fa8\u5225\u932f\u8aa4\u3002\u7531\u65bc\u300a\u8a5e\uf9f4\u300b\u50c5\u5c07\u8a5e\u5f59\u9ad8\ufa01\u6982\u5316\u5f8c\uff0c\u518d\u5c07\u8a5e\u5f59\u7f6e\u653e\u65bc\u76f8\u5c0d\u7684 \uf9d0\u5225\u4e4b\u4e2d(\u9b91\u514b\u6021\uff0c1983)\uff0c\u96d6\u7136\u7d93\u904e\u589e\u3001\u522a\u3001\u4fee\u6539\u4e4b\u5f8c\u5df2\u8f03\u7b26\u5408\u6642\u4e0b\u7528\u8a9e\uff0c\u4f46\u9019\u6a23\u4eba\u70ba \u7684\u76f8\u5c0d\u5206\uf9d0\u539f\u5247\u5f88\u96e3\u8b93\u5f8c\u4eba\u628a\u7814\u7a76\u7684\u8a9e\uf9be\u6b78\u7d0d\u5230\u300a\u8a5e\uf9f4\u300b\u5206\uf9d0\u4e4b\u4e2d\u3002 4. 
\u540c\u7fa9\u6982\uf9a3\u7684\u76f8\u95dc\u7814\u7a76 \u5728\u8a0e\uf941\u8a5e\u5f59\u7684\u540c\u7fa9\u6982\uf9a3\u7684\u7814\u7a76\u4e0a\uff0c\u5927\u6982\u53ef\u5340\u5206\u70ba\uf978\u7a2e\u9762\u5411\u9032\ufa08\uff1a\u8a5e\u7fa9\u8a13\u8a41\u8207\u6a5f\u5668\u5b78\u7fd2\u3002 \u8a5e\u7fa9\u8a13\u8a41\u4e2d\u7684\u540c\u7fa9\u76f8\u8a13\u5de5\u4f5c\u5373\u662f\u5728\u5efa\uf9f7\u540c\u7fa9\u8a5e\u95dc\u4fc2\uff0c\u5982\u300a\u723e\u96c5\u2027\u91cb\u8a00\u300b\uff1a\u300c\u5bb5\uff0c\u591c\u4e5f\u3002\u300d \u8207\u300a\uf96f\u6587\u300b\uff1a\u300c\uf988\u723e\uff0c\u7336\u9761\uf988\u4e5f\u3002\u300d\u3002\u7136\u800c\u4f7f\u7528\u4eba\u5de5\u65b9\u6cd5\u9032\ufa08\u8a5e\u7fa9\u8a13\u8a41\u6642\uff0c\u5982\u300a\u723e\u96c5\u300b \u5728\u8a13\u91cb\u6563\u4f48\u65bc\u5404\u8655\u7684\u8cc7\uf9be\u4e2d\uff0c\u96e3\u4ee5\u5c0b\u627e\u5728\u540c\u4e00\u57fa\u790e\u6982\uf9a3\u4e0a\u69cb\u7bc9\u7684\u540c\u7fa9\u8a5e\u5f59\uff0c\u4f7f\u8a13\u8a41\u5de5\u4f5c \u9700\u8981\u5c08\u696d\u4eba\u58eb\u7d93\u904e\u9577\u6642\u9593\uf94f\u7a4d (\u738b\u5efa\u8389\uff0c2012)\u3002\u6b64\u5916\uff0c\u5728\u5df2\u7531\u524d\u4eba\u6b78\uf9d0\u5b8c\u6210\u7684\u540c\u7fa9\u8a5e\u7d44 \u5167\u6db5\u4e2d\uff0c\u4ea6\u662f\u7531\u65bc\u6c92\u6709\u660e\u78ba\uf96f\u660e\u8a5e\u7d44\u5167\u5bb9\uff0c\u5f9e\u800c\u4f7f\u5f8c\u4eba\u6703\u6709\uf9e4\u89e3\u4e0a\u7684\u5dee\uf962\u3002\u5982\u8303\u7d05\uf988", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." 
}, { "text": "(2011) \u5c0d\"\u62dc\"\u3001\"\u63d6\"\u3001\"\u7a3d\u9996\"\u3001\"\u9813\u9996\"\u3001\"\u7a3d\u9859\"\u3001\"\u62dc\u7a3d\u9996\"\u7b49\u8a5e\u7fa4\uff0c\u4ee5\u300a\u5de6\u50b3\u300b\u70ba\u8cc7\uf9be\u5206 \u6790\u5728\u540c\u7fa9\u8a5e\u7fa4\u4e2d\u7684\uf962\u540c\u3002\u800c\u5728\u73fe\u4ee3\u8a5e\u5f59\u7684\u7814\u7a76\u4e4b\u4e2d\uff0c\u7531\u65bc\u7121\u53e4\u6587\u53ef\u8a13\uff0c\u5f9e\u800c\u4f7f\u7528\u8a9e\uf9be\u5eab \u65b9\u6cd5\u9032\ufa08\u8a5e\u5f59\u540c\u7fa9\u6bd4\u8f03\u3002\u5168\u6587\u5955\u8207\u90ed\u8056\uf9f4(2012)\u5c0d\u73fe\u4ee3\u6f22\u8a9e\u4e2d\u7684\"\u8a08\u7a0b\uf902\"\u3001\"\u51fa\u79df\u6c7d\uf902\"\u3001 \"\u8a08\u7a0b\uf902\"\u3001\"\u5fb7\u58eb\"\u540c\u7fa9\u8a5e\u7fa4\u7684\uf92d\u6e90\u7fa9\u6db5\u8207\u7af6\u722d\u9032\ufa08\u8a0e\uf941\uff0c\u4e26\u8f14\u4ee5\uf963\u5927 CCL \u8a9e\uf9be\u5eab\u9a57\u8b49\uff0c \u4ee5\u8a0e\uf941\u5916\uf92d\u8b6f\u8a9e\u5b58\u5728\u7684\u5206\u4f48\u60c5\u6cc1\u8207\u5176\u9650\u5236\u3002 \u7136\u800c\u8868 3 \u8207\u8868 4 \u4e4b\u9593\u4ea6\u53ef\u5f9e\u8207\u5176\u5b83\u540c\u7d44\u8a5e\u5f59\u7684\u7b2c\u56db\u968e\u91cb\u7fa9\u8a9e\u7fa9\u95dc\uf997\u6700\u5927\u7684\u5e73\u5747\u503c\uff0c \u8207\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59\u6700\u5927\u6db5\u84cb\uf969\uf92d\u6c7a\u5b9a\u6700\u9069\u5408\u4ee3\u8868\u7684\u8a5e\u5f59\u3002\u5728\u8868 3 \u4e2d\"\u5e4c\u5b50\uff02\u91cb\u7fa9\u70ba\"(1) \u8868 \u73fe\u5728\u5916\u7528\u4ee5\u8499\u853d\u4ed6\u4eba\u7684\u8a00\ufa08\u3002 (2) \u639b\u5728\u5e97\u92ea\u9580\u5916\uff0c\u7528\uf92d\u62db\u5fa0\u9867\u5ba2\u7684\u62db\u724c\u3002\uff02\u5728\u8868 4 \u4e2d\"\u62db \u724c\uff02\u91cb\u7fa9\u70ba\"(1) \u5546\u5e97\u6a5f\u69cb\u4f5c\u70ba\u6a19\uf9fc\u7684\u724c\u5b50\u3002 (2) \u6f14\u85dd\u4eba\u54e1\u6216\u5718\u9ad4\u63ed\u793a\u5176\u6240\u737b\u6280\u85dd\u6709\u95dc \u4e8b\u9805\u7684\u724c\u5b50\u3002(3) \u62ff\u624b\u7684\uff0c\u53ef\u4f5c\u70ba\u6a19\uf9fc\u7684\u3002(4) 
\u6bd4\u55bb\u9a19\u4eba\u7684\u5e4c\u5b50\u3002\uff02\u5f9e\u91cb\u7fa9\u4e4b\u4e2d\uff0c\u6211\u5011\u53ef \u4ee5\uf9e4\u89e3\u5171\u6709\u91cb\u7fa9\u7684\u8655\uf9e4\u539f\u5247\u662f\u5c07\"\u62db\u724c-4\uff02\u4e2d\u6240\u51fa\u73fe\u7684\"\u5e4c\u5b50\uff02\u91cb\u7fa9\u6587\u5b57\u7d0d\u5165\u8a08\u7b97\u800c\u5f97 \u7684\u6700\u5927\u503c\uf92d\u95dc\uf997\uff0c\u4f46\"\u62db\u724c\uff02\u5c0d\u5176\u5b83 Dd15A09=\u540c\u7fa9\u8a5e\u7d44\u4e2d\u7684\u8a5e\u5f59\u95dc\uf997\u537b\uf967\u5f9e\"\u62db\u724c-4\uff02 \u800c\uf92d\uff0c\u800c\u662f\"\u62db\u724c-1\uff02\u6216\"\u62db\u724c-3\uff02\u3002\u53e6\u4e00\u65b9\u9762\uff0c\"\u5e4c\u5b50\uff02\u80fd\u8207 Dd15A09=\u540c\u7fa9\u8a5e\u7d44\u4e2d\u7684 \u57fa\u65bc\u5b57\u5178\u91cb\u7fa9\u95dc\uf997\u65b9\u6cd5\u7684\u540c\u7fa9\u8a5e\u6982\uf9a3\u64f7\u53d6: 47 \u4ee5\u300a\u540c\u7fa9\u8a5e\u8a5e\uf9f4(\u64f4\u5c55\u7248)\u300b\u70ba\uf9b5 \u8a5e\u5f59\u7522\u751f\u91cb\u7fa9\u95dc\uf997\u7684\uff0c\u50c5\u50c5\u53ea\u6709\"\u5e4c\u5b50-2\uff02\u91cb\u7fa9\u4e2d\uff0c\u8f03\u70ba\u660e\u78ba\u7684\"\u62db\u724c\uff02\u91cb\u7fa9\u800c\uf92d\uff0c\u56e0 \u6b64\u5728\u6b64\u4e00\u540c\u7fa9\u7fa4\u7d44\u4e4b\u4e2d\u5f97\u5230\u7684\u5e73\u5747\uf969\u503c\u5c31\u6703\u4f4e\u8a31\u591a\u3002\u900f\u904e\u4e0a\u8ff0\u7684\uf96f\u660e\u6211\u5011\u53ef\u4ee5\u77e5\u9053\uff0c\u96d6 \u7136\u6700\u9069\u5408\u540c\u7fa9\u8a5e\u7d44\u7684\u4ee3\u8868\u8a5e\u5f59\u5fc5\u9700\u662f\u6eff\u8db3\u91cb\u7fa9\u8a9e\u7fa9\u95dc\uf997\u6700\u5927\u503c\u5e73\u5747\u503c\u8207\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59\u6700 \u5927\u6db5\u84cb\uf969\u3002\u4f46\u8981\u540c\u6642\u9054\u5230\u9019\uf978\u9805\u689d\u4ef6\uff0c\u5247\u8a72\u8a5e\u5f59\u5fc5\u9808\u5305\u6db5\u7684\u5404\u968e\u5c64\u91cb\u7fa9\u6587\u5b57\u96c6\u5408\uff0c\u4e14\u5927 
\u591a\u90fd\u8981\u80fd\u51fa\u73fe\u5728\u540c\u4e00\u8fad\u7d44\u4e4b\u4e2d\u5176\u5b83\u8a5e\u5f59\u7684\u91cb\u7fa9\u6587\u5b57\uf9e8\uff0c\u4e5f\u5c31\u662f\u8981\u300c\u88ab\u7528\uf92d\uf96f\u660e\u8fad\u5178\uf9e8\u7684 \u67d0\u500b\u8fad\u689d\u300d\u6642\uff0c\u624d\u6703\u53ef\u4ee5\u6210\u70ba\u8a72\u8fad\u7d44\u7684\u6700\u5408\u9069\u540c\u7fa9\u8a5e\u5f59\u3002 \u5728\u8a08\u7b97\u4e0a\uff0c\u4e26\u975e\u4ee5\u6df1\u5c64\u91cb\u7fa9\u95dc\uf997(\u5982\u7b2c\u516b\u968e\u5c64)\u6240\u80fd\u53d6\u5f97\u7684\u4e00\u822c\u6027\u6982\uf9a3\u8a5e\u5f59\u9032\ufa08\u8a08\u7b97\uff0c \u800c\u662f\u8a5e\u5c64\u6982\uf9a3\u5c64\u7d1a\u5728\u6709\u9650\ufa01(\u672c\u7814\u7a76\u4ee5\u7b2c\u56db\u968e\u5c64\u9032\ufa08\u8a08\u7b97)\u7684\u6982\u5316\u64f4\u5145\u4e4b\u5f8c\uff0c\u4ecd\u8981\u6eff\u8db3\u524d \u8ff0\u7684\u689d\u4ef6\u3002\u6b64\u5916\uff0c\u8a5e\u5f59\u9593\u662f part-and-whole \u7684\u96b1\u55bb\u95dc\u4fc2\u7684\"\uf90a\u5b57\u62db\u724c\uff02\u8207\"\u62db\u724c\uff02\uf978\u8a5e\uff0c \u5728\u300a\u8a5e\uf9f4\u300b\u7684\u4f9d\u540c\u7fa9/\u8fd1\u7fa9\u7a0b\ufa01\u5728\u540c\uf9d0\u578b\u4e2d\u7531\u5de6\u81f3\u53f3\u6392\uf99c\u539f\u5247\u4e4b\u4e0b\uff0c\u6211\u5011\u4e5f\u53ef\u4ee5\u5f97\u51fa\u76f8\u5c0d \u65bc\"\u62db\u724c\uff02\u4e00\u8a5e\uff0c\"\uf90a\u5b57\u62db\u724c\uff02\u53ef\u8996\u70ba\u8fd1\u7fa9\u8a5e\u800c\u975e\u540c\u7fa9\u95dc\u4fc2\u3002\u9019\u9805\u7d50\u679c\u4e5f\u53ef\u4ee5\u5f9e\u8868 4 \u4e2d \u770b\u51fa\"\u62db\u724c\uff02\u4e00\u8a5e\u53ef\u4ee5\u5b8c\u5168\u53ef\u4e3b\u5bb0\u6b64\u540c\u7fa9\u8a5e\u7d44\u5171\u6709\u91cb\u7fa9\uff0c\u800c\"\uf90a\u5b57\u62db\u724c\uff02\u537b\u7121\u6cd5\u4e3b\u5bb0\u770b \u51fa (\u56e0\u70ba\uf978\u8a5e\u5f59\u5728\u9ad8\ufa01\u91cb\u7fa9\u76f8\u95dc\u7684\u689d\u4ef6\u4e0b\uff0c\u61c9\u540c\u6642\u80fd\u4e3b\u5bb0\u8a72\u540c\u7fa9\u8a5e\u7d44\u3002\uf974\u5426\uff0c\u5728\u4e00\u8a5e\u591a 
\u7fa9\u7684\u689d\u4ef6\u4e4b\u4e0b\uff0c\u5247\u53ef\u63a8\uf941\uf978\u8005\u76f8\u95dc\uf997\u7684\u91cb\u7fa9\u4e26\u975e\u6b64\u540c\u7fa9\u8a5e\u7d44\u7684\u4e3b\u5bb0\u91cb\u7fa9)\u3002 \u6700\u5f8c\uff0c\u5f9e\u5171 \u540c\u64c1\u6709\u7684\u91cb\u7fa9\u6b0a\u91cd Top 20 \u4e2d\uff0c\uf978\u540c\u7fa9\u8a5e\u7d44\u6240\u8868\u73fe\u7684\u91cb\u7fa9\u8a5e\u5f59\u4ea6\u662f\uf967\u540c\u7684\u3002\u53ef\u4ee5\u770b\u51fa\uff0c Bp20B03=\u4e4b\u4e2d\u5f9e\u8f03\u5177\u9ad4\u7684\"\u62db\u724c\uff02\uff0c\u800c\u5728 Dd15A09=\u5247\u5f9e\u62bd\u8c61\u7684\"\u724c\u5b50\u3001\u8868\u793a\uff02\u7b49\u6982\uf9a3 \u6db5\u610f\uff0c\u6b64\u8207\u300a\u8a5e\uf9f4\u300b\u4e4b\u5206\uf9d0\u898f\u5247\u4e0a(B \uf9d0\u70ba\u7269\u3001D \uf9d0\u662f\u62bd\u8c61\u4e8b\u7269)\u662f\u76f8\u540c\u7684\u3002 \u96d6\u7136\u540c\u7fa9\u8a5e\u7d44\u7d93\u904e\u4e0a\u8ff0\u8a08\u7b97\u53ef\u4ee5\u5f97\u5230\u8a5e\u7d44\u4e4b\u4e2d\u6700\u9069\u5408\u7528\uf92d\u8868\u9054\u7684\u540c\u7fa9\u8a5e\u5f59\uff0c\u8207\u5171\u540c \u64c1\u6709\u91cb\u7fa9\u6587\u5b57\u7684\u6b0a\u91cd\u6bd4\uf9b5(\u5728\u6b64\u50c5\uf99c\u51fa Top 20)\uff0c\u4f46\u6b64\u65b9\u6cd5\u4ea6\u6709\u9650\u5236\u3002\u7531\u65bc\u6b64\u65b9\u6cd5\u9700\u900f\u904e \u8fad\u5178\u53cd\u8986\u5c0d\u8a5e\u5f59\u9032\ufa08\u91cb\u7fa9\u3001\u65b7\u8a5e\u518d\u64f4\u5145\u91cb\u7fa9\u6587\u5b57\uff0c\u6240\u4ee5\u7576\u7121\u6cd5\u53d6\u5f97\u91cb\u7fa9\u5167\u5bb9\u6642\uff0c\u5247\u6703\u9020 \u6210\u7121\u6cd5\u8a08\u7b97\u91cb\u7fa9\u95dc\uf997\u7684\u7a98\u5883\u3002\u4ee5\u4e0b\uf96b\u8003\u7dad\u57fa\u767e\u79d1\u70ba\uf9b5\u3002\u63cf\u8ff0\u300c\u8a13\u8a41\u5b78\u300d\u5b9a\u7fa9\u4e00\u7d44\u540c\u7fa9\u8a5e \"\u8a13\u8a41\u8a13\u6545\u6545\u8a13\u53e4\u8a13\u89e3\u6545\u89e3\u8a41\uff02\uff0c\u5176\u8a08\u7b97\u7d50\u679c\u5982\u4e0b\u8868 5\u3002\u5f9e\u8868\u4e2d\u53ef\u4ee5\u770b\u51fa\"\u8a13\u6545\uff02\u8207\"\u6545 
\u8a13\uff02\u4e09\u8a5e\u5728\u6240\u4f7f\u7528\u7684\u6559\u80b2\u90e8\u300a\u570b\u8a9e\u8a5e\u5178\u300b\u4e2d\u627e\uf967\u5230\u91cb\u7fa9\uff0c\u56e0\u6b64\u7121\u6cd5\u8a08\u7b97\u91cb\u7fa9\u95dc\uf997\uff0c\u6b64\u70ba \u7f3a\u9ede\u4e00\u3002\u800c\u5f9e\u5176\u5b83\u4e09\u500b\u5b57\u7684\u56db\u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\u4e4b\u5f8c\u7684\u7d50\u679c\uff0c\u6700\u5927\u91cb\u7fa9\u95dc\uf997\u6587\u5b57\u53ef\u4ee5\u4f7f \u7528\"\u89e3\u8a41\"\u4ee3\u8868\uff0c\u56e0\u70ba\u7d93\u904e\u8a08\u7b97\u4e4b\u5f8c\u6700\u5927\u5e73\u5747\u503c\u662f 0.741\u3002\u9019\u9805\uf969\u503c\u8207\"\u89e3\u6545\uff02\u7684 0.740 \u4e4b\u9593\u96d6\u7136\u53ea\u5dee\u5343\u5206\u4e4b\u4e00\uff0c\u4f46\u4ecd\u7121\u6cd5\u76f4\u63a5\u5c07\"\u89e3\u8a41\uff02\u8207\"\u89e3\u6545\uff02\u8996\u70ba\u76f8\u540c\u8a5e\u5f59\u6216\u6982\uf9a3\u7fa9\u6db5\u76f8 \u540c(\u56e0\u70ba\u672c\u7814\u7a76\u6240\u7528\u65b9\u6cd5\u50c5\u6b62\u65bc\u8a08\u95b1\u91cb\u7fa9\u800c\u975e\u8a5e\u5f59\u6982\uf9a3)\u3002\u6700\u5f8c\u91cb\u7fa9\u6587\u5b57\u6bd4\u91cd\u524d\u4e8c\u5341\u500b\u5b57 \u8a5e\u4ea6\u80fd\u8868\u73fe\u51fa\u6b64\u540c\u7fa9\u5b57\u7d44\u7684\u5171\u6709\u91cb\u7fa9\u6587\u5b57\uff0c\u5982\uff1a\"\u6307\uff02\u3001\"\u89e3\u91cb\uff02\u3001\"\u53e4\u4ee3\uff02\u3001\"\u6587\u5b57\uff02\u3001 \"\uf96f\u660e\uff02\u3001\"\u5206\u6790\uff02\u7b49\uff0c\u4f46\u537b\uf967\u80fd\u7d44\u7e54\u4e26\u67b6\u69cb\u6210\u4e00\uf906\u7cbe\u7c21\u7684\u540c\u7fa9\u8a5e\u91cb\u7fa9(\u524d\u6587\u5df2\u8a0e\uf941\u904e)\u3002 \u5118\u7ba1\u6b64\u65b9\u6cd5\u9084\u6709\u8a31\u591a\u5c1a\u5f85\u6539\u5584\u7684\u7f3a\u9677\uff0c\u4f46\u80fd\u63d0\u5230\u8a5e\u5f59\u4e4b\u9593\u5ba2\u89c0\u7684\u6bd4\u8f03\u57fa\u6e96\u8207\u53ef\u4f9b\uf96b\u8003\u7684 
\u91cb\u7fa9\u6587\u5b57\u6b0a\u91cd\uff0c\u76f8\u5c0d\u4eba\u5de5\u8a13\u8a41\u65b9\u6cd5\uff0c\u4ecd\u80fd\u5728\u773e\u591a\u7684\u8cc7\uf9be\u4e4b\u4e2d\u63d0\u4f9b\u5feb\u901f\u540c\u7fa9\u95dc\uf997\uf96b\u8003\u4f9d\u64da\u3002 \u8d99\u9022\u6bc5\u3001\u937e\u66c9\u82b3 \u8868 \u8d99\u9022\u6bc5\u3001\u937e\u66c9\u82b3 20 \u4e00 \u4f75 \u4f9d \u5e8f \uf90f \uf99c \u65bc \u6700 \u5f8c \uf91d \u4f4d \u4e4b \u4e2d \u3002\u5f9e\u8868 6 \u89c0\u5bdf\u5728\uf99c\u8868\u7684\u8cc7\uf9be\uff0c\u5176\u4e2d \u4e00\uf91d\u7d50\u679c\u70ba\u8a5e\u5f59\u5728\u8868\u9054\u8a72\u540c\u7fa9\u8a5e\u7d44\u5b57\u5f59 ci \u6db5\u84cb\uf961\u9ad8\uff0c\u4e14\u5176\u5e73\u5747\u503c\u4e5f\u662f\u6700\u9ad8\u7684\u7d50\u679c\u4e2d\uff0c Ab01C01= \u662f\u4ee5\"\u7537\uf981\uff02\u7b2c\u56db\u968e\u91cb\u7fa9\u8a9e\u7fa9\u95dc\uf997\u503c\u70ba 0.87\u3001Bf06B01=\u70ba\"\u96fb\uff020.90\u3001", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "('VA', 'VAC', 'VB', 'VC', 'Vi', 'Vt', 'VCL', 'VD', 'VE', 'VF', 'VG', 'VH', 'VHC', 'VI', 'VJ', 'VK', 'VL', 'V_2') \u8207\u540d\u8a5e\u8a5e\uf9d0 ('Na', 'Nb', 'Nc', 'Ncc', 'Nd', 'N') \u9032\ufa08 \u91cb\u7fa9\u95dc\uf997\u7684\u8a08\u7b97\u3002\u5728\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\u539f\u5247\u4e0a\uff0c\u6211\u5011\u4fee\u6b63\u524d\u591a\u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\u539f\u5247\u3002\u7531\u65bc\u5148 \u524d\u63d0\u51fa\u7684\u591a\u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\u539f\u5247 (\u8d99\u9022\u6bc5\u8207\u937e\u66c9\u82b3\uff0c2011)\u5728\u9032\ufa08\u91cb\u7fa9\u968e\u5c64\u64f4\u5c55\u6642\uff0c\u6703 \u5c40\u9650\u5728\u7279\u5b9a\u7684\u90e8\u9996\u7684\u6982\uf9a3\u4e2d\uff0c\u4f7f\u968e\u5c64\u8f03\u9ad8\u7684\u91cb\u7fa9\u6b0a\u91cd\u8207\u4f4e\u968e\u5c64\u6b0a\u91cd\u76f8\u540c\u3002\uf967\u540c\u65bc\u5148\u524d\u7684 \u8a08\u7b97\u539f\u5247\uff0c\u5728\u672c\u7814\u7a76\u4e2d\u7684\u540c\u7fa9\u8a5e\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\u662f\u4ee5\u7121\u7279\u5b9a\u6982\uf9a3\u7684\u65b9\u5411\u64f4\u5c55\uff0c\u56e0\u6b64\u6df1\u968e\u5c64 \u91cb\u7fa9\u7684\u6982\uf9a3\u6b0a\u91cd\u61c9\u6bd4\u8f03\u4f4e\u968e\u5c64\u91cb\u7fa9\u6b0a\u91cd\u4f4e(\u5373\u91cb\u7fa9\u6b0a\u91cd\u8207\u968e\u5c64\u6df1\ufa01\u6210\u53cd\u6bd4)\u3002\u70ba\uf9ba\u4f7f\u7b2c\u4e00 \u968e\u5c64(\u76f4\u63a5)\u91cb\u7fa9\u6587\u5b57\u7684\u6b0a\u91cd\u80fd\u9ad8\u65bc\u7b2c\u4e8c\u968e\u5c64(\u91cb\u7fa9\u6587\u5b57\u7684\u91cb\u7fa9)\uff0c\u6211\u5011\u5247\u5c07\u6bcf\u6b21\u7684\u91cb\u7fa9\u8a08\u7b97 \u904e\u7a0b\u4e2d\uff0c\uf94f\u8a08\u4f4e\u968e\u5c64\u4e2d\u7684\u8a08\u6b21\uf969\u503c\uff0c\u4f7f\u7b2c\u4e00\u968e\u5c64\u4e2d\u51fa\u73fe\u7684\u91cb\u7fa9\u8a5e\u5f59\u6b0a\u91cd\uff0c\u6703\u8f03\u7b2c\u4e8c\u968e\u5c64 
\u4e2d\u51fa\u73fe\u7684\u91cb\u7fa9\u8a5e\u5f59\u91cd(\uf967\u4e00\u5b9a\u6210\u500d\uf969\u95dc\u4fc2\u3002\u56e0\u70ba\u5728\u7b2c\u4e00\u968e\u5c64\u4e2d\u51fa\u73fe\u8a5e\u5f59\uf967\u4e00\u5b9a\u53ea\u6703\u51fa\u73fe\u4e00 \u6b21\uff0c\uf967\u4e00\u5b9a\u5728\u6bcf\u4e00\u968e\u5c64\u90fd\u6703\u51fa\u73fe\u3002)\u3002\uf974 x, y \u70ba\uf978\u5f85\u6e2c\u8a5e\u5f59\uff0cX',Y' \u70ba\u5305\u6db5 x, y \uf978\u8a5e\u5f59\u8207\u5176 \u91cb\u7fa9\u7684\u5404\u5225\u96c6\u5408\uff0c\u5247\u7b2c n \u968e\u5c64\u4e4b\u91cb\u7fa9\u8a5e\u5f59\u5247\u70ba 1 1 ' ' n n n X X X + + = + \u4e14 1 1 ' ' n n n Y Y Y + + = + : , ( ) i i n i n i n x y i i n X Y CoAP x X = = = \u2032 \u2032 \u2032 = \u2032 \u2211 \u2211 \u2211 \u2229 (5) , ,", "eq_num": ", , , , ' ( )" } ], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "' ( ) ' 2 , 0 ' 1 ' ( ) ' ( ) n n x y x y n n x y x y n n x y x y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "- \u4e2d - - - - - 1.52% 1.67% 1.67% \u69cb\u6210 - - - 1.62% 1.60% 1.28% - - \u8a8d\uf9fc - - - - - 1.33% 1.52% 1.52% \u5448\u73fe - - 1.00% 1.62% 1.57% - - - \u6574\u9ad4 - - - - - - 1.46% 1.48% \u4eba - - - - - - 1.30% 1.58% \u67d0\u4e9b - - - - - - 1.26% 1.19% \u90e8\u4e0b - - - - - - 1.26% 1.18% \u79e9\u5e8f - - - - - - 1.26% 1.18% \u5c40\u90e8 - - - - - - 1.26% 1.18% \u90e8\u5c6c - - - - - - 1.26% 1.18% \u7a7a\u9593 - - - - - 1.29% - 1.07% \u7d0b\uf937 - - - - 1.60% - - - \u6307 - - - - - - - 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." }, { "text": "\u8a13\u6545 N/A N/A N/A N/A N/A N/A \u6545\u8a13 N/A N/A N/A N/A N/A N/A \u53e4\u8a13 0.814 N/A N/A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." 
}, { "text": "Cb06B01= \u70ba \" \u9644 \u8fd1 \uff02 0.82 \u3001 Dk16B01= \u70ba \" \u96fb \u5831 \uff02 0.94 \u3001 Fa04B01= \u70ba \" \u6367 \uff02 0.94 \u3001 Ga17B01=\u70ba\"\u611f\u52d5\uff020.89 \u3001Hb02C03=\u70ba\"\u958b\u706b\uff020.78\u3001Ig04B01=\u70ba\"\u5faa\u74b0\uff020.76\u3001 Ka21A01=\u70ba\"\u8a17\u8a5e\uff020.83\u3001 La04C01=\u70ba\"\u5c0d\uf967\u4f4f\uff02 0.98 \u7b49\uff0c\u9019\u4e9b\u8a5e\u5f59\u5728\u8868\u9054\u8a72\u8a5e\u7d44 \u4e0a\u5f9e\u5b57\u9762\u4e0a\u5373\u53ef\u4ee5\uf9ba\u89e3\u8a72\u540c\u7fa9\u8a5e\u7d44\u7684\u4e3b\u8981\u5305\u6db5\u7684\u6982\uf9a3\u3002\u81f3\u65bc\u7121\u6cd5\u5f97\u5230\u8f03\u9ad8\u91cb\u7fa9\u95dc\uf997\uf969\u503c \u7684\u540c\u7fa9\u8a5e\u7d44\uff0c\u5982 Ef06C01=\"\u7a7a\uff020.61 \u8207 Jd08A03=\"\u5931\u6389\uff020.47 \uf978\u8fad\u7d44\uff0c\u4e3b\u56e0\u70ba\u8a72\u540c\u7fa9 \u8a5e\u7d44\u4e2d\u5305\u62ec\uf9ba\u6982\uf9a3\u5c64\u7d1a\u5f88\u4e0a\u4f4d\u7684\u8a5e\u5f59(Ef06C01=\uf9e8\u7684\"\u7a7a\uff02\u4f7f\u5176\u8a72\u8a5e\u5f59\u5728\u91cb\u7fa9\u7684\u6db5\u84cb\ufa01 \u5f88\u9ad8)\uff0c\u4e14\u5728\u8a5e\u7d44\u4e4b\u4e2d\u52a0\u5165\u8a5e\u5f59\u7684\u91cb\u7fa9\u8207\u5176\u5b83\u8a5e\u5f59\u91cb\u7fa9\u7684\u6982\uf9a3\uf9b4\u57df\u4e4b\u9593\u4ea4\u96c6\u7a0b\ufa01\u8f03\u4f4e\u7684\u8fd1 \u7fa9\u8a5e\u5f59(\u5982 Jd08A03= \"\u596a\uff02\u6709\u516b\u7d44\u91cb\u7fa9\u70ba\"\u5f37\u53d6\uff02\u3001\"\u524a\u9664\u3001\u4f7f\u5931\u53bb\u3002\uff02\u3001\"\u722d\u53d6\u3002\uff02\u3001 \"\u505a\u6c7a\u5b9a\u3002\uff02\u3001\"\u932f\u904e\u3002\uff02\u3001\"\u885d\u904e\u3002\uff02\u3001\"\u8000\u773c\u3001\u7729\u76ee\u3002\uff02\u3001\"\u812b\uf94e\u3001\uf94e\u6389\u3002\uff02\uff0c\u6b64 \u516b\u7d44\u91cb\u7fa9\u8207 Jd08A03=\u540c\u7fa9\u8a5e\u5f59\u80fd\u5efa\uf9f7\u8f03\u9ad8\u91cb\u7fa9\u95dc\uf997\u7684\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59\u662f\"\u932f\u904e\u3002\uff02\uff1b\u5f9e\u800c \u4f7f 
Jd08A03=\u540c\u7fa9\u8a5e\u7d44\u4e2d\"\u5931\u53bb\uff02\u3001\"\u5931\u6389\uff02\u3001\"\u5931\u537b\uff02\u3001\"\u5931\uff02\u8207\"\u596a\uff02\u3001\"\u932f\u904e\uff02\u53ef \u5340\u5206\u70ba\uf978\u500b\u5b50\u8fad\u7d44\uff0c\u5f8c\u8005\u70ba\u524d\u8005\u7684\u8fd1\u7fa9\u8a5e\u3002)\u3002\u53e6\u5916\u8981\uf96f\u660e\u7684\u662f\uff0c\u5728\u8a5e\u5178\u4e4b\u4e2d\uff0c\u540c\u7fa9\u8a5e\u627e \uf967\u5230\u4efb\u4f55\u8a5e\u689d\u91cb\u7fa9\u5c31\u7121\u6cd5\u9032\ufa08\u95dc\uf997\u904b\u7b97\uff0c\u56e0\u6b64\u5728\u8cc7\uf9be\u5247\u6703\u5982\u4e0a\u8868 6 \u4e2d\u7684 Bq03C03= \u540c\u7fa9 \u8a5e\uf9d0\u4ee5\"| x | x |\"\u8868\u793a\uff1b\u540c\uf9e4\uff0c\uf974\u50c5\u53ea\u627e\u4e00\u500b\u8a5e\u5f59\u5247\u6703\u7121\u6cd5\u8a08\u7b97\u91cb\u7fa9\u95dc\uf997\uff0c\u5247\u4ea6\u7121\u6cd5\u6c7a\u5b9a\u6700 \u5927\u5e73\u5747\u91cb\u7fa9\u95dc\uf997\u8a5e\uff0c\u6545\u4ee5\"| x |\"\u8868\u793a\u3002\u800c\u5728\u6700\u5f8c\u4e00\uf91d\u7684\u8a5e\u7d44\u662f\uf90f\uf99c\u5171\u540c\u64c1\u6709\u7684\u91cb\u7fa9\u6587\u5b57\u6b0a\u91cd Top20 \u7684\u8a5e\u5f59\uff0c\u4e26 \u4f9d\u6b0a\u91cd\u9ad8\u4f4e\u9806\u5e8f\u6392\uf99c\uff0c\u96d6\u7136\uf967\ufa0a\u5f97\u5c31\u80fd\u69cb\u6210\u70ba\u53ef\uf95a\u7684\uf906\u5b50\uff0c\u5982\u8868 6 \u4e4b\u4e2d\u7684 Bf06B01=\"\u96fb\uff02 \u8207 Dk16B01= \"\u96fb\u5831\uff02\uf978\u8a5e\u7d44\u5728\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59 Top20 \u7684\u91cd\u8986\u7a0b\ufa01\u592a\u9ad8\uff0c\u56e0\u800c\u96e3\u4ee5\u5340\u5206\u3002 \u4f46\u53ef\u63d0\u4f9b\u8f14\u52a9\u8a6e\u91cb\u8a72\u540c\u7fa9\u8a5e\u7d44\uf901\u591a\u8cc7\u8a0a\uff0c\u5728\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59 Top20 \u7684\u4e2d\u5c31\u53ef\u4ee5\u5f88\u660e\u986f\u7684\u5340 \u5225\u51fa\u3002Ga17B01= \"\u611f\u52d5\uff02\u53ca\u5176\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59 Top20 \u4e2d\u7368\u6709\u7684\u516b\u500b\u8a5e\u5f59\"\u4f7f\uff02\u3001\"\u7269\uff02\u3001 
\"\u611f\u61c9\uff02\u3001\"\u7269\u9ad4\uff02\u3001\"\u500b\uff02\u3001\"\u73fe\u8c61\uff02\u3001\"\u96fb\uff02\u3001\"\u611f\u52d5\uff02\uff0c\u76f8\u5c0d\u6bd4\u8f03 Df01B01=\"\u611f \u89f8\uff02\u53ca\u5176\u7368\u6709\u7684\u516b\u500b\u5171\u6709\u91cb\u7fa9\u8a5e\u5f59\uff1a\"\u5fc3\uff02\u3001\"\u5167\uff02\u3001\"\u611f\uff02\u3001\"\u4e09\uff02\u3001\"\u4e4b\u4e00\uff02\u3001\"\u601d \u60f3\uff02\u3001\"\u5167\u5fc3\uff02\u3001\"\u4e0a\uff02\uff0c\u4ea6\u53ef\u4ee5\uf96f\u660e\u300a\u8a5e\uf9f4\u300b\u7684\u5927\u5206\uf9d0\u4e2d D \u5927\uf9d0\u6307\u662f\u62bd\u8c61\u4e8b\u7269\uff0c\u800c G \u5927\uf9d0\u662f\u5fc3\uf9e4\u7684\u7d50\u679c\u3002 \u7d9c\u5408\u524d\u8ff0\u7684\u8cc7\uf9be\u7d50\u679c\uff0c\u6211\u5011\u5c07\u300a\u540c\u7fa9\u8a5e\u8a5e\uf9f4-\u64f4\u5c55\u7248\u300b\u7684\u539f\u59cb\u8cc7\uf9be\uff0c\u9644\u52a0\u672c\u6b21\u7814\u7a76\u7684 \u5167 \u5bb9 \u7d50 \u679c \u653e \u5728 Google Code \u7684 tw-synonyms-chilin \u7684 \u5c08 \u6848 \u4e4b \u4e2d \uff0c \u7db2 \u5740 \u70ba http://code.google.com/p/tw-synonyms-chilin/(\u6216\u4f7f\u7528 http://goo.gl/H6YRK \u76f4\u63a5\u4e0b\u8f09\u8655\uf9e4\u5f8c \u7684\u6587\u672c)\u4f9b\u5176\u5b83\u7814\u7a76\u4eba\u54e1\u5728\u7db2\u7ad9\u7684\u8edf\u9ad4\u5eab\u5b58\u5c08\u6848\u4e2d\u4e0b\u8f09\u3002\u70ba\uf9ba\u8655\uf9e4\u4e0a\u8ff0\u8cc7\uf9be\uff0c\u6211\u5011\u4f7f\u7528\uf9ba ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet\u548c\u4e2d\u6587\u8a5e\u7db2(CWN)", "sec_num": "3." 
}, { "text": "Sketch Engine (http://the.sketchengine.co.uk) (Kilgarriff et al., 2004 )\u662f\u8a9e\uf9be\u5eab\u8655\uf9e4\u7cfb\u7d71\uff0c\u4e3b \u8981\u7684\u529f\u80fd\u662f\u4f7f\u7528 KWIC (key word in context) \u51fa\u73fe\u983b\uf961\u53ca\u5206\u4f48\uff0c\u7d50\u5408\u4e2d\u6587\u8a9e\u6cd5\u95dc\uf997 (grammatical relations, gramrels) \u5206\u6790\uff0c\u8a08\u7b97\u6587\u5b57\u5171\u540c\u51fa\u73fe\ufa08\u70ba\u7684\u7d71\u8a08\u7d50\u679c\uff0c\u4ee5\u63d0\u4f9b\uf96a\u5f15 (concordance)\u3001\u8a5e\u5f59\uf99c\u8868 (word list)\u3001\u8a5e\u5f59\u901f\u63cf (word sketch)\u3001\u540c\u8fd1\u7fa9\u8a5e (thesaurus) \u7b49 \u529f\u80fd (Huang et al., 2005) Kilgarriff, A., Rychly, P., Smrz, P., & Tugwell, D.(2004) ", "cite_spans": [ { "start": 46, "end": 70, "text": "(Kilgarriff et al., 2004", "ref_id": null }, { "start": 266, "end": 286, "text": "(Huang et al., 2005)", "ref_id": "BIBREF14" }, { "start": 287, "end": 344, "text": "Kilgarriff, A., Rychly, P., Smrz, P., & Tugwell, D.(2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u8207 Sketch Engine \u6bd4\u8f03", "sec_num": "5.5" }, { "text": "For the past few decades, a large body of research has been trying to touch on the basic core in the mind. Some studies (e.g., Wierzbicka, 1996) have aimed to figure out how a large number of concepts in the mind can be neatly organized with a basic set of concepts, leading us to the realm of human cognition. Furthermore, some studies have identified a set of base concepts that have had a wide range of computational applications. 1 WordNet (Miller et al., 1990) , for instance, is organized around a set of base concepts (i.e., SuperSenses), with which a large number of lexical items are associated through lexical relations. 
There have been many approaches to exploring what is basic in the mind, but there has been no consensus as to what constitutes a set of base concepts universal to all human languages.", "cite_spans": [ { "start": 127, "end": 144, "text": "Wierzbicka, 1996)", "ref_id": "BIBREF49" }, { "start": 434, "end": 435, "text": "1", "ref_id": "BIBREF53" }, { "start": 444, "end": 465, "text": "(Miller et al., 1990)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This study aims to provide a new perspective for identifying a candidate set of base concepts in Chinese. Our data consist of the glosses in the Chinese Wordnet. Since the glosses in the Chinese Wordnet use basic words, words that occur frequently in these glosses can be assumed to reflect a candidate set of base concepts. After data extraction and introspection, the resulting set of base concepts is compared with the set of Base Concepts proposed in the EuroWordNet project (Vossen et al., 1998). In selecting a set of base concepts, our method is based on the frequencies of words used in the glosses of the Chinese Wordnet, whereas the method adopted in the EuroWordNet project is based on the relations between synsets. Note, therefore, that the set of Base Concepts in EuroWordNet is treated not as a de facto standard but as a point of reference. We use the Base Concepts in EuroWordNet as our reference because, on the one hand, the Chinese Wordnet and EuroWordNet both derive from the WordNet framework and, on the other hand, the set of Base Concepts from EuroWordNet is based on many European languages. It is hoped that both the overlaps and the differences between the sets of base concepts identified by different approaches can deepen our understanding of the basic core in the mind.
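The frequency-based selection just described can be sketched minimally in Python. The tokenized glosses below are invented for illustration; the study itself draws on the glosses of the Chinese Wordnet:

```python
from collections import Counter

def candidate_base_concepts(tokenized_glosses, top_n=10):
    """Count how often each word occurs across wordnet glosses; the most
    frequent gloss words are taken as candidate base concepts."""
    counts = Counter(word for gloss in tokenized_glosses for word in gloss)
    return [word for word, _ in counts.most_common(top_n)]
```

In practice the glosses would first have to be segmented (Chinese text carries no word boundaries), and function words could be filtered or retained depending on whether grammatical concepts are of interest.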
It is also hoped that the set of base concepts identified in the present study can have computational as well as pedagogical applications in the future. This paper is organized as follows. Section 2 provides a comprehensive review of different approaches to the notion of basicness in the mind. Section 3 reviews the significance of glosses in different contexts. Section 4 introduces our experimental method and presents the set of base concepts identified in the present study. Section 5 discusses how our proposed set of base concepts in Chinese differs from that of EuroWordNet. Section 6 concludes the paper.", "cite_spans": [ { "start": 528, "end": 549, "text": "(Vossen et al., 1998)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Over the past few decades, there have been various approaches to the notion of basicness in the mental landscape. Some have created lists of lexical items as basic words, mainly for pedagogical purposes. Others, from a cognitive perspective, have selected different sets of basic concepts at different levels of abstraction (e.g., semantic primitives, base concepts, basic-level categories, and basic domains).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Core Lexicon in Language and the Mind", "sec_num": "2." }, { "text": "The present study focuses on base concepts, which have contributed to the establishment of lexical resources (e.g., WordNet, EuroWordNet, and BalkaNet). Compared with basic words, base concepts have more computational applications than pedagogical ones. Compared with semantic primitives and basic domains, base concepts are selected through a more systematic procedure. Compared with basic-level categories, base concepts are hierarchically higher.
A comprehensive review of different approaches to the notion of basicness in the mind will be given in the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Core Lexicon in Language and the Mind", "sec_num": "2." }, { "text": "One of the earliest efforts to address the notion of basicness in the lexicon is to identify a list of basic words, which is motivated by pedagogical needs. 2 Many basic vocabulary lists have been proposed, ranging from 300 words to more than 2,000 words (e.g., Dolch, 1936; Gates, 1926; Hindmarsh, 1980; Lee, 2001; McCarthy, 1999; McCarthy & O'Dell, 1999; Ogden, 1930; West, 1953; Wheeler & Howell, 1930) . With the rapid development of computational analyses, such lists are mostly based on frequency counts. They can serve as useful references for pedagogical purposes, such as the design of a syllabus and the development of a language proficiency test. The main problem with most basic vocabulary lists is that the raw data on which the frequency counts are based may not be representative enough. 
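For concreteness, the frequency-count construction of a basic word list, and the corpus coverage figure by which such lists are often judged, can be sketched as follows (toy corpus; all data invented):

```python
from collections import Counter

def top_n_list(corpus_tokens, n):
    """Build a candidate basic word list from the n most frequent words."""
    return {word for word, _ in Counter(corpus_tokens).most_common(n)}

def list_coverage(corpus_tokens, basic_list):
    """Fraction of corpus tokens covered by the candidate list: a common
    way to evaluate a basic vocabulary list against a corpus."""
    hits = sum(1 for token in corpus_tokens if token in basic_list)
    return hits / len(corpus_tokens)
```

Coverage computed this way is only as good as the corpus behind it, which is exactly the representativeness problem noted above.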
Additionally, since what counts as a word is an issue in itself, insight is needed when it comes to word forms and lexicalized phrases (McCarthy, 1999).", "cite_spans": [ { "start": 157, "end": 158, "text": "2", "ref_id": "BIBREF54" }, { "start": 262, "end": 274, "text": "Dolch, 1936;", "ref_id": "BIBREF22" }, { "start": 275, "end": 287, "text": "Gates, 1926;", "ref_id": "BIBREF23" }, { "start": 288, "end": 304, "text": "Hindmarsh, 1980;", "ref_id": null }, { "start": 305, "end": 315, "text": "Lee, 2001;", "ref_id": "BIBREF33" }, { "start": 316, "end": 331, "text": "McCarthy, 1999;", "ref_id": "BIBREF36" }, { "start": 332, "end": 356, "text": "McCarthy & O'Dell, 1999;", "ref_id": "BIBREF37" }, { "start": 357, "end": 369, "text": "Ogden, 1930;", "ref_id": "BIBREF39" }, { "start": 370, "end": 381, "text": "West, 1953;", "ref_id": "BIBREF46" }, { "start": 382, "end": 405, "text": "Wheeler & Howell, 1930)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Basic Words", "sec_num": "2.1" }, { "text": "In the discussion of basicness in the mind, more abstract than basic words are semantic primitives, or semantic primes, which are pursued mainly in the theory of Natural Semantic Metalanguage (Goddard, 2002; Wierzbicka, 1972, 1996). 3 A semantic primitive is basic in the sense that it is lexicalized in every language and that it cannot be defined or paraphrased in simpler terms. From a cognitive perspective, it is suggested that there is an innate set of semantic primitives representing \"a universal set of fundamental human concepts\" (Wierzbicka, 1996:13). Such a set is argued to be sufficient to define or paraphrase the entire vocabulary of a language. For example, the word envy can be defined entirely in such primitives (Wierzbicka, 1996:161). Specifically, Goddard (2002:14) has presented 58 \"atoms of meaning\", such as I, YOU, SOMEONE, PEOPLE, SOMETHING/THING, and BODY.
Unfortunately, this line of research is open to valid criticisms due to the lack of a sound method of identifying semantic primitives (e.g., Riemer, 2006).", "cite_spans": [ { "start": 192, "end": 207, "text": "(Goddard, 2002;", "ref_id": "BIBREF24" }, { "start": 208, "end": 224, "text": "Wierzbicka, 1972", "ref_id": "BIBREF48" }, { "start": 225, "end": 243, "text": "Wierzbicka, , 1996", "ref_id": "BIBREF49" }, { "start": 246, "end": 247, "text": "3", "ref_id": "BIBREF55" }, { "start": 553, "end": 574, "text": "(Wierzbicka, 1996:13)", "ref_id": null }, { "start": 734, "end": 755, "text": "(Wierzbicka, 1996:161", "ref_id": null }, { "start": 770, "end": 787, "text": "Goddard (2002:14)", "ref_id": null }, { "start": 1024, "end": 1036, "text": "Riemer, 2006", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Primitives", "sec_num": "2.2" }, { "text": "The notion of basicness has played a vital role in many lexical resources, such as English WordNet (Miller et al., 1990), 4 EuroWordNet (Vossen et al., 1998), and BalkaNet (Cristea et al., 2002). In the architecture of English WordNet, synonymous words are assembled into sets called synsets (synonymous sets). During the development of WordNet, synsets were organized into 45 lexicographical files based on the criteria of syntactic category and logical groupings. The 45 names of the lexicographical files (e.g., noun.feeling and verb.cognition) are also called SuperSenses, which reveal the base concepts from the developers' perspective. 5 As an extension of the wordnet model, EuroWordNet further proposes a set of 1,024 core synsets, called Base Concepts, that are extracted from four wordnets and translated into the closest WordNet 1.5 synsets. To keep the set balanced and shared among these wordnets, 164 of them were further selected as core Base Concepts in terms of their (more) relations with other concepts and their (higher) position in the hierarchy.
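A toy version of this selection, assuming a mini-hierarchy with invented relation counts (not the actual EuroWordNet procedure), might rank synsets by number of relations and break ties by shallower, i.e. higher, position:

```python
def depth(synset, hypernym_of):
    """Distance from the hierarchy root (the root has no hypernym)."""
    d = 0
    while hypernym_of.get(synset) is not None:
        synset = hypernym_of[synset]
        d += 1
    return d

def select_core_concepts(relation_counts, hypernym_of, k):
    """Rank synsets by relation count (descending), tie-breaking by
    shallower depth -- a toy sketch of the criteria described above."""
    ranked = sorted(relation_counts,
                    key=lambda s: (-relation_counts[s], depth(s, hypernym_of)))
    return ranked[:k]
```

The hypernym map and relation counts would, in a real setting, come from a wordnet's relation tables rather than being supplied by hand.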
6 Based on the Base Concepts identified for EuroWordNet, the BalkaNet project adopts a similar approach and selects a set of Base Concepts by focusing on five Balkan languages, including Bulgarian, Greek, Romanian, Serbian, and Turkish. 7 ", "cite_spans": [ { "start": 99, "end": 119, "text": "(Miller et al., 1990", "ref_id": "BIBREF38" }, { "start": 123, "end": 124, "text": "4", "ref_id": "BIBREF56" }, { "start": 137, "end": 158, "text": "(Vossen et al., 1998)", "ref_id": "BIBREF45" }, { "start": 174, "end": 196, "text": "(Cristea et al., 2002)", "ref_id": "BIBREF18" }, { "start": 630, "end": 631, "text": "5", "ref_id": "BIBREF57" }, { "start": 1037, "end": 1038, "text": "6", "ref_id": "BIBREF58" }, { "start": 1274, "end": 1275, "text": "7", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Base Concepts in WordNets", "sec_num": "2.3" }, { "text": "In the context of cognitive linguistics, many experiments have shown that in taxonomies of concrete objects, there is one level of abstraction that is regarded as basic which distinguishes them from higher and lower-level categories (Cruse, 1977 (Cruse, , 2000 Rosch et al., 1976) . For instance, in answering the question what's that in the garden, most speakers choose to say a dog rather than its hypernym an animal or its hyponym an Alsatian (Cruse, 1977:153-154) .", "cite_spans": [ { "start": 233, "end": 245, "text": "(Cruse, 1977", "ref_id": "BIBREF19" }, { "start": 246, "end": 260, "text": "(Cruse, , 2000", "ref_id": "BIBREF20" }, { "start": 261, "end": 280, "text": "Rosch et al., 1976)", "ref_id": "BIBREF43" }, { "start": 446, "end": 467, "text": "(Cruse, 1977:153-154)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Basic-level Concepts", "sec_num": "2.4" }, { "text": "Compared with the ANIMAL concept and the ALSATIAN concept, the DOG concept is seen as a basic-level concept in that both its internal homogeneity and its distinctness from neighboring concepts are greater. 
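The two cues just mentioned, internal homogeneity and distinctness from neighboring concepts, can be turned into a toy Rosch-style score; every level, category, and feature set below is invented for illustration:

```python
def basic_level(levels, features):
    """Pick the taxonomy level whose categories have many features shared
    by their members (homogeneity) and few features shared with sibling
    categories (distinctness). A toy score, not a cited algorithm."""
    def score(categories):
        members_of = list(categories.values())
        homogeneity = sum(
            len(set.intersection(*(features[m] for m in ms)))
            for ms in members_of)
        overlap = 0
        for i in range(len(members_of)):
            for j in range(i + 1, len(members_of)):
                a = set.union(*(features[m] for m in members_of[i]))
                b = set.union(*(features[m] for m in members_of[j]))
                overlap += len(a & b)
        return homogeneity - overlap
    return max(levels, key=lambda name: score(levels[name]))
```

On toy data, a DOG-like middle level wins: its members are near-identical internally, yet its categories overlap less with each other than subordinate ones do.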
The presumption of basic-level concepts has also been supported by language acquisition studies, which reveal that a large percentage of children's early words are basic-level terms (Ungerer & Schmid, 2006). 8 Some recent computational approaches have attempted to use algorithms to automatically extract basic-level concepts. Izquierdo et al. (2008) automatically select basic-level concepts from WordNet based on the relations between synsets, while Lin (2010) proposes an algorithm that can automatically identify the cognitive level of a noun in WordNet based on the ability of the noun to form compounds and on the position of the noun in a hierarchical chain.", "cite_spans": [ { "start": 383, "end": 410, "text": "(Ungerer & Schmid, 2006). 8", "ref_id": null }, { "start": 532, "end": 555, "text": "Izquierdo et al. (2008)", "ref_id": "BIBREF28" }, { "start": 657, "end": 667, "text": "Lin (2010)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Basic-level Concepts", "sec_num": "2.4" }, { "text": "A related discussion of basic conceptualization in the study of language and the mind has focused on basic domains, which derive directly from human embodied experience (e.g., sensory and subjective experience). Cognitive Grammar argues that a concept should be understood in terms of another more general, inclusive concept (Langacker, 1987:148). For example, the concept RADIUS makes sense only when it is viewed against the concept CIRCLE. Such a relationship can form a chain (i.e., the concept CIRCLE should be understood in terms of the concept SPACE), but the chain cannot be endless.
Some concepts of a general nature, such as SPACE, TIME, and QUANTITY, are basic domains because they are characterized by a high degree of inclusiveness.", "cite_spans": [ { "start": 343, "end": 364, "text": "(Langacker, 1987:148)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Basic-level Concepts", "sec_num": "2.4" }, { "text": "Defining a word can be as easy as pointing to something the word refers to, but it can be as difficult as formulating \"an ideal hypothetical norm which is a sort of compromise between the generalization of inadequate experiential reality and a projected reality which is yet to be attained in its entirety\" (Bernard, 1941:510). In different contexts, definitions and glosses play different roles, which will be reviewed in the subsections that follow.", "cite_spans": [ { "start": 307, "end": 326, "text": "(Bernard, 1941:510)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions and Glosses in Different Contexts", "sec_num": "3." }, { "text": "When it comes to the meaning of a word, people may first think of looking up its definition in a dictionary. A good understanding of word meaning thus relies on how the word can be defined. In the discussion of linguistic semantics, there are many ways to define the meaning of a word (Riemer, 2010:65-79). A definition can be ostensive, relational, or extensional, and it sometimes combines different approaches.", "cite_spans": [ { "start": 287, "end": 307, "text": "(Riemer, 2010:65-79)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "First, and perhaps most obviously, people often define a word in terms of ostension, i.e., by pointing out the objects the word denotes.
Though an ostensive definition is useful for concrete nouns, it may cause many difficulties when used to define verbs, adjectives, adverbs, and function words (e.g., prepositions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "Second, a definition can place a word in relation to other words or events. For example, a word can be defined by its synonyms. However, since there are few absolute synonyms, the identity between a word and its synonyms can be challenged. A word can also be defined through an event, which is regarded as a typical context for the word. For instance, the verb scratch can be defined as \"the type of thing you do when you are itchy\" (Riemer, 2010:66) .", "cite_spans": [ { "start": 433, "end": 450, "text": "(Riemer, 2010:66)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "The weakness of such a definition is that it works only when the addressee of the definition can accurately infer the intended meaning on the basis of the given cue. That is, someone may not get the correct meaning of scratch if he or she does not scratch when feeling itchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "Third, a definition can be extensional, and one of the commonest strategies is to define by a broad class (i.e., genus) and some distinguishing features (i.e., differentia). For example, man (in the sense of \"human being\") can be loosely defined as \"rational animal\" (Riemer, 2010:67) . 
One of the main problems of a genus-differentia definition is that it can be too abstract for its addressee (Landau, 2001:167).", "cite_spans": [ { "start": 267, "end": 284, "text": "(Riemer, 2010:67)", "ref_id": null }, { "start": 394, "end": 412, "text": "(Landau, 2001:167)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "In summary, there are many strategies to define the meaning of a word, and all of them have their limitations. More generally, the difficulty of a definitional approach to semantics is that defining the meaning of a piece of language with more language in the same system will inevitably end up circular (Portner, 2005:4).", "cite_spans": [ { "start": 304, "end": 321, "text": "(Portner, 2005:4)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Linguistic Semantics", "sec_num": "3.1" }, { "text": "Explaining what words mean (thus the concepts they encode) is the central function of a dictionary. While the mental lexicon is a \"theoretical exercise\", a dictionary can be seen as a \"practical work\" (Landau, 2001:153). On the one hand, a dictionary simulates the mental lexicon, offering the phonological, syntactic, and semantic information of a lexical item. On the other hand, a dictionary cannot be as detailed as the mental lexicon, and lexicographers need to decide what to include in a dictionary. Compiling a dictionary is seen as a craft, for lexicographers aim to make the most of their limited resources to cater for the communicative and pedagogical needs of dictionary users.", "cite_spans": [ { "start": 201, "end": 218, "text": "(Landau, 2001:153", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Lexicography", "sec_num": "3.2" }, { "text": "One of the most challenging and contentious aspects of the compilation of a dictionary is the creation of definitions for a dictionary entry.
The term 'definition' would be a misnomer if it implied that a word's meaning can be precisely pinned down. There are many strategies to define a word in a dictionary (Lew & Dziemianko, 2006) . The most traditional definition in a dictionary is the analytical model, i.e., the genus-differentia definition. A definition composed in this way typically consists of two elements: the genus expression, which locates the definiendum in the proper semantic category, and the differentia (plural: differentiae), which indicates what distinguishes the word from other words in the same semantic category. For example, appraisal is defined as \"a statement or opinion judging the worth, value or condition of something\" (taken from the Longman Dictionary of Contemporary English), where 'a statement or opinion' is the genus expression and the postmodifying expression 'judging the worth, value or condition of something' is the differentia. In many cases, producing a genus-differentia definition is no easy task, and such a definition can be difficult for a dictionary user to understand. Another way to define a word in a dictionary is to adopt a contextual definition. A contextual definition of 'appraisal', for example, reads \"if you make an appraisal of something, you consider it carefully and form an opinion about it\" (taken from the Collins COBUILD Advanced Dictionary of English).", "cite_spans": [ { "start": 307, "end": 331, "text": "(Lew & Dziemianko, 2006)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Lexicography", "sec_num": "3.2" }, { "text": "Our concern here is not to deal with the issue of 'what makes a good definition', or to search for the underlying necessary and sufficient conditions, but to evaluate the way the principle of maximal economy is reflected in a definition sentence.
Zgusta (1971) proposed a list of criteria, one of which states that the lexical definition \"should not contain words more difficult to understand than the word defined\" (cited in Landau, 2001:157) . In addition, the effectiveness of dictionary definitions can be evaluated from the user's viewpoint (Cumming et al., 1994; Lew & Dziemianko, 2006) . For example, language learners have been found to prefer contextual definitions to analytical ones (Cumming et al., 1994) . An interim conclusion thus worth drawing is that a definition should contain no more words than necessary, consistent with the demands of intelligibility and information-transfer (Atkins & Rundell, 2008) .", "cite_spans": [ { "start": 244, "end": 257, "text": "Zgusta (1971)", "ref_id": "BIBREF50" }, { "start": 423, "end": 440, "text": "Landau, 2001:157)", "ref_id": null }, { "start": 543, "end": 565, "text": "(Cumming et al., 1994;", "ref_id": "BIBREF21" }, { "start": 566, "end": 589, "text": "Lew & Dziemianko, 2006)", "ref_id": "BIBREF34" }, { "start": 691, "end": 713, "text": "(Cumming et al., 1994)", "ref_id": "BIBREF21" }, { "start": 895, "end": 919, "text": "(Atkins & Rundell, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Definitions in Lexicography", "sec_num": "3.2" }, { "text": "The reviews so far naturally lead us to the glosses (definitions of word senses) in lexical and ontological resources developed in recent years. Glosses and example sentences are two essential components in the construction of lexical resources like WordNet, for they have been proved to be highly useful in discovering semantic relations and word sense disambiguation tasks (Kulkarni et al., 2010) . In the design of WordNet, word lemmas are grouped into synsets (synonymous sets), which are organized as a lexical network by a wide range of lexical relations (e.g., hyponymy and antonymy). 
The role of glosses is thus to explain explicitly the meaning of synsets which lexically encode the human concepts.", "cite_spans": [ { "start": 375, "end": 398, "text": "(Kulkarni et al., 2010)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Glosses in Lexical Resources", "sec_num": "3.3" }, { "text": "Most of the lexical relations that connect synsets are conceptually inclusive relations, such as hypernymy-hyponymy and holonymy-meronymy, which make the wordnet architecture a hierarchical conceptual structure, or a lexicalized ontology. 9 In connection with ontology studies, Jarrar (2006) suggests that glosses can be of great use in an ontology. For example, glosses are easier to understand than formal representations, so ontology developers from different fields can rely on glosses to a certain degree when they communicate. However, as Jarrar (2006) further suggests, a gloss in an ontology is not intended to provide some general comments about a concept, as a traditional definition in a dictionary does. Instead, a gloss in an ontology functions in an auxiliary manner, providing some factual knowledge that is critical to the understanding of a concept but can be difficult to formalize explicitly and logically. As a consequence, glosses in a wordnet as a lexical ontology are different from dictionary definitions. Jarrar (2006) provides some guidelines for writing a gloss in an ontology. First, an ontology gloss should start with the upper type of the concept being defined. Second, an ontology gloss should be in the form of a proposition. Third, an ontology gloss should emphasize the distinguishing features of the concept being defined. Fourth, an ontology gloss can include some examples. Fifth, an ontology gloss should be consistent with the formal representation of the concept being defined. Sixth, an ontology gloss should be sufficient and clear. Generally, the glosses in the Chinese Wordnet fulfill the above criteria. 
Here is an example taken from the Chinese Wordnet:", "cite_spans": [ { "start": 239, "end": 240, "text": "9", "ref_id": null }, { "start": 278, "end": 291, "text": "Jarrar (2006)", "ref_id": "BIBREF29" }, { "start": 545, "end": 558, "text": "Jarrar (2006)", "ref_id": "BIBREF29" }, { "start": 1030, "end": 1043, "text": "Jarrar (2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Glosses in Lexical Resources", "sec_num": "3.3" }, { "text": "(1) \u66f8\uff1a\u6709 \u6587\u5b57 \u6216 \u5716\u756b \u7684 \u51fa\u7248\u54c1 shu you wenzi huo tuhua DE chubanpin 'book: a publication with words or pictures' While the gloss looks like a genus-differentia definition in a dictionary, they are different in essence. The definition techniques used by lexicographers to indicate differentiation come from various conventions, while the ontology gloss aims to make a minimal commitment to conceptualization, which meets the need of logical conciseness. The study of the basic lexicon is crucially different from other tasks of lexical acquisition in that unlike the latter where the broad coverage is at issue, the former requires instead fine-grained data to be explored. In summary, we propose that glosses in lexical resources are the best source to study the core component of the basic lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Glosses in Lexical Resources", "sec_num": "3.3" }, { "text": "In this section, we introduce the method of how we used gloss data from the Chinese Wordnet to touch on base concepts. 10 The glosses in the Chinese Wordnet can be seen as a sample corpus with fine-grained lexical information. 
Figure 1 shows the type frequency distributions of the 46 parts of speech (as defined by the Sinica Corpus tagset) in the Sinica Corpus and the Chinese Wordnet, respectively.", "cite_spans": [ { "start": 119, "end": 121, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 1", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Glosses in the Chinese Wordnet", "sec_num": "4." }, { "text": "In our first experiment, we extracted a set of frequently-occurring words from the glosses of the Chinese Wordnet. Since a gloss in the Chinese Wordnet uses basic words instead of giving a scientific definition that can be incomprehensible to the user (Huang, 2008:22) , the frequently-occurring words extracted in our experiment may reflect a certain degree of basicness in Chinese and may even be considered to constitute a candidate set of base concepts in Chinese. Our method and the results are presented in what follows.", "cite_spans": [ { "start": 252, "end": 268, "text": "(Huang, 2008:22)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "Our first step was to extract all the glosses from the Chinese Wordnet. For glosses containing more than one period (i.e., the Chinese period \u3002), we discarded the words preceding the first period, because what precedes the first period in a gloss only provides grammatical properties. Next, what remained in the glosses was segmented with a segmentation system developed by the Chinese Knowledge and Information Processing (CKIP) group.
Consider the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "(2) \u5b78\u751f\uff1a \u666e\u901a\u540d\u8a5e\u3002 \u5728 \u5b78\u6821 \u7cfb\u7d71 \u5167 \uf95a\u66f8 \u5b78\u7fd2 \u7684 \u4eba\u3002", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "xuesheng putongmingci zai xuexiao xitong nei dushu xuexi DE ren 'student: someone who studies and learns in a school system'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "In the example 2, putong mingci 'common noun' would be discarded, and then the remaining part of the definition would be segmented as shown in the example. With all the glosses segmented, a frequency wordlist with 19,852 words was created.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "We manually checked the wordlist for meta-linguistic terms (e.g., xingrong 'modify') and mis-chunked words (e.g., *dedanwei 'DE + unit'). Only the first 1,000 words on the wordlist were checked both because our resources were limited and because it was assumed that core base concepts should be at the top of the frequency wordlist. For meta-linguistic terms, we chose to exclude them because it is obvious that they do not represent base concepts. For mis-chunked words, we either manually segmented them further (*dedanwei \u2192 de danwei) or simply excluded them if they were not comprehensible (e.g., dejian 'DE-simple'). 
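The extraction pipeline described above (strip the grammatical-category prefix at the first Chinese period, segment the remainder, and tally word frequencies) can be sketched as follows. The CKIP segmentation service is not reproduced here, so a stand-in `segment` callable is used; the toy gloss is pre-segmented so that whitespace splitting suffices for illustration.

```python
from collections import Counter

CN_PERIOD = "\u3002"  # the Chinese period 。

def strip_prefix(gloss: str) -> str:
    """Discard everything up to and including the first Chinese period,
    which in CWN glosses only carries grammatical-category information
    (e.g. putongmingci 'common noun')."""
    head, sep, rest = gloss.partition(CN_PERIOD)
    return rest if sep else head

def build_wordlist(glosses, segment) -> Counter:
    """Segment each stripped gloss and count word frequencies.
    `segment` is a stand-in for the CKIP segmenter (any str -> list[str])."""
    freq = Counter()
    for gloss in glosses:
        freq.update(segment(strip_prefix(gloss)))
    return freq

# Toy run on a pre-segmented gloss, using whitespace as the "segmenter":
gloss = "putongmingci\u3002 zai xuexiao xitong nei dushu xuexi DE ren"
freq = build_wordlist([gloss], str.split)
print(freq["ren"], "putongmingci" in freq)  # 1 False
```

Applied to all CWN glosses with the real CKIP segmenter, this kind of tally yields the 19,852-word frequency list described in the text.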
11 In such cases as dedanwei, the resulting words together with their frequencies were added to the wordlist if they had not been listed there, or the frequencies of the resulting words were revised. Take de danwei as an example. There were 328 de danwei in the data, and both de and danwei had been on the wordlist before dedanwei was further segmented. The frequencies of de and danwei were revised to be 15,653 and 1,178, respectively. 12 To demonstrate how our new approach to identifying a set of base concepts is different from others, we decided to compare the resulting set in the present study with the set from EuroWordNet. Since all the Base Concepts in EuroWordNet are nouns and verbs, we focus on only nouns and verbs in the present study. 13 Therefore, words that were not tagged with V or", "cite_spans": [ { "start": 1036, "end": 1063, "text": "and 1,178, respectively. 12", "ref_id": null }, { "start": 1375, "end": 1377, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "In 3, words such as da and xiao usually function as adjectives, and zhuyao and rongyi can be adjectives or adverbs. The word meiyou, originally tagged as a noun, functions as a polarity operator rather than as a noun or as a verb. 14 Another issue in the selection of the top 130 words from the glosses of the Chinese Wordnet was near-synonymy. For example, both yong 'use' and shiyong 'use' were high on our wordlist, and so were wuti 'object' and wupin 'object'. In deciding whether two words did represent the same concept, the present study counted on the Chinese Wordnet rather than on our own introspection or on further analyses. In the former case, yong 'use' and shiyong 'use' bear the relation of synonymy in the Chinese Wordnet. 
Therefore, the two words were considered to represent the same concept, and the frequencies of the two words were added together. In the latter case (i.e., wuti and wupin), the two words do not bear the relation of synonymy in the Chinese Wordnet. As a consequence, the two words were listed separately on our wordlist (cf. Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 1064, "end": 1071, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "Finally, five words had two tags and were listed separately. They were gaibian 'change', shiyong 'use', jisuan 'calculate', chansheng 'produce, generate', and fasheng 'happen'. They are verbs in their literal sense, but they can be nominalized. For the five words, the frequencies of the verbal use and the nominal use were added together, and each word was listed only once in our wordlist since both the verbal use and the nominal use represent the same concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "When words were excluded or merged with another word, another word immediately lower on the wordlist went up until we got 130 words. The final set of base concepts extracted from the glosses of the Chinese Wordnet on the basis of the frequencies will be presented and discussed in the following section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "14 In the glosses of the Chinese Wordnet, a typical context where guding occurs is as follows: In this example, guding is used to modify gongzuo 'job'. 
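The two bookkeeping steps just described, re-crediting the count of a mis-chunked item to its parts and pooling the counts of CWN-attested synonyms, amount to simple counter updates. The helper names below are ours; the figures are the ones reported in the text (328 tokens of *dedanwei; 15,325 of de and 850 of danwei before resegmentation).

```python
from collections import Counter

def resegment(freq: Counter, bad: str, parts: list[str]) -> None:
    """Drop a mis-chunked entry (e.g. *dedanwei) and credit its token
    count to each of the words it should have been segmented into."""
    tokens = freq.pop(bad, 0)
    for part in parts:
        freq[part] += tokens

def merge_synonyms(freq: Counter, keep: str, drop: str) -> None:
    """Pool the counts of two words that the Chinese Wordnet lists as
    synonyms (one concept); the synonymy check itself is made against
    CWN, not inside this helper."""
    freq[keep] += freq.pop(drop, 0)

# Counts reported in the text:
freq = Counter({"dedanwei": 328, "de": 15325, "danwei": 850})
resegment(freq, "dedanwei", ["de", "danwei"])
print(freq["de"], freq["danwei"])  # 15653 1178
```

The same `merge_synonyms` step covers pairs such as yong/shiyong, whose counts were added together once CWN confirmed the synonymy relation.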
We decided to exclude guding because it functions neither as a typical noun nor as a typical verb, but typically functions as a modifier in the glosses of the Chinese Wordnet. Additionally, the tag automatically assigned to guding (i.e., Nv) is problematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "Back to the Basic: Exploring Base Concepts from the Wordnet Glosses 69", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting a Set of Frequently-occurring Words from the Glosses of the Chinese Wordnet", "sec_num": "4.1" }, { "text": "By extracting words that occur frequently in the glosses of the Chinese Wordnet, we obtained a candidate set of words representing base concepts in Chinese. We attempted to map each word extracted in the present study to a Base Concept in EuroWordNet, either concrete or abstract. Note that if a word has no equivalent in the set of Base Concepts in EuroWordNet, we simply translated the word into English. Moreover, those without an equivalent in the set of Base Concepts in EuroWordNet were classified on the basis of their semantic characteristics. Table 1 summarizes the results. Following Table 1 , each category will be presented. Our method extracted more words denoting organizations and institutes than the EuroWordNet project. However, some words extracted in our experiment are not hierarchically high. For example, daxue is just a subcategory of the educational institute. Measurement is an important dimension of semantic primitives. Wierzbicka (1996:44-47) has identified a few quantifiers as semantic primitives.
Our experiment identified five words that are not included in the Base Concepts of EuroWordNet: yi and liang are quantifiers, and both are also identified in Wierzbicka (1996) (i.e., ONE and TWO) ; ge and zhong are common classifiers in Chinese; jisuan is a typical verb in the measurement domain.", "cite_spans": [ { "start": 947, "end": 970, "text": "Wierzbicka (1996:44-47)", "ref_id": null }, { "start": 1186, "end": 1223, "text": "Wierzbicka (1996) (i.e., ONE and TWO)", "ref_id": null } ], "ref_spans": [ { "start": 552, "end": 559, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 594, "end": 601, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We could further categorize the remaining 28 nouns that are not in the set of Base Concepts of EuroWordNet but were extracted in our design. However, that would be of no more significance than creating a miscellaneous category like this, for the remaining subcategories might contain as few as one or two members. For instance, we could create a category for perception, which is intuitively an important dimension. However, in the present study, a category for perception may include no more than shengyin and wundu. Almost all of the members in this category are abstract concepts. The only exception is shengwu. Its literal translation would be \"creature\", so shengwu can seemingly be mapped to the synset {animal 1; animate being 1; beast 1; brute 1; creature 1; fauna 1}. Actually, the two concepts are not the same. In English, creature refers to a living organism that can move voluntarily, as the gloss in WordNet states. On the other hand, shengwu in Chinese refers to any living organism, whether it can move voluntarily or not. Therefore, we decided not to map the two concepts together. For a similar reason as in the case of nouns, a miscellaneous category is also created for the remaining 20 verbs. 
Additionally, as in the case of nouns, all the verbs here represent abstract concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement", "sec_num": null }, { "text": "Generally, as Table 1 shows, 72 words (55.4%) extracted from the glosses of the Chinese Wordnet have no equivalent in the set of Base Concepts in EuroWordNet. This suggests that our gloss-based approach can yield a very different set of base concepts from the set in EuroWordNet.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "On the one hand, the 58 words that were identified in our experiment and could be mapped to an equivalent in EuroWordNet may be considered to represent concepts at the core of the mental landscape. These concepts can be singled out by different approaches, and they are prominent not only in the languages in EuroWordNet but also in Chinese. Therefore, the concepts represented by the 58 words may be regarded as basic in the mind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "On the other hand, words that were identified in our experiment but could not be mapped to any equivalent in EuroWordNet also reflect a certain degree of basicness in the mind. Like the Base Concepts in EuroWordNet, most of them are abstract and represent concepts hierarchically higher than basic level categories (cf. Section 2). Additionally, many of them (e.g., chengdu 'extent', fanwei 'range') are like basic domains (cf. Section 2), exhibiting a high degree of inclusiveness. Nevertheless, our gloss-based approach did obtain a few words representing sister concepts that are hierarchically lower, such as shang/xia 'up/down' and nanzi/nuzi 'male/female'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." 
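As a sanity check on these counts, the mapped/unmapped split over the candidate set can be tallied mechanically. The actual mapping to EuroWordNet Base Concepts was done by hand against Table 1; the `euro_bc` dictionary below is a hypothetical stand-in for that mapping.

```python
def split_by_mapping(candidates, euro_bc):
    """Partition candidate words by whether they have an equivalent
    EuroWordNet Base Concept; returns (#mapped, #unmapped, unmapped %)."""
    mapped = sum(1 for word in candidates if word in euro_bc)
    unmapped = len(candidates) - mapped
    return mapped, unmapped, round(100 * unmapped / len(candidates), 1)

# Toy illustration with a hypothetical mapping:
cands = ["ren", "shengwu", "chengdu", "fanwei"]
euro = {"ren": "{human 1; individual 1; person 1; ...}"}
print(split_by_mapping(cands, euro))  # (1, 3, 75.0)

# The totals reported in the text: 72 of the 130 candidates (55.4%)
# have no EuroWordNet equivalent.
print(round(100 * 72 / 130, 1))  # 55.4
```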
}, { "text": "In effect, it is natural that base concept sets vary from approach to approach. The number of the concepts in the lexicon is considerably larger than the number of base concepts. Take the present study for example. There are 17,018 candidate words in our frequency wordlist, and we only identify 130 words as potential base concepts in Chinese. The potential base concepts scatter around the mental lexicon; when we take a different perspective, adopt a different method, and have a different focus, we are very likely to extract a completely different set of concepts. That is why a study like the present one is of great significance. To really touch on the basic core of the mental landscape, we need to try a wide variety of approaches. Concepts surviving in different approaches can be seen as basic in the mind. On the other hand, since the pool is always much larger than the target set, concepts identified only by a certain approach are still significant rather than random and can reflect a certain extent of basicness from a certain perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "The limitations of this study are as follows. First, the design of EuroWordNet and the Chinese Wordnet is a key concern in the present study. As Vossen et al. (1998) admit, the data of some local wordnets were not well-structured when the base concepts were selected from each of the local wordnets. Also, the coverage of the Chinese Wordnet may not be comprehensive enough, for the project starts with words with a mid frequency. When EuroWordNet and the Chinese Wordnet are further updated, the resulting sets of base concepts and their comparison may give a different picture accordingly. Second, the gloss language is an issue in a gloss-based study like the present one. 
As a matter of fact, many words in our frequency wordlist have a low frequency, and many of such words can be replaced by other words with a higher frequency (An, 2009:172-182) . If that is done, there will be fewer words in our wordlist, and the frequencies of some words will become higher. Therefore, a different set of base concepts in Chinese could be yielded.", "cite_spans": [ { "start": 145, "end": 165, "text": "Vossen et al. (1998)", "ref_id": "BIBREF45" }, { "start": 834, "end": 852, "text": "(An, 2009:172-182)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "Intriguingly, our method identified 58 words that could be mapped to an equivalent in EuroWordNet. This number is exactly the same as that of Goddard's (2002:14) \"atoms of meaning\". Additionally, this number is not far from that of the SuperSenses in WordNet (i.e., 48). Though the contents of the sets vary from approach to approach and need further examination, there appears to exist a certain range regarding the number of base concepts in the lexicon.", "cite_spans": [ { "start": 142, "end": 161, "text": "Goddard's (2002:14)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." }, { "text": "Alternatively, in previous research, the most commonly used words are determined by word occurrence frequency, but frequency is heavily dependent on the corpus selected. If the corpus is not large enough, or not balanced, the result will not be accurate enough. Recent developments of distributional models in semantics have shown success in this aspect. For example, Zhang et al. (2004) propose a metric for the distribution of words in a corpus. This will be left for future research.", "cite_spans": [ { "start": 368, "end": 387, "text": "Zhang et al. (2004)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5." 
}, { "text": "Identifying the basic words that represent the core concepts is a crucial issue in lexicography, psycholinguistics, and language pedagogy. Recent NLP applications as well as ontologies also recognize the urgent need for the methodology for extracting and measuring the core concepts. In this paper, we have illustrated how glosses in a wordnet can be used to extract base concepts and provide evidence for basic conceptual underpinnings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "There is scope for the research to be extended in the direction of empirically-grounded evaluation of the results. We are also interested in putting the analysis in the contexts of multilingual wordnets. These are left as items for our future studies. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Yu-Ta Chen et al.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In previous studies, the terms \"basic vocabulary\", \"sight vocabulary\", \"core vocabulary\", and the like are sometimes interchangeable.3 For others who have adopted a similar approach in languages other than English, seeGoddard (2002:12).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "WordNet is open to the general public at http://wordnet.princeton.edu.5 For the format of the lexicographical files, see wninput(5WN) at http://wordnet.princeton.edu/wordnet/man/lexnames.5WN.html.6 The 164 Base Concepts in EuroWordnet consist of 66 concrete synsets (nouns) and 98 abstract synsets (nouns and verbs). 
For more details, refer to http://www.globalwordnet.org/gwa/ewn_to_bc/ConcreteInfo.html and http://www.globalwordnet.org/gwa/ewn_to_bc/AbstractInfo.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For more information about the BalkaNet project, refer to http://www.dblab.upatras.gr/balkanet/ and http://nlp.lsi.upc.edu/web/index.php?option=com_content&task=view&id=53 for similar works (e.g.,Atserias et al., 2003).8 Note that basic-level concepts should not be confused with Base Concepts. While a Base Concept occupies a high position in a hierarchy, a basic-level concept occurs in the middle of a hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "According toGruber (1995:908), an ontology is \"an explicit specification of a conceptualization\", and a wordnet can be thought of as a lexical ontology because of its lexical implementation of conceptualization, in comparison with other formal ontologies (e.g., SUMO) where the focus is put on logical constrains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Chinese Wordnet (CWN) has been released as an open-source project, and is freely available at http://lope.linguistics.ntu.edu.tw/cwn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The morpheme jian does not stand alone in Modern Chinese.12 Originally, there were 15,325 tokens of de and 850 tokens of danwei in the data.13 For which synsets in EuroWordNet were merged in the present study, see the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the National Science Council of Taiwan under Grant NSC99-2511-S-415-007-MY2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "We would 
like to express our gratitude to Chih-Yao Lee for his technical support in the data analysis. Our gratitude is also extended to the reviewers for their insightful suggestions. Our appreciation also goes out to Harvey Hsin-chang Ho for his meticulous reading of the entire manuscript and his suggestions for its improvement. However, any weaknesses that remain are our own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "N were removed from our wordlist. In the end, the frequency wordlist based on the glosses of the Chinese Wordnet contained 17,018 words.In EuroWordNet, there are 98 abstract Base Concepts and 66 concrete Base Concepts. However, as Vossen et al. (1998) have admitted, some synsets appear to represent almost the same concepts (e.g., {form 1; shape 1} and {form 6; pattern 5; shape 5}), so the number of the Base Concepts in EuroWordNet can be reduced. In such cases, we merged the two (or more) synsets into one. Finally, we retained 130 Base Concepts, i.e., 75 abstract concepts and 55 concrete concepts. Therefore, we also selected the top 130 words from our wordlist to be a candidate set of base concepts in Chinese.When we examined the 130 words high on our wordlist, we found that some words needed to be replaced. First, two proper nouns were unsurprisingly high on the wordlist based on the Chinese Wordnet, i.e., Zhongguo 'China' (32th) and Taiwan 'Taiwan' (67th). The two words were excluded from the candidate set of base concepts. Second, since we focused on typical nouns and verbs, words typically not functioning as nouns or as verbs were excluded from our wordlist, regardless of their tags. Words discarded at this stage included: The seven words do not have an equivalent in the set of Base Concepts of EuroWordNet though their potential hypernyms such as weizhi and difang can be mapped to synsets such as {location 1} and {place 13; spot 10; topographic point 1}. 
We suggest that the seven concepts may be regarded as a set of basic locative concepts in Chinese. Generally, the set exhibits a degree of symmetry in the sense that some words (i.e., shang and xia; zhengmian and hou) form pairs.It is noted that the word yishang is ambiguous. It can mean 'above' or 'more than', and the latter sense is not locative. However, since we assume that the 'more than' sense might metaphorically derive from the 'above' sense, yishang is assigned to the present category. Though ren 'human' can be mapped to the synset {human 1; individual 1; mortal 1; person 1; someone 1; soul 1}, in the candidate set of base concepts in Chinese are still some other words that denote people. As in the set of locative words, this set also exhibits a degree of symmetry (i.e., the self/other distinction: taren/duifang and ziji; the gender distinction: nanzi and nuzi). Such distinctions appear to be basic, and that is captured in our experiment.", "cite_spans": [ { "start": 231, "end": 251, "text": "Vossen et al. 
(1998)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "The appendix provides the Base Concepts in EuroWordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null }, { "text": "Concrete Synsets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I.", "sec_num": null }, { "text": "Please send application to:The ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To Register\uff1a", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Readability assessment for text simplification", "authors": [ { "first": "S", "middle": [], "last": "Aluisio", "suffix": "" }, { "first": "L", "middle": [], "last": "Specia", "suffix": "" }, { "first": "C", "middle": [], "last": "Gasperin", "suffix": "" }, { "first": "C", "middle": [], "last": "Scarton", "suffix": "" } ], "year": 2010, "venue": "NAACL-HLT 2010: The 5th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aluisio, S., Specia, L., Gasperin, C., & Scarton, C. (2010). Readability assessment for text simplification. In NAACL-HLT 2010: The 5th Workshop on Innovative Use of NLP for Building Educational Applications.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Modeling local coherence: An entity-based approach", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "1", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, R., & Lapata, M. (2008). Modeling local coherence: An entity-based approach. 
Computational Linguistics, 34(1), 1-34.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Readability revisited: The new Dale-Chall readability formula", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chall", "suffix": "" }, { "first": "E", "middle": [], "last": "Dale", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chall, J. S., & Dale, E. (1995). Readability revisited: The new Dale-Chall readability formula. Cambridge, MA: Brookline Books.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "LIBSVM: A library for support vector machines", "authors": [ { "first": "C.-C", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.-C., & Lin, C.-J. (n.d.). LIBSVM: A library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Chinese readability assessment using tf-idf and svm", "authors": [ { "first": "Y.-H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Y.-T", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "International Conference on Machine Learning and Cybernetics (ICMLC2011)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y.-H., Tsai, Y.-H., & Chen, Y.-T. (2011). Chinese readability assessment using tf-idf and svm. 
In International Conference on Machine Learning and Cybernetics (ICMLC2011), Guilin, China.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Lexical semantic representation and semantic composition -An introduction to E-HowNet", "authors": [ { "first": "", "middle": [], "last": "Ckip Group", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CKIP Group. (n.d.). A Chinese word segmentation system, http://ckipsvr.iis.sinica.edu.tw/ CKIP Group. (2009). Lexical semantic representation and semantic composition -An introduction to E-HowNet. (Technical Report), Institute of Information Science, Academia Sinica.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. Cambridge: MIT Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "WordNet Based Comparison of Language Variation -A study based on CCD and CWN. Presented at Global WordNet (GWC-06)", "authors": [ { "first": "J.-F", "middle": [], "last": "Hong", "suffix": "" }, { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "61--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hong, J.-F., & Huang, C.-R. (2006). WordNet Based Comparison of Language Variation -A study based on CCD and CWN. Presented at Global WordNet (GWC-06). 61-68. January 22-26. 
Jeju Island, Korea.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sinica BOW: A bilingual ontological wordnet", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "R.-Y", "middle": [], "last": "Chang", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" } ], "year": 2010, "venue": "Eds. Ontology and the Lexicon. Cambridge Studies in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R., Chang, R.-Y., & Li, S.-b. (2010). Sinica BOW: A bilingual ontological wordnet. In: Chu-Ren Huang et al. Eds. Ontology and the Lexicon. Cambridge Studies in Natural Language Processing. Cambridge: Cambridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cross-lingual Portability of Semantic relations: Bootstrapping Chinese WordNet with English WordNet Relations. Languages and Linguistics", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "E", "middle": [ "I J" ], "last": "Tseng", "suffix": "" }, { "first": "D", "middle": [ "B S" ], "last": "Tsai", "suffix": "" }, { "first": "B", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2003, "venue": "", "volume": "4", "issue": "", "pages": "509--532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R., Tseng, E. I. J., Tsai, D. B. S., & Murphy, B. (2003). Cross-lingual Portability of Semantic relations: Bootstrapping Chinese WordNet with English WordNet Relations. Languages and Linguistics. 4(3), 509-532.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Chinese Word Sketches. ASIALEX 2005: Words in Asian Cultural Context. June 1-3. Singapore. Lexical Data Consortium. 2005. 
Chinese Gigaword Corpus", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "P", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "S", "middle": [], "last": "Smith", "suffix": "" }, { "first": "D", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 2005, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilgarriff, A., Huang, C.-R., Rychly, P., Smith, S., & Tugwell, D. (2005). Chinese Word Sketches. ASIALEX 2005: Words in Asian Cultural Context. June 1-3. Singapore. Linguistic Data Consortium. 2005. Chinese Gigaword Corpus 2.5.: http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2005T14.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Introduction to WordNet: An On-line Lexical Database", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the fifteenth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., & Miller, K. (1993). Introduction to WordNet: An On-line Lexical Database. 
In Proceedings of the fifteenth International Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Chinese Sketch Engine and the Extraction of Grammatical Collocations", "authors": [ { "first": "C", "middle": [ "R" ], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C", "middle": [ "M" ], "last": "Chiu", "suffix": "" }, { "first": "S", "middle": [], "last": "Smith", "suffix": "" }, { "first": "P", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Bai", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "Studies of Chinese Gloss Language: Theories and Applications. Shanghai: Xuelin. (\u5b89\u83ef\uf9f4\u3002\u300a\u6f22\u8a9e\u91cb\u7fa9\u5143\u8a9e\u8a00\uff1a\uf9e4\uf941\u8207\u61c9\u7528\u7814\u7a76\u300b\u3002\u4e0a\u6d77\uff1a\u5b78\uf9f4\u51fa\u7248\u793e\u3002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.R., Kilgarriff, A., Wu, Y., Chiu, C.M., Smith, S., Rychly, P., Bai, M.H., & Chen, K.J. (2005). Chinese Sketch Engine and the Extraction of Grammatical Collocations, In References An, H.-L. (2009). Studies of Chinese Gloss Language: Theories and Applications. Shanghai: Xuelin. 
(\u5b89\u83ef\uf9f4\u3002\u300a\u6f22\u8a9e\u91cb\u7fa9\u5143\u8a9e\u8a00\uff1a\uf9e4\uf941\u8207\u61c9\u7528\u7814\u7a76\u300b\u3002\u4e0a\u6d77\uff1a\u5b78\uf9f4\u51fa\u7248\u793e\u3002)", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Oxford Guide to Practical Lexicography", "authors": [ { "first": "B", "middle": [ "T S" ], "last": "Atkins", "suffix": "" }, { "first": "M", "middle": [], "last": "Rundell", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atkins, B. T. S., & Rundell, M. (2008). The Oxford Guide to Practical Lexicography. Oxford: Oxford University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Integrating and porting knowledge across languages", "authors": [ { "first": "J", "middle": [], "last": "Atserias", "suffix": "" }, { "first": "L", "middle": [], "last": "Villarejo", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "J", "middle": [ "G" ], "last": "Salgado", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Unibertsitatea", "suffix": "" } ], "year": 2003, "venue": "Proceeding of Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "31--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atserias, J., Villarejo, L., Rigau, G., Salgado, J. G., & Unibertsitatea, E. H. (2003). Integrating and porting knowledge across languages. In Proceeding of Recent Advances in Natural Language Processing, 31-37.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The definition of definition", "authors": [ { "first": "L", "middle": [ "L" ], "last": "Bernard", "suffix": "" } ], "year": 1941, "venue": "Social Forces", "volume": "19", "issue": "", "pages": "500--510", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernard, L. L. (1941). The definition of definition. 
Social Forces, 19, 500-510.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Definition of the Local Base Concepts and Their Mapping with the ILI Records", "authors": [ { "first": "D", "middle": [], "last": "Cristea", "suffix": "" }, { "first": "G", "middle": [], "last": "Puscasu", "suffix": "" }, { "first": "O", "middle": [], "last": "Postolache", "suffix": "" }, { "first": "E", "middle": [], "last": "Galiotou", "suffix": "" }, { "first": "M", "middle": [], "last": "Grigoriadou", "suffix": "" }, { "first": "A", "middle": [], "last": "Charcharidou", "suffix": "" }, { "first": "E", "middle": [], "last": "Papakitsos", "suffix": "" }, { "first": "S", "middle": [], "last": "Selimis", "suffix": "" }, { "first": "S", "middle": [], "last": "Stamou", "suffix": "" }, { "first": "C", "middle": [], "last": "Krstev", "suffix": "" }, { "first": "G", "middle": [], "last": "Pavlovic-Lazetic", "suffix": "" }, { "first": "I", "middle": [], "last": "Obradovic", "suffix": "" }, { "first": "D", "middle": [], "last": "Vitas", "suffix": "" }, { "first": "O", "middle": [], "last": "Cetinoglu", "suffix": "" }, { "first": "D", "middle": [], "last": "Tufis", "suffix": "" }, { "first": "K", "middle": [], "last": "Pala", "suffix": "" }, { "first": "T", "middle": [], "last": "Pavelek", "suffix": "" }, { "first": "P", "middle": [], "last": "Smrz", "suffix": "" }, { "first": "S", "middle": [], "last": "Koeva", "suffix": "" }, { "first": "G", "middle": [], "last": "Totkov", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristea, D., Puscasu, G., Postolache, O., Galiotou, E., Grigoriadou, M., Charcharidou, A., Papakitsos, E., Selimis, S., Stamou, S., Krstev, C., Pavlovic-Lazetic, G., Obradovic, I., Vitas, D., Cetinoglu, O., Tufis, D., Pala, K., Pavelek, T., Smrz, P., Koeva, S., & Totkov, G. (2002). Definition of the Local Base Concepts and Their Mapping with the ILI Records. 
Deliverable D.4.1, WP4, BalkaNet, IST-2000-29388.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The pragmatics of lexical specificity", "authors": [ { "first": "A", "middle": [], "last": "Cruse", "suffix": "" } ], "year": 1977, "venue": "Journal of Linguistics", "volume": "13", "issue": "", "pages": "153--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cruse, A. (1977). The pragmatics of lexical specificity. Journal of Linguistics, 13, 153-164.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Meaning in Language: An Introduction to Semantics and Pragmatics", "authors": [ { "first": "A", "middle": [], "last": "Cruse", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cruse, A. (2000). Meaning in Language: An Introduction to Semantics and Pragmatics. Oxford: Oxford University Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "On-line lexical resources for language learners: Assessment of some approaches to word formation", "authors": [ { "first": "G", "middle": [], "last": "Cumming", "suffix": "" }, { "first": "S", "middle": [], "last": "Cropp", "suffix": "" }, { "first": "R", "middle": [], "last": "Sussex", "suffix": "" } ], "year": 1994, "venue": "System", "volume": "22", "issue": "", "pages": "369--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cumming, G., Cropp, S., & Sussex, R. (1994). On-line lexical resources for language learners: Assessment of some approaches to word formation. System, 22, 369-377.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A basic sight vocabulary", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Dolch", "suffix": "" } ], "year": 1936, "venue": "The Elementary School Journal", "volume": "36", "issue": "", "pages": "456--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dolch, E. W. (1936). A basic sight vocabulary. 
The Elementary School Journal, 36, 456-460.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Reading Vocabulary for the Primary Grades", "authors": [ { "first": "A", "middle": [ "I" ], "last": "Gates", "suffix": "" } ], "year": 1926, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gates, A. I. (1926). A Reading Vocabulary for the Primary Grades. New York: Teachers College, Columbia University.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The search for the shared semantic core of all languages", "authors": [ { "first": "C", "middle": [], "last": "Goddard", "suffix": "" } ], "year": 2002, "venue": "Meaning and Universal Grammar: Theory and Empirical Findings", "volume": "", "issue": "", "pages": "5--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goddard, C. (2002). The search for the shared semantic core of all languages. Meaning and Universal Grammar: Theory and Empirical Findings (Vol. I), ed. by C. Goddard & A. Wierzbicka, 5-40. Amsterdam: John Benjamins. 5-40.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Toward principles for the design of ontologies used for knowledge sharing", "authors": [ { "first": "T", "middle": [ "R" ], "last": "Gruber", "suffix": "" } ], "year": 1995, "venue": "International Journal of Human-Computer Studies", "volume": "43", "issue": "", "pages": "907--928", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gruber, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, 43, 907-928.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Principles of Distinguishing and Describing Word Senses in Chinese", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Taipei: Academia Sinica. 
(\u9ec3\u5c45\u4ec1\u3002\u300a\u610f\u7fa9\u8207\u8a5e\u7fa9\u300b\u7cfb\uf99c\u300a\u4e2d\u6587\u8a5e\u5f59\u610f\u7fa9\u7684\u5340\u8fa8 \u8207\u63cf\u8ff0\u539f\u5247\u300b\u7b2c\u4e94\u7248\u3002\u81fa\uf963\uff1a\u4e2d\u592e\u7814\u7a76\u9662\u3002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R. (2008). Principles of Distinguishing and Describing Word Senses in Chinese (5 th ed.). Taipei: Academia Sinica. (\u9ec3\u5c45\u4ec1\u3002\u300a\u610f\u7fa9\u8207\u8a5e\u7fa9\u300b\u7cfb\uf99c\u300a\u4e2d\u6587\u8a5e\u5f59\u610f\u7fa9\u7684\u5340\u8fa8 \u8207\u63cf\u8ff0\u539f\u5247\u300b\u7b2c\u4e94\u7248\u3002\u81fa\uf963\uff1a\u4e2d\u592e\u7814\u7a76\u9662\u3002)", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Exploring the automatic selection of basic level concepts", "authors": [ { "first": "R", "middle": [], "last": "Izquierdo", "suffix": "" }, { "first": "A", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Recent Advances on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Izquierdo, R., Su\u00e1rez, A., & Rigau, G. (2008). Exploring the automatic selection of basic level concepts. In Proceedings of the International Conference on Recent Advances on Natural Language Processing.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Towards the notion of gloss, and the adoption of linguistic resources in formal ontology engineering", "authors": [ { "first": "M", "middle": [], "last": "Jarrar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 15th International World Wide Web Conference", "volume": "", "issue": "", "pages": "497--503", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jarrar, M. (2006). 
Towards the notion of gloss, and the adoption of linguistic resources in formal ontology engineering. In Proceedings of the 15th International World Wide Web Conference, 497-503.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Gloss in sanskrit wordnet. Sanskrit Computational Linguistics", "authors": [ { "first": "M", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "I", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "C", "middle": [], "last": "Dangarikar", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2010, "venue": "", "volume": "6465", "issue": "", "pages": "190--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kulkarni, M., Kulkarni, I., Dangarikar, C., & Bhattacharyya, P. (2010). Gloss in sanskrit wordnet. Sanskrit Computational Linguistics, 6465, 190-197.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dictionaries: The Art and Craft of Lexicography", "authors": [ { "first": "S", "middle": [ "I" ], "last": "Landau", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landau, S. I. (2001). Dictionaries: The Art and Craft of Lexicography (2 nd ed.). Cambridge: Cambridge University Press.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Foundations of Cognitive Grammar", "authors": [ { "first": "R", "middle": [ "W" ], "last": "Langacker", "suffix": "" } ], "year": 1987, "venue": "", "volume": "I", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Langacker, R. W. (1987). Foundations of Cognitive Grammar (Vol. I). 
Stanford: Stanford University Press.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Defining core vocabulary and tracking its distribution across spoken and written genres", "authors": [ { "first": "D", "middle": [ "Y W" ], "last": "Lee", "suffix": "" } ], "year": 2001, "venue": "Journal of English Linguistics", "volume": "29", "issue": "", "pages": "250--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, D. Y. W. (2001). Defining core vocabulary and tracking its distribution across spoken and written genres. Journal of English Linguistics, 29, 250-278.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A new type of folk-inspired definition in English monolingual learners' dictionaries and its usefulness for conveying syntactic information", "authors": [ { "first": "R", "middle": [], "last": "Lew", "suffix": "" }, { "first": "A", "middle": [], "last": "Dziemianko", "suffix": "" } ], "year": 2006, "venue": "International Journal of Lexicography", "volume": "19", "issue": "", "pages": "225--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lew, R., & Dziemianko, A. (2006). A new type of folk-inspired definition in English monolingual learners' dictionaries and its usefulness for conveying syntactic information. International Journal of Lexicography, 19, 225-242.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A Computational Study of the Basic Level Nouns in English. Unpublished doctoral dissertation", "authors": [ { "first": "S.-Y", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, S.-Y. (2010). A Computational Study of the Basic Level Nouns in English. 
Unpublished doctoral dissertation, National Taiwan Normal University.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "What constitutes a basic vocabulary for spoken communication?", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Mccarthy", "suffix": "" } ], "year": 1999, "venue": "Studies in English Language and Literature", "volume": "1", "issue": "", "pages": "233--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, M. J. (1999). What constitutes a basic vocabulary for spoken communication? Studies in English Language and Literature, 1, 233-249.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "English Vocabulary in Use: Elementary", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Mccarthy", "suffix": "" }, { "first": "F", "middle": [], "last": "Dell", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCarthy, M. J., & O'Dell, F. (1999). English Vocabulary in Use: Elementary. Cambridge: Cambridge University Press.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Introduction to WordNet: An on-line lexical database", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., & Miller, K. J. (1990). Introduction to WordNet: An on-line lexical database. 
International Journal of Lexicography, 3, 235-244.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Basic English: A General Introduction with Rules and Grammar", "authors": [ { "first": "C", "middle": [ "K" ], "last": "Ogden", "suffix": "" } ], "year": 1930, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ogden, C. K. (1930). Basic English: A General Introduction with Rules and Grammar. London: Paul Treber.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "What Is Meaning? Fundamentals of Formal Semantics", "authors": [ { "first": "P", "middle": [ "H" ], "last": "Portner", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Portner, P. H. (2005). What Is Meaning? Fundamentals of Formal Semantics. Malden: Blackwell.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Reductive paraphrase and meaning: A critique of Wierzbickian semantics", "authors": [ { "first": "N", "middle": [], "last": "Riemer", "suffix": "" } ], "year": 2006, "venue": "Linguistics and Philosophy", "volume": "29", "issue": "", "pages": "347--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riemer, N. (2006). Reductive paraphrase and meaning: A critique of Wierzbickian semantics. Linguistics and Philosophy, 29, 347-379.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Introducing Semantics", "authors": [ { "first": "N", "middle": [], "last": "Riemer", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riemer, N. (2010). Introducing Semantics. 
Cambridge: Cambridge University Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Basic objects in natural categories", "authors": [ { "first": "E", "middle": [], "last": "Rosch", "suffix": "" }, { "first": "C", "middle": [], "last": "Mervis", "suffix": "" }, { "first": "W", "middle": [], "last": "Gray", "suffix": "" }, { "first": "D", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "P", "middle": [], "last": "Boyes-Braem", "suffix": "" } ], "year": 1976, "venue": "Cognitive Psychology", "volume": "8", "issue": "", "pages": "382--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosch, E., Mervis, C., Gray, W., Johnson, D., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382-439.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "An Introduction to Cognitive Linguistics", "authors": [ { "first": "F", "middle": [], "last": "Ungerer", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Schmid", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ungerer, F., & Schmid, H. J. (2006). An Introduction to Cognitive Linguistics. New York: Longman.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The EuroWordNet Base Concepts and Top-Ontology. 
Deliverable D017D034D036 EuroWordNet", "authors": [ { "first": "P", "middle": [], "last": "Vossen", "suffix": "" }, { "first": "L", "middle": [], "last": "Bloksma", "suffix": "" }, { "first": "H", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "S", "middle": [], "last": "Climent", "suffix": "" }, { "first": "N", "middle": [], "last": "Calzolari", "suffix": "" }, { "first": "A", "middle": [], "last": "Roventini", "suffix": "" }, { "first": "F", "middle": [], "last": "Bertagna", "suffix": "" }, { "first": "A", "middle": [], "last": "Alonge", "suffix": "" }, { "first": "W", "middle": [], "last": "Peters", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "2--4003", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vossen, P., Bloksma, L., Rodriguez, H., Climent, S., Calzolari, N., Roventini, A., Bertagna, F., Alonge, A., & Peters, W. (1998). The EuroWordNet Base Concepts and Top-Ontology. Deliverable D017D034D036 EuroWordNet LE2-4003.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A General Service List of English Words", "authors": [ { "first": "M", "middle": [], "last": "West", "suffix": "" } ], "year": 1953, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "West, M. (1953). A General Service List of English Words. London: Longman.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "A first-grade vocabulary study", "authors": [ { "first": "H", "middle": [ "E" ], "last": "Wheeler", "suffix": "" }, { "first": "E", "middle": [ "A" ], "last": "Howell", "suffix": "" } ], "year": 1930, "venue": "The Elementary School Journal", "volume": "31", "issue": "", "pages": "52--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wheeler, H. E., & Howell, E. A. (1930). A first-grade vocabulary study. The Elementary School Journal, 31, 52-60", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Semantic Primitives. 
(Translated by A", "authors": [ { "first": "A", "middle": [], "last": "Wierzbicka", "suffix": "" } ], "year": 1972, "venue": "Wierzbicka & J. Besemeres)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wierzbicka, A. (1972). Semantic Primitives. (Translated by A. Wierzbicka & J. Besemeres) Frankfurt: Athen\u00e4um Verlag.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Semantics: Primes and Universals", "authors": [ { "first": "A", "middle": [], "last": "Wierzbicka", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wierzbicka, A. (1996). Semantics: Primes and Universals. Oxford: Oxford University Press.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Manual of Lexicography. The Hague: Mouton", "authors": [ { "first": "L", "middle": [], "last": "Zgusta", "suffix": "" } ], "year": 1971, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zgusta, L. (1971). Manual of Lexicography. The Hague: Mouton.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Distributional consistency: A general method for defining a core lexicon", "authors": [ { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "S", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "1119--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, H., Huang, C., & Yu, S. (2004). Distributional consistency: A general method for defining a core lexicon. 
Proceedings of the 4th International Conference on Language Resources and Evaluation, 1119-1222.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Holding the Republic of China Computational Linguistics Conference (ROCLING) annually", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holding the Republic of China Computational Linguistics Conference (ROCLING) annually.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": 
"", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Chinese Readability using Term Frequency and Lexical Chain Top level definition of a word sense in E-HowNet." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "A general lexical chaining algorithm." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "An example of lexical chaining result." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Result of classifier for lower grade." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "Morris, J., & Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1),21-48. National Center for Education Statistics. (2002).Adult literacy in America (3rd ed.). Washington, D. C.: U.S. Dept. of Education. Petersen, S. E., & Ostendorf, M. (2009). A machine learning approach to reading level assessment. 
Computer Speech and Language, 23, 89-106. Pitler, E., & Nenkova, A. (2008). Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 186-195. Schwarm, S. E., & Ostendorf, M. (2005). Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting of the ACL, 523-530. Stenner, A. J. (1996). Measuring reading comprehension with the Lexile framework. In Fourth North American Conference on Adolescent/Adult Literacy. Stokes, N. (2004). Applications of lexical cohesion analysis in the topic detection and tracking domain. (Ph.D. Thesis), Department of Computer Science, National University of Ireland, Dublin. Sung, Y.-T., Chang, T.H., Chen, J.-L., Cha, J.-H., Huang, C.-H., Hu, M.-K., & Hsu, F.-Y. (2011). The construction of Chinese readability index explorer and the analysis of text readability. In 21th Annual Meeting of Society for Text and Discourse Process, Poitiers, France. Yang, Y., & Pedersen, J. O. (1997). A comparative study on feature selection in text categorization. In Proceedings of the Fourteenth International Conference on Machine Learning, 412-420." 
}, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Unix \u96fb\u8166(Unbuntu: IntelCore2 Duo 2.80GHz; FreeBSD: AMD Sempron 1.8GHz)\uff0c\u8f14\u4ee5 NLTK(Loper & Bird, 2002)\u5de5\u5177\u958b\u767c\u8a08\u7b97\u5de5\u5177\uff0c\u9032\ufa08\u540c\u7fa9\u8a5e\u7d44\u7684\u591a\u968e\u5c64\u91cb\u7fa9\u95dc\uf997\u8a08\u7b97\uff0c\u8207 \u8fad\u7d44\u4e4b\u9593\u591a\u91cb\u8a5e\u5f59\u4e4b\u9593\u7684\u4ea4\u4e92\u6bd4\u8f03\uff0c\u7e3d\u8655\uf9e4\u6642\u9593\u5927\u7d04 32 \u5c0f\u6642\uff0c\u7e3d\u8a08\u8655\uf9e4\u540c\u7fa9\u8a5e\uf9d0 8311 \uf9d0\uff0c\u7d50\u679c\u6587\u4ef6\u7d04\u70ba 2" }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "thinks something like this: something good happened to this other person it didn't happen to me I want things like this to happen to me because of this, this person feels something bad X feels something like this" }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "To conduct research in computational linguistics. 2. To promote the utilization and development of computational linguistics. 3. To encourage research in and development of the field of Chinese computational linguistics both domestically and internationally. 4. To maintain contact with international groups who have similar goals and to cultivate academic exchange." }, "TABREF2": { "content": "
Code | Feature
lc-1 | Number of lexical chains
lc-2 | Average length of lexical chains
lc-3 | Average span of lexical chains
lc-4 | Number of long lexical chains
lc-5 | Average number of active chains per word
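The five lexical-chain features above are simple statistics over the chains extracted from a document. A minimal sketch, assuming chains are given as lists of (word, position) links; the function and its long-chain threshold are my own illustration, not the paper's code:

```python
# Hypothetical sketch: computing the five lexical-chain features lc-1..lc-5
# from chains represented as lists of (word, position) links.
def chain_features(chains, n_tokens, long_threshold=5):
    # Span of a chain: distance covered from its first to its last link.
    spans = [max(p for _, p in c) - min(p for _, p in c) + 1 for c in chains]
    return {
        'lc-1': len(chains),                                    # number of chains
        'lc-2': sum(len(c) for c in chains) / len(chains),      # average length
        'lc-3': sum(spans) / len(chains),                       # average span
        'lc-4': sum(len(c) >= long_threshold for c in chains),  # long chains
        'lc-5': sum(spans) / n_tokens,  # chains active per token, on average
    }
```

Here lc-5 is interpreted as the average number of chains whose span covers a given token, which is one plausible reading of 'active chains per word'.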
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF5": { "content": "
Reading Level | Grade Level | Mandarin | Social Studies | Life Science | No. of Articles
lower | 1st grade | 42 | 0 | 73 | 115
lower | 2nd grade | 56 | 0 | 55 | 111
middle | 3rd grade | 61 | 53 | 0 | 114
middle | 4th grade | 67 | 50 | 0 | 117
higher | 5th grade | 83 | 58 | 0 | 141
higher | 6th grade | 88 | 54 | 0 | 142
Total | | 397 | 215 | 128 | 740
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF6": { "content": "
Feature set | Precision | Recall | F-measure | Accuracy
lc-1-2-3-4-5 | 0.76 | 0.57 | 0.65 | 0.81
Feature set | Precision | Recall | F-measure | Accuracy
lc-1-2-3-4-5 | 0.70 | 0.83 | 0.76 | 0.68
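In both subtables the F-measure column is the harmonic mean of the reported precision and recall, which can be checked directly (a minimal sketch; the numbers are the rows above):

```python
# F1 is the harmonic mean of precision and recall; the rows above satisfy it
# to within rounding.
def f_measure(precision, recall):
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F-measure) for lc-1-2-3-4-5 in the two subtables
rows = [(0.76, 0.57, 0.65), (0.70, 0.83, 0.76)]
for p, r, f in rows:
    assert abs(f_measure(p, r) - f) < 0.01
```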
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF7": { "content": "
Feature set | Precision | Recall | F-measure | Accuracy
tf-top50 | 0.78 | 0.87 | 0.82 | 0.88
tf-top100 | 0.81 | 0.86 | 0.83 | 0.89
tf-top200 | 0.80 | 0.89 | 0.84 | 0.90
tf-top300 | 0.82 | 0.89 | 0.85 | 0.90
tf-top400 | 0.86 | 0.89 | 0.87 | 0.92
tf-top500 | 0.84 | 0.89 | 0.87 | 0.92
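The tf-topN feature sets rank candidate terms by raw frequency and keep the N most frequent as classifier features; a minimal sketch of that selection step (the function name and toy data are mine, assuming pre-tokenized articles):

```python
from collections import Counter

# Hypothetical sketch of building a tf-topN feature vocabulary:
# count term frequency over the training articles and keep the
# N most frequent terms.
def tf_top_n(tokenized_articles, n):
    counts = Counter()
    for tokens in tokenized_articles:
        counts.update(tokens)
    return [term for term, _ in counts.most_common(n)]

docs = [['word', 'chain', 'word'], ['chain', 'text']]
features = tf_top_n(docs, 2)  # the two most frequent terms
```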
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF8": { "content": "
Assessing Chinese Readability using Term Frequency and Lexical Chain
Feature set | Precision | Recall | F-measure | Accuracy
tf-top50 | 0.81 | 0.88 | 0.84 | 0.79
tf-top100 | 0.81 | 0.90 | 0.85 | 0.81
tf-top200 | 0.83 | 0.92 | 0.87 | 0.83
tf-top300 | 0.86 | 0.90 | 0.88 | 0.84
tf-top400 | 0.82 | 0.92 | 0.87 | 0.83
tf-top500 | 0.82 | 0.95 | 0.88 | 0.84
Feature set | Precision | Recall | F-measure | Accuracy
lc-1-2-3-4-5 + tf-top50 | 0.82 | 0.87 | 0.84 | 0.80
lc-1-2-3-4-5 + tf-top100 | 0.84 | 0.89 | 0.86 | 0.82
lc-1-2-3-4-5 + tf-top200 | 0.87 | 0.88 | 0.88 | 0.84
lc-1-2-3-4-5 + tf-top300 | 0.89 | 0.87 | 0.88 | 0.85
lc-1-2-3-4-5 + tf-top400 | 0.83 | 0.93 | 0.88 | 0.84
lc-1-2-3-4-5 + tf-top500 | 0.83 | 0.93 | 0.88 | 0.84
[Figure: F-measure of the TF-IDF and Lexical chain + TF-IDF classifiers as the number of terms grows from 0 to 500]
Figure 6. Result of classifier for middle grade.
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF9": { "content": "
CKIP Group. (2010). Academia Sinica Balanced Corpus (Version 3.1). Institute of
Information Science, Academia Sinica.
Collins-Thompson, K., & Callan, J. (2005). Predicting reading difficulty with statistical
language models. Journal of the American Society for Information Science and
Technology (JASIST), 56(13), 1448-1462.
Lin, S.-Y., Su, C.-C., Lai, Y.-D., Yang, L.-C., & Hsieh, S.-K. (2009). Assessing text
readability using hierarchical lexical relations retrieved from WordNet. Computational
Linguistics and Chinese Language Processing, 14(1), 45-84.
McLaughlin, G. H. (1969). SMOG grading - A new readability formula. Journal of Reading,
12(8), 639-646.
Dai, L., Liu, B., Xia, Y., & Wu, S. (2008). Measuring semantic similarity between words
using HowNet. In International Conference on Computer Science and Information
Technology 2008, 601-605.
Dong, Z. (n.d.). HowNet knowledge database. http://www.keenage.com/
Duan, K.-B., & Keerthi, S. S. (2005). Which is the best multiclass SVM method? An
empirical study. In Proceedings of the Sixth International Workshop on Multiple
Classifier Systems.
Eickhoff, C., Serdyukov, P., & de Vries, A. P. (2010). Web page classification on child
suitability. In Proceedings of the 19th ACM International Conference on Information
and Knowledge Management.
Fellbaum, C. (Ed.). (1998). WordNet: An electronic lexical database and some of its
applications. Cambridge, MA: MIT Press.
Feng, L., Elhadad, N., & Huenerfauth, M. (2009). Cognitively motivated features for
readability assessment. In Proceedings of the 12th Conference of the European Chapter
of the ACL, 229-237.
Feng, L., Jansche, M., Huenerfauth, M., & Elhadad, N. (2010). A comparison of features for
automatic readability assessment. In The 23rd International Conference on
Computational Linguistics (COLING 2010): Poster Volume, 276-284.
Halliday, M. A. K., & Hasan, R. (1976). Cohesion in English. Longman.
Hasan, R. (1984). Coherence and cohesive harmony. In J. Flood (Ed.), Understanding reading
comprehension: Cognition, language and the structure of prose (pp. 184-219), Newark,
DE: International Reading Association.
", "num": null, "html": null, "type_str": "table", "text": "Lin, S.-Y., Su, C.-C., Lai, Y.-D., Yang, L.-C., & Hsieh, S.-K. (2009). Assessing text readability using hierarchical lexical relations retrieved from WordNet." }, "TABREF10": { "content": "
Jia-Fei Hong & Chu-Ren Huang
Abstract
In recent years, frequent cross-strait exchange has led the vocabularies used on the two sides of the Taiwan Strait to influence each other considerably. Linguistic research on the Chinese lexicon, whether on pronunciation, semantics, or pragmatics, has found subtle differences in word usage even though the two sides use the same Chinese language. Both sides nevertheless write with Chinese-character-based systems, and only the character forms show predictable, regular correspondences. On the premise that both sides use Chinese script, this paper compares the characteristics and phenomena of cross-strait word usage in traditional and simplified Chinese, in order to investigate issues of semantic correspondence and semantic change.
First, following the correspondences of Hong and Huang (2006) and taking the English WordNet as the standard of comparison, we compare the vocabularies of two Chinese versions of WordNet, Peking University's Chinese Concept Dictionary (CCD) and the Chinese Wordnet (CWN) of the Institute of Linguistics, Academia Sinica, to examine how the two sides use words for the same concepts. This paper then takes the words used in CCD and CWN and compares their relative frequencies in the traditional-Chinese and simplified-Chinese portions of the Gigaword Corpus, exploring where the two sides use the same words or different words and how such usage is distributed, and verifies the comparison against traditional- and simplified-Chinese data retrieved from Google web pages.
Keywords: CCD, CWN, WordNet, Gigaword Corpus, Google, Cross-Strait Lexicon, Word Sense, Concept
", "num": null, "html": null, "type_str": "table", "text": "Vol. 18, No. 2, June 2013, pp. 19-34 19 \u00a9 The Association for Computational Linguistics and Chinese Language Processing \u4ee5\u4e2d\u6587\u5341\u5104\u8a5e\u8a9e\uf9be\u5eab\u70ba\u57fa\u790e\u4e4b\uf978\u5cb8\u8a5e\u5f59\u5c0d\u6bd4\u7814\u7a76" }, "TABREF11": { "content": "
Vol. 18, No. 2, June 2013, pp. 35-56. Synonym Concept Extraction Based on a Dictionary-Gloss Association Method.
In WordNet, nouns, verbs, adjectives, and adverbs are each organized into synonym sets (synsets) that express the most basic lexical concepts. Different semantic relations link the various synsets to one another, and together these links form the overall architecture and the full picture of WordNet.
Since Miller et al. (1993) and Fellbaum (1998) developed WordNet, new versions have been released continuously; the newest is WordNet 3.0, and versions differ in the number of synsets and in their word definitions. Most scholars who take WordNet as their research data, however, still use WordNet 1.6, currently the version most widely used by computational linguists; WordNet 1.6 contains nearly 100,000 synsets.
We know that bilingual domain classification can further the development of domain lexicons. As mentioned in the previous section, WordNet served as the basis for developing a traditional-Chinese system, the Chinese Wordnet (CWN), and a simplified-Chinese system, the Chinese Concept Dictionary (CCD); we use these bilingual wordnets as lexical knowledge bases to realize and support our research on lexical concepts.
In the Chinese-English bilingual wordnet, each English synset is given its three most suitable and equivalent Chinese translations, and when a translation is not a true synonym, its semantic relation is annotated as well (Huang, Tseng, Tsai & Murphy, 2003). These bilingual wordnets, developed by the Chinese Wordnet group of the Institute of Linguistics, Academia Sinica, link every synset to a SUMO concept node, which led to the Academia Sinica Bilingual Ontological Wordnet (Sinica BOW) (Huang, Chang & Li, 2010). When no direct Chinese-English word correspondence can be obtained, the semantic relations in the bilingual wordnet can be used to develop and predict domain classifications.
4. WordNet and the Chinese Concept Dictionary (CCD)
CCD, the Chinese Concept Dictionary, is a Chinese-English bilingual wordnet whose overall architecture also derives from WordNet (Yu & Yu, 2004; Yu, Liu & Yu, 2003; Liu, Yu & Yu, 2003). The CCD development manual records that the team's first requirement in describing word senses was not to break the original WordNet architecture, in which synsets define concepts and their semantic relations. At the same time, the CCD team recognized that Chinese and English may require different descriptive structures, so they did not merely express the content of Chinese words; they also developed the relations between Chinese word senses and concepts in order to emphasize the specific character of Chinese.
The CCD team focused on the overall CCD architecture, proposing definitions of the synsets for each concept, the concepts and definitions they present, and the hypernym and hyponym relations in the concept network; every synset has its basic relations, and semantic relations also hold among synsets. CCD's principles of logical inference over the semantic network are expressed in mathematical form and can assist researchers in Chinese semantic analysis.
Since September 2000, the Institute of Computational Linguistics at Peking University has been building CCD with WordNet as the standard: a Chinese-English bilingual wordnet that can serve various lines of research, such as machine translation (MT) and information extraction (IE).
Because the English concepts of WordNet and the Chinese concepts of CCD come from two different knowledge backgrounds, the interrelations and concepts between the two in CCD are extremely complex. CCD contains a large number of intricate paired and grouped subnetworks: roughly 10^5 concept nodes and 10^6 concept relations in grouped subnetworks, as shown in Figure 1.
Figure 1. The complex relational structure of WordNet subnetworks.
5.
Literature Review
Past research comparing the cross-strait lexicons has mostly emphasized differences in surface linguistic features, for example comparisons of pronunciation and vocabulary (Nanjing Language and Script Web, 2004), or analyses of divergence in pronunciation, vocabulary, grammar, and modes of expression (e.g., Wang & Li, 1996; Yao, 1997; Hsu, 1999; Tai, 1996).
In recent years, a newer approach to cross-strait lexical comparison takes WordNet as its basis and compares cross-strait corpus data (e.g., Hong & Huang, 2006), or uses the Chinese Gigaword Corpus (2005) to probe differences in how the two sides use Chinese words, for example differences in collocation, words used only in Taiwan or only in the Mainland, special usages in particular contexts, and differences in usage habits (e.g., Hong & Huang, 2008).
6.
Methodology
This study draws mainly on three large resources: the English WordNet, the traditional-Chinese Chinese Wordnet (CWN), and the simplified-Chinese Chinese Concept Dictionary (CCD). We first compare the English-Chinese translations of the traditional-Chinese system with those of the simplified-Chinese system, looking for the differences between the two and their usage distributions.
The same concept belongs to a single synset, but because the two sides differ in word usage, it may be lexicalized differently; even so, for some synsets both sides express the same conceptual sense with the same words. From the English-Chinese translation data of CWN and CCD, this paper concentrates on synsets whose cross-strait translations are completely identical or completely different. We then take these completely identical and completely different words and, with the Gigaword Corpus as the basis, analyze how they are actually used on the two sides.
Next, taking corpora as the point of departure, this study uses the roughly 1.4-billion-character Chinese Gigaword Corpus as the main data source and Chinese Word Sketch as the search tool (Chinese Gigaword Corpus, 2005; Chinese Word Sketch Engine; Kilgarriff et al., 2005). The Chinese Gigaword Corpus contains large amounts of data from Mainland China, Taiwan, and Singapore: about 500 million characters of Xinhua News Agency data (XIN), about 800 million characters of Central News Agency data (CNA), and about 30 million characters of Singapore Lianhe Zaobao data (Zaobao). This study compares only the Mainland XIN data with the Taiwan CNA data, and can therefore provide large-scale lexical evidence of cross-strait differences.
Finally, this paper also treats Google as a corpus holding large amounts of traditional and simplified Chinese, and uses the traditional- and simplified-Chinese web pages Google retrieves to verify the observed cross-strait usage differences with actual usage evidence.
To compare the traditional- and simplified-Chinese portions of the Chinese Gigaword Corpus and their usage differences, we adopt the Chinese Word Sketch system, which provides four main search functions: concordance, word sketch, thesaurus, and sketch-diff. The Sketch-Diff function is precisely a tool for comparing lexical differences: it shows the actual usage and distribution when the two sides use different words for the same concept, and the similarities and differences a word of the same sense exhibits in the actual corpora of the two sides. We mainly use the word sketch difference function; its interface is shown in Figure 2.
Figure 2. Word sketch comparison in the Chinese Word Sketch system.
With this function we further examine the usage and distribution of word pairs whose CCD and CWN translations differ. We mainly compare cross-strait word frequencies: if a CCD-CWN correspondence really has the same sense, yet the two sides use completely identical or completely different words, then the frequencies of the respective words obtained by cross-comparing the traditional and simplified Gigaword data should likewise show similar distributions.
Such data not only demonstrate that the traditional-Chinese system and the simplified-Chinese system differ in their English-Chinese translations, but also prove that the two sides indeed use different words to express the same conceptual sense; understanding these actual cross-strait lexical phenomena enables the analysis carried out in this study.
7. Corpus Analysis of CCD and CWN
The English-Chinese translations of the traditional-Chinese system (CWN) and of the simplified-Chinese system (CCD) are compared across the four parts of speech (nouns, verbs, adjectives, and adverbs), with WordNet as the reference: within the same synset, the translation words of the traditional system and those of the simplified system are collected and then compared.
For all four parts of speech it is clear that, within a single synset, the traditional-Chinese system may have several corresponding translation words, and likewise the simplified-Chinese system. When the translation words used on the two sides are exactly the same, we call the case 'completely identical'; when no translation word is shared, we call it 'completely different', that is, 'truly different'; when only one or more of the translation words overlap, we call it 'partially identical'. Among the partially identical cases, if the translation words on the two sides share the same initial character, we speak of 'same word head'; if they merely share some character, of 'partial character overlap'. For example:
Table 1. Distribution of CCD and CWN translation correspondences
Synset | CCD translation | CWN translation | Type
bookshelf | 書架、書櫃、書櫥 | 書架、書櫃、書櫥 | completely identical
lay off | 下崗 | 解雇 | completely different
immediately | 立即 | 立刻 | same word head
according | 據報 | 根據 | partial character overlap
In the comparison of CWN and CCD, 70,744 synsets in total have identical translations, distributed over adjectives, adverbs, nouns, and verbs. Nouns account for the largest share of the completely identical translations, 66.79%; verbs account for the smallest, only 4.05%. The detailed distribution is shown in the figure below.
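The four-way classification described above (completely identical, completely different, same word head, partial character overlap) can be sketched as a comparison of the two translation sets; this is my own illustration of the scheme, not code from the study:

```python
# Sketch of the CCD-vs-CWN translation comparison: exact set match first,
# then character-level overlap between the remaining translation words.
def compare_translations(ccd_words, cwn_words):
    ccd, cwn = set(ccd_words), set(cwn_words)
    if ccd == cwn:
        return 'completely identical'
    if ccd & cwn:
        return 'partially identical'          # some full word is shared
    if any(a[0] == b[0] for a in ccd for b in cwn):
        return 'same word head'               # e.g. 立即 vs. 立刻
    if any(set(a) & set(b) for a in ccd for b in cwn):
        return 'partial character overlap'    # e.g. 據報 vs. 根據
    return 'completely different'             # e.g. 下崗 vs. 解雇
```

Applied to the rows of Table 1, this sketch reproduces the four labels shown there.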
Adjectives: 24.59% (17,394 synsets); adverbs: 4.57% (3,231); nouns: 66.79% (47,253); verbs: 4.05% (2,866).
From Table 1 through Table 3 we can clearly see that, for every part of speech, some of the words that CCD and CWN translate differently are still semantically close related words; once these related words are excluded, the truly different cross-strait usages stand out clearly. As for the point made above that verbs show the greatest cross-strait difference in usage, the analysis of Table 3 shows that with verbs, near-synonyms of the same class or semantically close related words more often replace the original word, so the 'same word head' and 'partial character overlap' classes account for [text truncated]
Figure 4.
Figure 4. Distribution of words used completely identically on both sides of the strait.

In Figure 4, the two bent ends of the curve mark the pairs with the largest gaps: the bend on the left corresponds to words dominant in Taiwan, and the bend on the right to words dominant in Mainland China. Even among the completely identical words, the data still show some usage differences, which are worth exploring in depth. For example:

Word            CNA (Traditional)   XIN (Simplified)   Note
酒桶 'barrel'    32 (0.157μ)         20 (0.155μ)        usage very close
絲瓜 'loofah'    1380 (6.78μ)        96 (0.748μ)        usage differs
…刀 / 雙刃小刀    2 (0.00982μ)        273 (2.13μ)        usage differs markedly
(the first word of the last row is partly illegible in the source)

(3) 出租車 'taxi'
    all Chinese-language pages: about 141,000,000 results
    Traditional Chinese pages: about 4,330,000 results
    Simplified Chinese pages: about 136,670,000 results

"義訓者，觀念相同，界說相同，特不說兩字之製造及其發音" ("In semantic glossing, the concept and the definition are the same; only the formation and pronunciation of the two characters are left undiscussed") (黃侃述與黃焯, 1983).

[Fragment of the definition-association study:] Synonym data are an important reference resource in natural language processing and information retrieval. Synonym data not only let machine-learning methods better grasp the concepts expressed in a user's text, they also allow inference by analogy from question text. The induction of lexical synonymy proceeds in roughly two directions: traditional semantic exegesis and machine learning. First, …

1. Introduction

Although data from Google web search can display the contrastive cross-strait usage of words in Taiwan and Mainland China, web resources are, after all, more heterogeneous and more complex, and we cannot learn from the page information how many pages each side has in total, so common sense suggests the Mainland has far more pages than Taiwan. In addition, cross-strait exchange is now frequent, with mutual citation that cannot be excluded; already the Mainland's use of 警察 is visibly growing. This shows that Google search data can only be taken as a snapshot of how the general public in Taiwan and the Mainland currently uses certain words, and cannot genuinely support contrastive research on cross-strait vocabulary or global Chinese.

Figure 8. Distribution of the completely different verb words.

Taking WordNet as the basis, the cases where the Traditional Chinese translations in CWN and the Simplified Chinese translations in CCD are completely different are distributed by part of speech as follows:

Table 2. Distribution of differing CCD and CWN translations
                         Adj      Adv      Noun     Verb      Total
Synsets                  17915    3575     66025    12127     99642
Differing translations   521      344      18772    9261      28898
Percentage               2.91%    9.62%    28.43%   76.37%    29.99%
                         (lowest)                   (highest)

It is worth noting that in this distribution, verbs clearly show the greatest cross-strait difference. However, since in actual Mandarin usage a near-synonym or a semantically close related word often replaces the original word, we analyzed the data further, hoping to separate out such cases within each part of speech so as to obtain the genuinely different cross-strait word usage.
We take the three main classes "same word-initial," "partial character overlap," and "truly different" and analyze the distribution of differing CCD and CWN translations across adjectives, adverbs, nouns, and verbs, as shown in the following table:

Table 3. Distribution of differing CCD and CWN translations by part of speech
[table body not recoverable from the source]

… the Gigaword Corpus as the basis, checking the situation presented in actual corpus data; at the same time, Google, whose web-search capability is currently very strong, serves as the object of verification, and the data retrieved through Google are compared to verify the cross-strait vocabulary contrast.

8.1 Gigaword Corpus

First, we take the words used "completely identically" on both sides and measure their frequencies in the Traditional Chinese and Simplified Chinese portions of the Gigaword Corpus. We then compute each word's frequency as a proportion of the Traditional Chinese data and of the Simplified Chinese data, which tells us how each word appears and is used in each variety. Ideally, if a word is used very similarly on both sides, the gap between its two frequency proportions should be very small. We subtracted the two proportions for each word; because the resulting values are very small, we present them magnified 100,000 times. The distribution is shown below:

[Figure: gap between the two sides for the completely identical words; axis values omitted]

… (i.e., up to word 4,144), magnifying the gap further as shown in Figure 5. The gap values in Figure 5 are extremely small. On the one hand, this proves that the two sides use these words in an extremely similar way; on the other hand, once the interval is magnified, the distribution of differences forms a flat S-shape, which matches the expected distribution of natural language data.

Figure 5. The 30% of completely identical words with the smallest gaps.

Table 4 below illustrates that, for identical words, usage in the CNA Traditional Chinese and the XIN Simplified Chinese portions of the Gigaword Corpus is very close.
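The Gigaword procedure just described, per-side relative frequency, subtraction, and magnification, can be sketched as follows. This is a minimal sketch under stated assumptions: raw counts and total token counts per sub-corpus are available, and the variable names (`cna_total`, `xin_total`) are illustrative, not from the paper:

```python
def per_million(count: int, total_tokens: int) -> float:
    """Relative frequency in occurrences per million tokens
    (the 'μ' figures shown in the tables)."""
    return count / total_tokens * 1_000_000

def magnified_gap(cna_count: int, cna_total: int,
                  xin_count: int, xin_total: int,
                  scale: int = 100_000) -> float:
    """Difference between the two sides' frequency proportions.

    The raw difference is tiny, so it is magnified (100,000x in the
    paper) before plotting; positive values mean the word is
    relatively more frequent in CNA (Traditional), negative in XIN
    (Simplified).
    """
    return (cna_count / cna_total - xin_count / xin_total) * scale

# Hypothetical sub-corpus sizes, for illustration only:
cna_total, xin_total = 200_000_000, 130_000_000
print(per_million(32, cna_total))
print(magnified_gap(32, cna_total, 20, xin_total))
```

Sorting all words by `magnified_gap` and plotting produces the S-shaped curves of Figures 4 and 5.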
Table 4. Examples of words used completely identically on both sides (columns: word, frequency, note).

Next, we applied the same experimental method and steps to the words used completely differently on both sides, measuring these data in the Gigaword Corpus …

[Figure: comparison of the completely different nouns; axis values omitted]

Figure 6. Distribution of the completely different noun words.

Among the completely different noun words there are 302 records in total. The bend on the right of the curve corresponds to words dominant in Taiwan, and the bend on the left to words dominant in Mainland China. We again took the 30% with the smallest gaps, 91 records in total. The gap values in Figure 7 prove that each of these differing words has its distinctive usage within its own language system; in other words, a word whose frequency is higher in the Traditional Chinese system has a lower frequency in the Simplified Chinese system, and vice versa, yielding complementary distributions. This is confirmed by the gap values in Figure 7 and the examples in Table 5.

8.2 The Google search engine

For cross-strait lexical contrast, besides comparison based on corpora of an academic nature, we also used the web data that the general public consults every day, trying to understand word usage in daily life. We therefore chose the data found by the Google search engine as the object of the cross-strait comparison. After a search, the results show "all Chinese pages" and "Traditional Chinese pages"; although no "Simplified Chinese pages" figure is displayed directly, "all Chinese pages" covers both "Traditional Chinese pages" and "Simplified Chinese pages." In other words, the count for "Simplified Chinese pages" is the count for "all Chinese pages" minus the count for "Traditional Chinese pages," for example:

(5) 警察 'police'
    all Chinese pages: about 335,000,000 results
    Traditional Chinese pages: about 24,500,000 results
    Simplified Chinese pages: about 310,500,000 results

However, the Google web data reveal that the word 警察 is now also widely used in Mainland China.

[Fragments of the definition-association study:] Taking the 《同義詞詞林(擴展版)》 as an example, it lists, from the angle of definitional meaning, the words suited to glossing a word group … it computes the rate at which shared definition text occurs, to analyze the definitional concepts two words share … machine-learning methods use statistics to identify similar words but lack semantic discrimination … if the classification principles for synonyms are not defined, later users easily confuse synonyms … the former's expert classification is guided by linguistic intuition, while the latter computes synonym similarity by machine learning … from the computational-linguistics viewpoint, synonym association must consult the corpus frequency of word groups, supplemented by …

Table 5 below shows, for the completely different noun words, their distribution in the CNA Traditional Chinese and the XIN Simplified Chinese portions of the Gigaword Corpus.

Table 5. Examples of completely different noun words
Word (CCD / CWN)         CCD in XIN        CCD in CNA      CWN in CNA         CWN in XIN        Note
風帽 / 頭罩               10 (0.0779μ)      2 (0.0098μ)     101 (0.4963μ)      37 (0.2882μ)      clear contrast
雙休日 / 週末             1383 (10.7736μ)   25 (0.1228μ)    17194 (84.4908μ)   6105 (47.558μ)    contrast less clear
屏幕、CRT屏幕 / 映像管     3086 (24.04μ)     118 (0.5798μ)   427 (2.0983μ)      1 (0.0078μ)       contrast less clear

As for the completely different verb words, there are 461 records in total (see Figure 8). The bend on the right corresponds to words dominant in Taiwan, the bend on the left to words dominant in Mainland China. We tested them in the same way; the smallest-gap 30% comprises 140 records. From the gap values in Figures 8 and 9,
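The bookkeeping behind the Google page counts is simple arithmetic: the Simplified-page figure equals the all-Chinese-page figure minus the Traditional-page figure. A minimal check with the counts reported for 警察 in example (5):

```python
# Google hit counts reported for 警察 in example (5)
all_chinese = 335_000_000   # all Chinese-language pages
traditional = 24_500_000    # Traditional Chinese pages

# Simplified pages = all Chinese pages minus Traditional pages
simplified = all_chinese - traditional
print(simplified)  # 310500000, matching the reported Simplified count
```

The same subtraction reproduces the Simplified figure for 出租車 in example (3).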
judging from the gap values, we can indeed confirm that these differently used verb words show contrastive usage between the Traditional Chinese and the Simplified Chinese systems.

[Figure: comparison of the completely different verbs and of their smallest-gap 30%; axis values omitted]

Figure 9. The 30% of completely different verb words with the smallest gaps.

The average gap for words used completely identically on both sides is 0.0143%. In theory, then, the gap for words used differently should exceed this average; where it falls below the average, the two sides may be mixing words that express the same concept. Among the differing noun words, ordered by the proportion of the Mainland-specific word, 174 records fall below this average; ordered by the proportion of the Taiwan-specific word, 168 records fall below it. Among the differing verb words, ordered by the proportion of the Mainland-specific word, 320 records fall below the average; ordered by the proportion of the Taiwan-specific word, 348 records fall below it. These figures confirm an intuitive observation: the mutual influence and penetration of the two vocabularies is increasingly evident. On the current data, Taiwan's usage influences the Mainland rather more strongly than the Mainland's usage influences Taiwan. As contact between the two populations grows ever more frequent, each side uses the other's words ever more often, gradually eroding the words once exclusive to the Taiwan Traditional Chinese system or to the Mainland Simplified Chinese system, for example 警察 versus 公安. According to the findings of 洪嘉馡與黃居仁 (2008), the word 警察 belongs mainly to the Taiwan Traditional Chinese system, where it is more commonly used …

[Fragments of the definition-association study:] … in whose semantics there should be a set of words synonymous with that category. The representative work of this type is the 《同義詞詞林》 (梅家駒、竺一鳴、高蘊琦與殷鴻翔, 1983), which divides Chinese synonymous words into structured categories. … the reason deserves investigation; from the viewpoint of sense, assigning a polysemous word group to a particular synonym set … synonyms are important corpus information for information retrieval and semantic classification, but classifying two words as synonymous …

The query results in (3) show that 出租車 is used more frequently on Simplified Chinese pages than on Traditional Chinese pages: the word is common among the public in Mainland China and comparatively rare in Taiwan.

Some transliterated words also differ across the strait. For US President Obama, for example, Taiwan's transliteration is 歐巴馬 and the Mainland's is 奧巴馬. The Google web data in (4) show that Taiwan uses 歐巴馬 more than the Mainland does, and conversely the Mainland uses 奧巴馬 more than Taiwan does. Transliterations thus have distinct corresponding forms on each side, revealing both the differences and the frequencies of transliterated words across the strait.

(4) Taiwan's 歐巴馬 (4,670,000 / 1,230,000); the Mainland's 奧巴馬 (10,100,000 / 115,900,000)
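The mixing criterion described above, that a differing pair whose usage gap falls below the 0.0143% average measured for identical words is a candidate for cross-strait mixed use, can be sketched as a one-line test. This is a minimal sketch, not the authors' code; only the 0.0143% constant comes from the paper:

```python
# Average distributional difference measured for the completely
# identical words (0.0143%, from the Gigaword experiment)
AVG_GAP = 0.000143

def candidate_mixed_use(gap: float, avg: float = AVG_GAP) -> bool:
    """Flag a *differing* word pair whose usage gap is below the
    average gap of the identical words: the two sides may be mixing
    words that express the same concept."""
    return abs(gap) < avg

print(candidate_mixed_use(0.0001))  # True: below the 0.0143% average
print(candidate_mixed_use(0.01))    # False: clearly contrastive
```

Applying this test per part of speech yields the 174/168 (nouns) and 320/348 (verbs) counts reported above.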
Moreover, life on the two sides has also produced special terms arising from institutions, environment, daily life, and habit, for example Taiwan's 學測 and 免洗筷子 versus the Mainland's 維穩 and 一次性筷子. Google web-search data can show each side's exclusive use of certain words, or cases where the same concept is expressed by different words on each side.

Although Google web-search data can show what Traditional Chinese pages or Simplified Chinese pages prefer …

9. Conclusion

[Fragments of the definition-association study, 基於字典釋義關聯方法的同義詞概念擷取：以《同義詞詞林(擴展版)》為例:] "同義相訓" (glossing a word by its synonym) is the principal task of traditional semantic exegesis. It usually costs a great deal of labor and time before the synonymous content of words can be clearly analyzed. Works of this type are numerous, such as the one compiled by 梅家駒 et al. (1983) …

于江生、劉揚、俞士汶 (2003)。中文概念詞典規格說明。Journal of Chinese Language and Computing, 13(2), 177-194。
The sameness, difference, slight divergence, and even mixed use of cross-strait vocabulary have become increasingly evident under frequent exchange. How to distinguish and clarify the individual semantic structure of each side's vocabulary, and within that structure deepen our understanding of the systematic semantic evolution of Chinese vocabulary, is an issue that we language researchers cannot ignore. This paper uses CWN to present the contrastive usage of cross-strait vocabulary qualitatively, and the Gigaword Corpus to verify and present it quantitatively with actual corpus data. We further found a "same yet different" phenomenon among the words the two sides share, and mutual penetration and influence among the contrasting words; both deserve deeper study. We also applied the Google web data, rich in both Traditional and Simplified Chinese, to the contrast and difference analysis of cross-strait word usage, and found that a corpus of an academic nature, such as the Gigaword Corpus used here, yields results of far higher scholarly value for cross-strait or global Chinese contrastive research than the data Google provides.

[Fragment of the definition-association study:] The Chinese 《同義詞詞林》 (hereafter the Cilin) collects morphemes, word groups, idioms, dialect words, and archaisms, more than 53,000 entries in all, and divides them systematically into categories by synonym meaning. Later …

王鐵昆、李行健 (1996)。兩岸詞彙比較研究管見。華文世界，第81期，台北。
中華民國交通部觀光局 (2011)。兩岸地區常用詞彙對照表，http://taiwan.net.tw/m1.aspx?sNo=0016891。
姚榮松 (1997)。論兩岸詞彙差異中的反向拉力。第五屆世界華語文教學研討會，世界華語文教育協進會主辦，1997年12月27-30日，台北劍潭。
南京語言文字網 (2004)。兩岸普通話大同中有小異，http://njyw.njenet.net.cn/news/shownews.asp?newsid=367。
洪嘉馡、黃居仁 (2008)。語料庫為本的兩岸對應詞彙發掘 (A Corpus-Based Approach to the Discovery of Cross-Strait Lexical Contrasts)。Language and Linguistics, 9(2), 221-238。
Taipei, Nankang: Institute of Linguistics, Academia Sinica.
許斐絢 (1999)。台灣當代國語新詞探微。國立台灣師範大學華語文教學研究所碩士論文，台北。
華夏經緯網 (2004)。趣談海峽兩岸詞彙差異，http:/…
劉揚、俞士汶、于江生 (2003)。CCD語義知識庫的構造研究。中國計算機大會 (CNCC'2003)。
戴凱峰 (1996)。從語言學的觀點探討台灣與北京國語間之差異 (A Linguistic Study of Taiwan and Beijing Mandarin)。政治作戰學校外國語文學系碩士論文，台北。

[Fragments of the definition-association study:] … the Information Retrieval Lab of Harbin Institute of Technology (HIT IR Lab) removed obsolete and rarely used words and added common new words on the basis of news corpora, raising the extended edition's vocabulary to more than seventy thousand entries. In the Cilin, the classification principle is "relative and comparative" (梅家駒等, 1983); the category and position assigned to each word embody the compilers' tacit judgment. Second, machine-learning approaches to synonym discrimination have kept developing in recent years, but they focus on using synonymous words for word-sense disambiguation in context. The disambiguation process can use either supervised or unsupervised learning to judge whether words can be grouped as synonyms. Both kinds of machine-learning method must first compute word-class frequencies from a corpus before the similarity between words can be obtained, and the common problems in doing so are the lack of large corpora and the general sparseness of synonym data (劉挺與車萬翔, web data retrieved in 2012).

This study works with dictionary definition content: it first discusses principles for computing the concepts words share, and then performs definition-association computation over the categories of the 《同義詞詞林(擴展版)》 (hereafter the Extended Edition), attempting to extract, from the commonly used defining words, the shared concept word groups that express each category. Besides computing the synonym categories of the Extended Edition … meanings, it compares the words within a synonym category by definition-coverage counts and the maximum average definition-association value, marking those better suited …
… part-of-speech and frequency information.

Figure 7. The 30% of completely different noun words with the smallest gaps.
", "num": null, "html": null, "type_str": "table", "text": "\u5f88\u5927\u7684\u56e0\u7d20\uff0c\u5728 9261 \u500b\u8a5e\u5f59\uf9e8\uff0c\u5c31\u6709 6586 \u500b\u8a5e\u5f59\uff0c\u5927\u7d04\u662f 71.10%\uff0c\u5176\u771f\u6b63\uf978\u5cb8\u5c0d\u65bc\u52d5 \u8a5e\u7684\uf967\u540c\u4f7f\u7528\uff0c\u5247\u6709 2676 \u500b\u8a5e\u5f59\uff0c\u5927\u7d04\u662f 28.90%\u3002 \u6211\u5011\u5c07\u4ee5\u5716 3 \u548c\u8868 3 \u4e2d\uff0c\u56db\u7a2e\u8a5e\uf9d0\uf9e8\uff0c\u4f7f\u7528\u5b8c\u5168\u76f8\u540c\u7684\u8a5e\u5f59\u8207\u771f\u6b63\uf967\u540c\u7684\u8a5e\u5f59\uff0c\u85c9 \u7531 Gigaword Corpus \uf92d\u5206\u6790\uf978\u5cb8\u4eba\u6c11\u5c0d\u65bc\u8a5e\u5f59\u4f7f\u7528\u7684\u5be6\u969b\uf9fa\u6cc1\u3002 8. \u5be6\u9a57\u8a2d\u8a08\u8207\u8a5e\u5f59\u5dee\uf962\u5206\u6790 \u4ee5 WordNet \u70ba\u4e2d\u5fc3\u6240\u5c0d\u8b6f\u51fa CCD \u7684\u7c21\u9ad4\u4e2d\u6587\u548c CWN \u7684\u7e41\u9ad4\u4e2d\u6587\uff0c\u6bd4\u8f03\uf978\u8005\u7684\u5c0d\u8b6f\uff0c \u6709\u5b8c\u5168\u76f8\u540c\u3001\u5b8c\u5168\uf967\u540c\u8207\u90e8\u4efd\u76f8\u540c\u7b49\u4e09\u5927\uf9d0\uff0c\u5728\u6b64\uff0c\u672c\u7814\u7a76\u50c5\u5c31\u524d\uf978\uf9d0\u7684\u8cc7\uf9be\uff0c\u518d\u4ee5 CCD \u7684\u7c21\u9ad4\u4e2d\u6587\u548c CWN \u7684\u7e41\u9ad4\u4e2d\u6587\uf9e8\uff0c\u6709 6637 \u7b46\uf978\u5cb8\u4f7f\u7528\u5b8c\u5168\u76f8\u540c \u8a5e\u5f59\uff0c\u6211\u5011\u4ee5 Gigaword Corpus \u7684\u8a9e\uf9be\u9032\ufa08\u6aa2\u6e2c\uff0c\u767c\u73fe\u4e2d\u592e\u793e/\u65b0\u83ef\u793e\u8a9e\uf9be\u6240\u4f7f\u7528\u5206\u4f48\u5dee \uf962\u7684\u5e73\u5747\u503c\u70ba 0.0143%\u3002Gigaword Corpus \u5c0d\u65bc\uf978\u5cb8\u8a5e\u5f59\u4f7f\u7528\u5dee\uf962\u5206\u4f48\u5728\u9019\u500b\u5e73\u5747\u503c\u5167\u7684 \u8a5e\u5f59\uff0c\u5171\u8a08\u6709 5880 \u7b46\u3002\u63db\uf906\u8a71\uf96f\uff0c\uf978\u5cb8\u4f7f\u7528\u5b8c\u5168\u76f8\u540c\u7684\u8a5e\u5f59\uf9e8\uff0c\u5728 Gigaword Corpus \u4f7f \u7528\uf9fa\u6cc1\u8f03\u70ba\u76f8\u8fd1\u7684\u6709 5880 
\u7b46\uff0c\u4f7f\u7528\uf9fa\u6cc1\u8f03\u70ba\uf967\u76f8\u540c\u7684\u4ecd\u6709 757 \u7b46\u3002\u5728\u4e2d\u592e\u793e\u8207\u65b0\u83ef\u793e\u8a9e \uf9be\u4e2d\u5206\u5225\u6709 354 \u7b46\u548c 403 \u7b46\u3002\u9019 757 \u7b46\u8cc7\uf9be\u662f\u300c\u540c\u4e2d\u6709\uf962\u300d\u7684\u8a5e\u8a9e\uff0c\u503c\u5f97\u6211\u5011\u5c07\uf92d\u9032\u4e00 \u6b65\u5206\u6790\u3002 \u5728 6637 \u500b\uf978\u5cb8\u4f7f\u7528\u5b8c\u5168\u76f8\u540c\u8a5e\u5f59\u4e2d\uff0c\u5716 4 \u96d6\u7136\u986f\u793a\u5176\u983b\uf961\u5dee\u8ddd\u5e7e\u4e4e\u662f\uf9b2\u3002\u4f46\u662f\uff0c \u5982\u679c\u6211\u5011\u7531\u5dee\u8ddd\u6700\u5c0f\u7684\u7b2c 3076 \u500b\u8a5e\uff0c\u4f9d\u524d\u5f8c\u5404\u53d6 30%\u7684 (\u5c31\u662f\u7b2c 2153 \u500b\u8a5e\u5f59\u53d6\u5230\u7b2c 4144 Gigaword Corpus \u4e2d\u5448\u73fe\u7684\u5206\u4f48\uff0c\u5728\u6b64\uff0c\u672c\u6587\u50c5\u53d6\uf969\uf97e\u8f03\u5927\u7684\u540d\u8a5e\u548c\u52d5\u8a5e\uf92d\u505a\u6bd4\u5c0d\uff0c \u4e26\u4e14\u64f7\u53d6\u8a9e\uf9be\u7684\u539f\u5247\u662f\u51fa\u73fe\u5728 CCD \u6240\u4f7f\u7528\u7684\u8a5e\u5f59\uff0c\u662f XIN \u7684\u8a5e\u983b\u5927\u65bc CNA \u7684\u8a5e\u983b\uff1b\u51fa \u73fe\u5728 CWN \u6240\u4f7f\u7528\u7684\u8a5e\u5f59\uff0c\u662f CNA \u7684\u8a5e\u983b\u5927\u65bc XIN \u7684\u8a5e\u983b\uff0c\u5176\u5206\u4f48\u60c5\u5f62\uff0c\u5982\u4e0b\u5716\u6240\u793a\uff1a WordNet \u6240\u767c\u5c55\u51fa\u7684\u7e41\u9ad4\u4e2d\u6587\u7cfb\u7d71 CWN \u8207\u7c21\u9ad4\u4e2d\u6587\u7cfb\u7d71 CCD\uff0c\u9032\ufa08\uf978\u5cb8\u8a5e\u5f59\u7684 \u6bd4\u5c0d\uff0c\u518d\u5c07\u6bd4\u5c0d\u904e\u5f8c\u7684\u8a5e\u5f59\uff0c\u4ee5\u6536\u96c6\u5be6\u969b\u5927\uf97e\u8a9e\uf9be\u7684 Gigaword Corpus \u70ba\u57fa\u790e\uff0c\u6aa2\u6e2c\uf978 \u5cb8\u5728\u8a5e\u5f59\u4e0a\u4f7f\u7528\u7684\u73fe\u8c61\u8207\u5206\u4f48\uf9fa\u6cc1\uff1b\u4ea6\u53ef\u7531 Gigaword Corpus \u6240\u5448\u73fe\u7684\uf9fa\u6cc1\uff0c\u8b49\u660e\u7e41\u9ad4 \u4e2d\u6587\u7cfb\u7d71 CWN 
\u8207\u7c21\u9ad4\u4e2d\u6587\u7cfb\u7d71 CCD \u5728\u6bd4\u5c0d\u4e0a\u7684\u6b63\u78ba\ufa01\u8207\u53ef\u9760\u6027\uff1b\u4e5f\u8b49\u5be6\uf9ba CCD \u548c /www.huaxia.com/wh/zsc/00162895.html. \u5ec8\u9580\u65e5\u5831(2004)\u3002\u8da3\u8ac7\uf978\u5cb8\u8a5e\u5f59\u5dee\uf962\uff0chttp://www.csnn.com.cn/csnn0401/ca213433.htm. The Association for Computational Linguistics and Chinese Language Processing" }, "TABREF16": { "content": "
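The distribution check described above (flagging words whose CNA/XIN usage difference exceeds the 0.0143% mean) amounts to comparing each word's relative frequency in the two sub-corpora. A minimal sketch, with invented counts standing in for the actual Gigaword data:

```python
def rel_freq_diff(freq_a, freq_b, total_a, total_b):
    """Absolute difference of a word's relative frequency in two
    sub-corpora, expressed in percent (cf. the 0.0143% mean above)."""
    return abs(freq_a / total_a - freq_b / total_b) * 100

# Invented counts for three shared words; not the paper's actual data.
cna = {"電腦": 1200, "朋友": 900, "學校": 450}   # CNA sub-corpus
xin = {"電腦": 300, "朋友": 880, "學校": 470}    # XIN sub-corpus
cna_total, xin_total = sum(cna.values()), sum(xin.values())

diffs = {w: rel_freq_diff(cna[w], xin[w], cna_total, xin_total) for w in cna}
mean_diff = sum(diffs.values()) / len(diffs)

# Words at or below the mean difference count as "similar usage";
# the rest are the "different within sameness" cases worth further study.
similar = [w for w, d in diffs.items() if d <= mean_diff]
divergent = [w for w, d in diffs.items() if d > mean_diff]
```

With these toy counts, 電腦 lands in the divergent list while 朋友 and 學校 land in the similar one, mirroring how the 757 "different within sameness" words are separated from the 5,880 similar-usage words.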
Table. Definition relatedness between the self-built synonym group 訓詁 (xungu) and each member word

         訓故    故訓    古訓    解故    解詁    Average
訓詁     N/A     N/A     0.814   0.575   0.569   0.652
Synonym Concept Extraction Based on a Dictionary Definition-Relatedness Method: The Case of the Tongyici Cilin (Extended) — Chao Feng-Yi & Chung Siaw-Fong

6. Conclusions and Discussion

In this paper we have discussed the application of multi-layer definition relatedness to computing synonymous words, and compared it with the computation principle of the Thesaurus in the existing Sketch Engine. To keep the definition words from diverging (failing to converge) after many layers of interpretation, this study uses a revised multi-layer definition-relatedness computation: the weights of definition words at shallower layers are increased, reducing the influence of overly general concept words in the deeper definitions and highlighting the characteristic definition words that two words share, upon which the association is built. The relatedness value is computed as the proportion of definition words the two words hold in common, representing the intersection of their shared definitional concepts; the multi-layer principle of repeatedly re-defining reduces the problem of definition words that are synonymous but of different forms, and the per-layer proportion of shared definition words serves as the weight of each definition word in the computation. The shared-definition-word weight in this multi-layer statistic can be viewed as the definitional content that the two words hold in common, and is used to describe the association between them.

After defining the multi-layer definition-relatedness principle, we handle polysemy by using the maximum multi-layer relatedness value, together with the coverage of shared definition words, to decide which set of definition content best represents a polysemous word within a synonym group. We ran this procedure on the "equal, synonymous" word classes of the Extended Edition. Although the results are limited by the fact that the Guoyu Cidian cannot define every word in the Extended Edition, we marked the available data, distinguished the Extended Edition entries that the Guoyu Cidian covers, and compiled, for each synonym group, the Top 20 records with the highest shared-definition-word weights that best interpret the group, merging 8,311 records into the Extended Edition and placing them online for researchers' reference.

Finally, to better understand what difference multi-layer definitions make in computing synonym groups, we also compared the synonyms obtained by computation over a huge corpus (e.g., Sketch Engine) with the synonym associations of our method. We take 招牌 'shop sign' as an example, obtaining its synonyms from the zhTenTen corpus and comparing the degree of association expressed by our definition-relatedness principle with the similar words found by Sketch Engine's grammatical patterns and co-occurrence behavior. The zhTenTen corpus was built by automatically crawling simplified-Chinese web text and processing it with the Stanford Chinese Word Segmenter and the Stanford Log-linear Part-of-Speech Tagger under the Chinese Penn Treebank standard; it currently contains about 2 billion characters, 1.7 billion of them non-repeated. In Sketch Engine, we queried Find Similar Words under the Thesaurus function for 招牌, which occurs 10,185 times in zhTenTen and returns 60 similar words. We then converted the retrieved simplified-Chinese words into traditional Chinese using the same Wikipedia traditional-simplified divergence word list, looked up their definitions in the Guoyu Cidian, and computed the fourth-layer definition relatedness of each against 招牌, to distinguish the similar words obtained by Sketch Engine from the definitionally related words meant in this study (results in Table 8).

Table 8. Comparison of Sketch Engine and definition-relatedness results (partially deleted; the SRD shared-definition-word column is omitted except where quoted in the text)

Sketch Lemma   Sketch Score   Sketch Freq.   SRD Score
牌匾            0.21           5421           0.79
廣告牌          0.15           4839           0
標語            0.15           19510          0.43
橫幅            0.12           15051          0.35
喜歡            0.12           247564         0.3
牌子            0.12           18442          0.79
可口            0.12           6800           0.57
條幅            0.11           5848           0.66
喜愛            0.11           47276          0.08
名片            0.10           18497          0.3
標牌            0.10           5443           0
海報            0.10           15672          0.62
廣告            0.10           212258         0.67
門面            0.10           5216           0.65
餐館            0.09           13639          0.73
精緻            0.07           23820          0.28
對聯            0.07           6769           0.62
獨特            0.07           135938         0.07
促銷            0.07           39704          0.67
重點            0.07           64220          0
題目            0.07           274352         0.72
做              0.07           1650329        0.68
特色            0.07           566599         0.66
熟悉            0.07           141431         0.22
出名            0.07           10320          0.38
字號            0.07           13197          0.93

In Table 8, the first three columns are, respectively, the lemma similar to 招牌 found by Sketch Engine's grammatical patterns and co-occurrence behavior (Sketch Lemma), its score (Sketch Score), and its frequency (Sketch Freq.); the last two columns are the result of the (fourth-layer) multi-layer definition-relatedness computation (SRD Score) and the ratio of shared dictionary-definition words (SRD). Table 8 is sorted by Sketch Score in descending order, so among the words with relatively high Sketch Scores one can see words whose behavior resembles 招牌 but whose meaning is unrelated to it, such as 喜歡 'to like' and 可口 'tasty'. Words with higher SRD Scores (boxed in bold in the original table), by contrast, show higher synonymy with 招牌 in meaning, and their shared dictionary-definition words are listed alongside. For example, 招牌 and 字號 form a definitional association whose Top 10 most heavily weighted words are 人, 牌子, 招牌, 商店, 名稱, 標誌, 字號, 獻, 有, and 號碼, making it even clearer that the two words are linked through definition words such as 牌子 'sign board' and 招牌 'shop sign'.

Although both methods yield reference indicators of the similarity between two words, the approach of exploring synonym association through dictionary definition words differs from the Sketch Engine approach in two respects. (1) The principles on which the similarity relation is built differ. In Sketch Engine, synonym associations are determined by a word's occurrence frequencies, grammatical patterns, and sentence-construction behavior across many corpora, whereas our approach starts from a dictionary-based view of shared definition words, and synonymy is determined by the proportion of identical definition words used. In comparison, word relatedness computed under the multi-layer definition-relatedness principle lets people grasp the intended meaning more intuitively than the Sketch Engine results: because the definitions are repeatedly expanded during the computation and the shared definition words are computed, the words found are also easier to understand than those from Sketch Engine. (2) The way similar-word lists are produced differs. Sketch Engine produces similar-word lists from grammatical patterns and statistics, so it can search for words matching a pattern and output a list. The multi-layer relatedness principle, by contrast, depends on the layer parameter that determines how deeply the concepts of the two words are computed, and results are obtained only after pairwise comparison between words, i.e., all entries of the dictionary must be compared against one another. For the Guoyu Cidian used in the present experiments, that means pairwise comparison over 150,000 entries and 270,000 distinct definitions, and under the layered computation principle the computing load grows exponentially, so this is indeed a large undertaking. In our experiment, computing Dd15A09 took roughly under 1 second for each of layers one through four, about 1.24 minutes for the fifth layer, and about 5 minutes for the sixth. Although the definition-relatedness method is thus unsuitable for generating synonym word lists, for computing the association between known synonyms or between two unfamiliar words it is more appropriate than the Sketch Engine approach, because it provides a definitional association between the two words that people can understand.

Multi-layer definition relatedness is computed over dictionary definitions, so the wording chosen by the dictionary's compilers affects the results; but because Chinese definitions supply the conceptual content contained in a word's meaning, definition relatedness obtains better synonymy results than grammar, word frequency, or co-occurrence counts when exploring synonymous concepts between two known words. Although concepts are expressed by combinations of characters and words, and the characters used in a definition need not coincide with those composing the concept, multi-layer comparison of definitions still allows polysemous words to be grouped by sense. Compiling a dictionary or the Cilin is an arduous work over literary language data; building on the manual cross-glossing work of the past, we hope computer-assisted corpus tools can help compilers organize and proofread definition content. We also hope the manually written definitions can be cross-compared with knowledge-concept resources such as HowNet or Chinese WordNet, so that definition-relatedness computation can compare definitional concepts directly, making the synonymy relations between words even clearer.
. The Sketch Engine, Information Technology Research Institute Technical Report, ITRI-04-08.
Loper, E. & Bird, S. (2002). NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, 1, 63-70.
Thesaurus Entry, https://trac.sketchengine.co.uk/wiki/SkE/Help/PageSpecificHelp/Thesaurus, last visited 2012/6/30.
Turney, P. (2001). Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learning, 491-502.
Wang, Jian-Li (2012). On the nature of the Erya as a synonym dictionary. Cishu Yanjiu (Lexicographical Studies), (02), 60-65.
Academia Sinica CKIP Chinese Word Segmentation System, http://ckipsvr.iis.sinica.edu.tw/, last visited 2012/6/27.
Tong, Wen-Yi & Guo, Sheng-Lin (2012). A brief discussion of "dishi" and the competition and selection within its synonym group. Qianyan (Forward Position), 02, 153-154.
Tongyici Cilin (Extended Edition), http://ir.hit.edu.cn/phpwebsite/index.php?module=pagemaster&PAGE_user_op=view_page&PAGE_id=162, last visited 2012/6/17.
Chou, Ya-Min & Huang, Chu-Ren (2005). Building the knowledge structure of Chinese character semantic components. In Proceedings of the Sixth Chinese Lexical Semantics Workshop.
Fan, Hong-Li (2011). A study of the synonym group meaning "to kneel and bow" in the Zuozhuan. Journal of Southwest University of Science and Technology (Philosophy and Social Sciences Edition), 28(5), 93-97.
Lin, Sung-Chien (2004). Topics based on term extraction and term clustering techniques. Computational Linguistics and Chinese Language Processing, 9(1), 97-112.
Revised Mandarin Chinese Dictionary (Chongbian Guoyu Cidian Xiudingben), http://dict.revised.moe.edu.tw/, last visited 2012/6/17.
Chen, Kuang-Hua & Chuang, Ya-Chen (2001). Construction of Chinese synonyms applied to information retrieval. Bulletin of the Library Association of China, 67, 93-108.
Mei, Jia-Ju, Zhu, Yi-Ming, Gao, Yun-Qi, & Yin, Hong-Xiang (1983). An attempt at compiling a Chinese thesaurus: A brief introduction to the Tongyici Cilin. Cishu Yanjiu (Lexicographical Studies), 1983(01), 133-138.
Huang, Kan (narrated) & Huang, Zhuo (Ed.) (1983). Notes on Philology, Phonology, and Exegesis (Wenzi Shengyun Xungu Biji). Shanghai: Shanghai Guji Chubanshe, April 1983, 190.
Tseng, Hui-Hsin, Liu, Chao-Lin, Kao, Chao-Ming, & Chen, Keh-Jiann (2002). Automatic classification of Chinese verbs based on word formation and similarity. International Journal of Computational Linguistics and Chinese Language Processing, 7(1), 1-28.
Chao, Feng-Yi & Chung, Siaw-Fong (2011). Measuring multi-layer semantic relatedness based on dictionary definitions: The case of the 目 (mu) radical. International Journal of Computational Linguistics and Chinese Language Processing, 16(3-4), 21-40.
Wikipedia, Traditional-Simplified divergence word list, http://zh.wikipedia.org/zh-hant/Wikipedia:繁简分歧词表, last visited 2012/6/27.
Liu, Ting & Che, Wan-Xiang. Chinese semantic processing, http://ir.hit.edu.cn/, last visited 2012/12/06.
Bao, Ke-Yi (1983). Exploring a Chinese thesaurus: Notes after compiling the Tongyici Cilin. Cishu Yanjiu (Lexicographical Studies), (02), 64-70.
Word                    Freq.   Translation
對象 duixiang           5322    object; target
事物 shiwu              797     event; object
範圍 fanwei             525     range
程度 chengdu            481     extent, degree
其他 qita               454     other
行為 xingwei            393     behavior
者 zhe                  334     someone; something
事 shi; 事情 shiqing    330     thing; job; business
聲音 shengyin           329     sound; voice
工具 gongju             280     tool
條件 tiaojian           252     condition
不同 butong             233     difference
標準 biaozhun           228     standard
文化 wenhua             209     culture
功能 gongneng           184     function
目標 mubiao             177     goal
古代 gudai              177     ancient times
系統 xitong             170     system
參考點 cankaodian       169     reference point
目的 mudi               163     purpose
領域 lingyu             161     field, domain
西元 xiyuan             154     A.D.
情緒 qingxu             152     emotion
生物 shengwu            149     creature
心理 xinli              145     mentality
地位 diwei              143     status
溫度 wendu              140     temperature
過程 guocheng           138     process
\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703
\u76f8\u95dc\u51fa\u7248\u54c1\u50f9\u683c\u8868\u53ca\u8a02\u8cfc\u55ae
\u7de8\u865f\u66f8\u76ee\u6703 \u54e1\u975e\u6703\u54e1\u518a\u6578\u91d1\u984d
no.92-01, no. 92-04 (\u5408\u8a02\u672c) ICG \u4e2d\u7684\u8ad6\u65e8\u89d2\u8272 \u8207
1.A conceptual Structure for Parsing Mandarin--its
Frame and General Applications--NT$ 80NT$__________
1. 2.2. no.92-02, no. 92-03 (\u5408\u8a02\u672c) no.92-01, no. 92-04(\u5408\u8a02\u672c) ICG \u4e2d\u7684\u8ad6\u65e8\u89d2\u8272\u8207 A Conceptual V-N \u8907\u5408\u540d\u8a5e\u8a0e\u8ad6\u7bc7 \u8207V-R \u8907\u5408\u52d5\u8a5e\u8a0e\u8ad6\u7bc7 Structure for Parsing Mandarin --Its Frame and General Applications--3. no.93-01 \u65b0\u805e\u8a9e\u6599\u5eab\u5b57\u983b\u7d71\u8a08\u8868 no.92-02 V-N \u8907\u5408\u540d\u8a5e\u8a0e\u8ad6\u7bc7 & 92-03 V-R \u8907\u5408\u52d5\u8a5e\u8a0e\u8ad6\u7bc7 4. no.93-02 \u65b0\u805e\u8a9e\u6599\u5eab\u8a5e\u983b\u7d71\u8a08\u8868US$ 9 12120 120 360US$ 19 21_____ US$15 _____ 17 __________ _____ _____ _____ __________ _____
3. no.93-01 \u65b0\u805e\u8a9e\u6599\u5eab\u5b57\u983b\u7d71\u8a08\u8868 5. no.93-03 \u65b0\u805e\u5e38\u7528\u52d5\u8a5e\u8a5e\u983b\u8207\u5206\u985e 4. no.93-02 \u65b0\u805e\u8a9e\u6599\u5eab\u8a5e\u983b\u7d71\u8a08\u8868 6. no.93-05 \u4e2d\u6587\u8a5e\u985e\u5206\u67908 18180 18513 3011 _____ 24 __________ _____ _____ __________ _____
5. no.93-03 \u65b0\u805e\u5e38\u7528\u52d5\u8a5e\u8a5e\u983b\u8207\u5206\u985e 7. no.93-06 \u73fe\u4ee3\u6f22\u8a9e\u4e2d\u7684\u6cd5\u76f8\u8a5e10401513 __________ __________
6. no.93-05 \u4e2d\u6587\u8a5e\u985e\u5206\u6790 8. no.94-01 \u4e2d\u6587\u66f8\u9762\u8a9e\u983b\u7387\u8a5e\u5178(\u65b0\u805e\u8a9e\u6599\u8a5e\u983b\u7d71\u8a08)103801513 __________ __________
7. no.93-06 Modal Words in Modern Chinese (現代漢語中的法相詞)
8. no.94-01 A Frequency Dictionary of Written Chinese, based on word-frequency statistics from news corpora (中文書面語頻率詞典（新聞語料詞頻統計）)
9. no.94-02 Character Frequency Table of Classical Chinese (古漢語字頻表)
10. no.95-01 Character Frequency Table of Modern Chinese, Indexed by Zhuyin (注音檢索現代漢語字頻表)
11. no.95-02/98-04 Contents and Description of the Academia Sinica Balanced Corpus (中央研究院平衡語料庫的內容與說明)
12. no.95-03 Information-based Case Grammar and Its Parsing Method (訊息為本的格位語法與其剖析方法)
13. no.96-01 A Study of Chinese Word Boundaries and Segmentation Standards for Information Processing (「搜」文解字－中文詞界研究與資訊用分詞標準)
14. no.97-01 Word Frequency Table of Classical Chinese, Vol. 1 (古漢語詞頻表（甲）)
15. no.97-02 Word Frequency Table of the Analects (論語詞頻表)
16. no.98-01 Word Frequency Dictionary (詞頻詞典)
17. no.98-02 Accumulated Word Frequency in CKIP Corpus
18. no.98-03 A Chinese-English Glossary of Terms in Natural Language Processing and Computational Linguistics (自然語言處理及計算語言學相關術語中英對譯表)
19. no.02-01 Annotation Guidelines for the Modern Chinese Spoken Dialogue Corpus (現代漢語口語對話語料庫標註系統說明)
20. Proceedings of COLING 2002 (paper edition) (論文集 COLING 2002 紙本)
21. Proceedings of COLING 2002 (CD-ROM) (論文集 COLING 2002 光碟片)
22. Proceedings of the COLING 2002 Workshops (CD-ROM) (論文集 COLING 2002 Workshop 光碟片): NT$300
23. Proceedings of ISCSLP 2002 (CD-ROM) (論文集 ISCSLP 2002 光碟片): NT$300
24. Lecture Notes of the Workshop on Dialogue Systems and Context Analysis (交談系統暨語境分析研討會講義; ACLCLP academic activity, fourth quarter of 1997): NT$130
25. International Journal of Computational Linguistics & Chinese Language Processing (one year, four issues), year: ______ (back issues: NT$500 per copy)
26. Readings in Chinese Language Processing
27. Parsing Strategies and Machine Translation (剖析策略與機器翻譯), 1990

10% member discount: __________   Total Due: __________
※ This price list is for domestic (Taiwan) use only.
Postal giro account name: 中華民國計算語言學學會 (The Association for Computational Linguistics and Chinese Language Processing); postal giro account number: 19166251.

‧ OVERSEAS USE ONLY
  Readings in Chinese Language Processing; Computational Linguistics & Chinese Language Processing (one year); back issues of IJCLCLP: US$20 per copy.
‧ PAYMENT: □ Credit Card (preferred)
  Name (please print): __________   Signature: __________   Total: __________
Tel: (02) 2788-3799 ext. 1502   Fax: __________
Contact persons: Ms. 黃琪, Ms. 何婉如   E-mail: aclclp@hp.iis.sinica.edu.tw
Orderer: __________   Address: __________   Receipt title: __________
Tel: __________   E-mail: __________
", "num": null, "html": null, "type_str": "table", "text": "Money Order or Check payable to \"The Association for Computation Linguistics and Chinese Language Processing \" or \"\u4e2d\u83ef\u6c11\u570b\u8a08\u7b97\u8a9e\u8a00\u5b78\u5b78\u6703\" \u2027 E-mail\uff1aaclclp@hp.iis.sinica.edu.tw" } } } }