{ "paper_id": "O08-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:18.467103Z" }, "title": "Propositional Term Extraction over Short Text using Word Cohesiveness and Conditional Random Fields with Multi-Level Features", "authors": [ { "first": "Ru-Yng", "middle": [], "last": "\u5f35\u5982\uf9ae", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cheng Kung University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cheng Kung University", "location": {} }, "email": "" }, { "first": "Chung-Hsien", "middle": [], "last": "\u5433\u5b97\u61b2", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cheng Kung University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cheng Kung University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Propositional terms in a research abstract (RA) generally convey the most important information for readers to quickly glean the contribution of a research article. This paper considers propositional term extraction from RAs as a sequence labeling task using the IOB (Inside, Outside, Beginning) encoding scheme. In this study, conditional random fields (CRFs) are used to initially detect the propositional terms, and the combined association measure (CAM) is applied to further adjust the term boundaries. This method can extract beyond simply NP-based propositional terms by combining multi-level features and inner lexical cohesion. Experimental results show that CRFs can significantly increase the recall rate of imperfect boundary term extraction and the CAM can further effectively improve the term boundaries. 
Abstract: Propositional terms express the important concepts in an article and guide the reader through the development of its discourse. This paper performs propositional term extraction on academic paper abstracts, integrating two methods, conditional random fields (CRFs) and the combined association measure (CAM), and drawing on two major classes of information: the internal cohesiveness of words and the surrounding context. The extracted propositional terms are no longer restricted to noun phrase forms and may be composed of single or multiple words. Propositional term extraction is treated as a sequence labeling task, and the IOB encoding scheme is used to identify term boundaries; the CRF model considers the multi-level features that constitute propositional terms and performs the initial term detection, after which CAM computes lexical cohesiveness to confirm the term boundaries. Experimental results show that the proposed method clearly outperforms previous term detection methods; in particular, CRF markedly improves the recall rate of imperfect term boundary recognition (imperfect hits), while CAM effectively corrects the term boundaries.", "pdf_parse": { "paper_id": "O08-1011", "_pdf_hash": "", "abstract": [ { "text": "Propositional terms in a research abstract (RA) generally convey the most important information for readers to quickly glean the contribution of a research article. This paper considers propositional term extraction from RAs as a sequence labeling task using the IOB (Inside, Outside, Beginning) encoding scheme. In this study, conditional random fields (CRFs) are used to initially detect the propositional terms, and the combined association measure (CAM) is applied to further adjust the term boundaries. This method can extract beyond simply NP-based propositional terms by combining multi-level features and inner lexical cohesion. Experimental results show that CRFs can significantly increase the recall rate of imperfect boundary term extraction and the CAM can further effectively improve the term boundaries. 
Abstract: Propositional terms express the important concepts in an article and guide the reader through the development of its discourse. This paper performs propositional term extraction on academic paper abstracts, integrating two methods, conditional random fields (CRFs) and the combined association measure (CAM), and drawing on two major classes of information: the internal cohesiveness of words and the surrounding context. The extracted propositional terms are no longer restricted to noun phrase forms and may be composed of single or multiple words. Propositional term extraction is treated as a sequence labeling task, and the IOB encoding scheme is used to identify term boundaries; the CRF model considers the multi-level features that constitute propositional terms and performs the initial term detection, after which CAM computes lexical cohesiveness to confirm the term boundaries. Experimental results show that the proposed method clearly outperforms previous term detection methods; in particular, CRF markedly improves the recall rate of imperfect term boundary recognition (imperfect hits), while CAM effectively corrects the term boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Researchers generally review Research Abstracts (RAs) to quickly track recent research trends. However, many non-native speakers experience difficulties in writing and reading RAs [1] . The author-defined keywords and categories of research articles, currently used to provide researchers with content-guiding information, are cursory and general. Therefore, developing a propositional term extraction system is an attempt to exploit the linguistic evidence and other characteristics of RAs to achieve efficient paper comprehension. Other applications of the proposed method include sentence extension, text generation, and content summarization.", "cite_spans": [ { "start": 180, "end": 183, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A term is a linguistic representation of a concept with a specific meaning in a particular field. It may be composed of a single word (called a simple term) or several words (a multiword term) [2] . A propositional term is a term that refers to the basic meaning of a sentence (the proposition) and helps to extend or control the development of ideas in a text. The main difference between a term and a propositional term is that a propositional term, which can guide the reader through the flow of the content, is determined not only by syntax or morphology but also by semantic information. Consider RAs as an illustration of the difference between a term and a propositional term. 
Cheng [3] indicated that a science RA is composed of background, manner, attribute, comparison and evaluation concepts. In Figure 1 , the underlined terms are the propositional terms which convey the important information of the RA. In the clause \"we present one of the first robust LVCSR systems that use a syllable-level acoustic unit for LVCSR,\" the terms \"LVCSR systems\", \"syllable-level acoustic unit\" and \"LVCSR\" respectively represent the background, manner and background concepts of the research topic, and can thus be regarded as propositional terms in this RA. The background concepts can be identified by clues from the linguistic context, such as the phrases \"most\u2026LVCSR systems\" and \"in the past decade\", which indicate the aspects of previous research on LVCSR. For the manner concept, contextual indicators such as the phrases \"present one of\u2026\", \"that use\" and \"for LVCSR\" express the aspects of the methodology used in the research. Propositional terms may be composed of a variety of word forms and syntactic structures, and thus are not necessarily NP-based; they therefore cannot be fully extracted by previous NP-based term extraction approaches.", "cite_spans": [ { "start": 194, "end": 197, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 671, "end": 674, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 787, "end": 795, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most large vocabulary continuous speech recognition (LVCSR) systems in the past decade have used a context-dependent (CD) phone as the fundamental acoustic unit. In this paper, we present one of the first robust LVCSR systems that use a syllable-level acoustic unit for LVCSR on telephone-bandwidth speech. This effort is motivated by the inherent limitations in phone-based approaches-namely the lack of an easy and efficient way for modeling long-term temporal dependencies. A syllable unit spans a longer time frame, typically three phones, thereby offering a more parsimonious framework for modeling pronunciation variation in spontaneous speech. We present encouraging results which show that a syllable-based system exceeds the performance of a comparable triphone system both in terms of word error rate (WER) and complexity. The WER of the best syllable system reported here is 49.1% on a standard SWITCHBOARD evaluation, a small improvement over the triphone system. We also report results on a much smaller recognition task, OGI Alphadigits, which was used to validate some of the benefits syllables offer over triphones. The syllable-based system exceeds the performance of the triphone system by nearly 20%, an impressive accomplishment since the alphadigits application consists mostly of phone-level minimal pair distinctions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the past, there were three main approaches to term extraction: linguistic [4] , statistical [5, 6] , and C/NC-value based [7, 8] hybrid approaches. Most previous approaches can only achieve a good performance on a test article composed of a relatively large number of words. 
Without requiring a large number of words, this study proposes a method for extracting and weighting single- and multi-word propositional terms of varying syntactic structures.", "cite_spans": [ { "start": 77, "end": 80, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 95, "end": 98, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 99, "end": 101, "text": "6]", "ref_id": "BIBREF5" }, { "start": 125, "end": 128, "text": "[7,", "ref_id": "BIBREF6" }, { "start": 129, "end": 131, "text": "8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Figure1. A Manually-Tagged Example of Propositional Terms in an RA", "sec_num": null }, { "text": "This research extracts propositional terms, beyond simply NP-based ones, from the abstracts of technical papers, and regards propositional term extraction as a sequence labeling task. To this end, this approach employs an IOB (Inside, Outside, Beginning) encoding scheme [9] to specify the propositional term boundaries, and conditional random fields (CRFs) [10] to combine arbitrary observation features to find the globally optimal term boundaries. The combined association measure (CAM) [11] is further adopted to modify the propositional term boundaries. In other words, this research not only considers the multi-level contextual information of an RA (such as word statistics, tense, morphology, syntax, semantics, sentence structure, and cue words) but also computes the lexical cohesion of word sequences to determine whether or not a propositional term is formed, since contextual information and lexical cohesion are two major factors for propositional term generation. The system framework essentially consists of a training phase and a test phase. In the training phase, the multi-level features are extracted from domain-specific papers gathered from the SCI (Science Citation Index)-indexed and SCIE (Science Citation Index Expanded)-indexed databases. The domain-specific papers are annotated by experts and then parsed. The feature extraction module collects statistical, syntactic, semantic and morphological level global and local features, and the parameter estimation module calculates conditional probabilities and optimal weights. The propositional term detection CRF model is built from the feature extraction module and the parameter estimation module. During the test phase, users can input an RA and obtain system feedback, i.e., the propositional terms of the RA. When the CRF model produces the preliminary candidate propositional terms, the propositional term generation module utilizes the combined association measure (CAM) to adjust the propositional term boundaries. The system framework proposed in this paper for RA propositional term extraction is shown in Figure 2 . A more detailed discussion is presented in the following subsections.", "cite_spans": [ { "start": 295, "end": 298, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 382, "end": 386, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 514, "end": 518, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 2123, "end": 2131, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "System Design and Development", "sec_num": "2." }, { "text": "In order to produce different levels of information and further assist feature extraction in the training and test phases, several resources were employed. 
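", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assisted Resource", "sec_num": "2.1." }, { "text": "Before detailing these resources, the overall two-phase flow of Figure 2 can be made concrete with a minimal, runnable sketch. The module names (extract_features, crf_label, cam_adjust), the toy feature set, and the labeling rule below are hypothetical stand-ins assumed for illustration, not the authors' implementation.

```python
# Minimal skeleton of the framework: feature extraction -> CRF labeling
# -> CAM boundary adjustment. All parts are illustrative stubs.

def extract_features(tokens):
    # Stand-in for the multi-level (statistical/syntactic/semantic/
    # morphological) feature extraction module.
    return [{"word": w, "is_acronym": w.isupper(), "suffix3": w[-3:].lower()}
            for w in tokens]

def crf_label(features):
    # Stand-in for the trained CRF: a trivial rule that opens a term at
    # acronyms and at common scientific last-word endings.
    endings = {"tem", "ems", "del", "thm"}  # e.g. "system", "model", "algorithm"
    return ["B" if f["is_acronym"] or f["suffix3"] in endings else "O"
            for f in features]

def cam_adjust(tokens, labels):
    # Stand-in for the CAM boundary-adjustment module (identity here).
    return labels

tokens = "we present one of the first robust LVCSR systems".split()
labels = cam_adjust(tokens, crf_label(extract_features(tokens)))
print(list(zip(tokens, labels)))
```

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assisted Resource", "sec_num": "2.1." }, { "text": "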
This study chooses the ACM Computing Classification System (ACM CCS) [12] to serve as the domain terminology list for propositional term extraction from computer science RAs. The ACM CCS provides important subject descriptors for computer science, and was developed by the Association for Computing Machinery. The ACM CCS also provides a list of Implicit Subject Descriptors, which includes names of languages, people, and products in the field of computing. A mapping database, derived from WordNet (http://wordnet.princeton.edu/) and SUMO (Suggested Upper Merged Ontology) (http://ontology.teknowledge.com/) [13] , supplies the semantic concept information of each word and the hierarchical concept information from the ontology. The AWL (Academic Word List) (http://www.vuw.ac.nz/lals/research/awl/) [14] is an academic word list containing 570 word families whose words are selected from different subjects. The syntactic level information of the RAs was obtained using Charniak's parser [15] , which is a \"maximum-entropy inspired\" probabilistic generative model parser for English.", "cite_spans": [ { "start": 225, "end": 229, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 766, "end": 770, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 960, "end": 964, "text": "[14]", "ref_id": null }, { "start": 1149, "end": 1153, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Assisted Resource", "sec_num": "2.1." }, { "text": "For this research goal, given a word sequence W = w_1 w_2 ... w_n, the task is to find the most likely label sequence S = s_1 s_2 ... s_n, i.e., the sequence that maximizes the conditional probability P(S|W).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "A CRF is a conditional probability model as well as an undirected graphical model which defines a conditional distribution over the entire label sequence given the observation sequence. Unlike Maximum Entropy Markov Models (MEMMs), CRFs use an exponential model for the joint probability of the whole label sequence given the observation to solve the label bias problem. CRFs also have a conditional nature and model the real-world data depending on non-independent and interacting features of the observation sequence. A CRF allows the combination of overlapping, arbitrary and agglomerative observation features from both the past and future. The propositional terms extracted by CRFs are not restricted by syntactic variations or multiword forms and the global optimum is generated from different global and local contributor types. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "P(S|W) = \\frac{1}{Z_0} \\exp\\left( \\sum_t \\sum_k \\lambda_k f_k(s_{t-1}, s_t, W) + \\sum_t \\sum_k \\mu_k g_k(s_t, W) \\right)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "The set of weights in a CRF model, \\Psi = (\\lambda_k, \\mu_k), is usually estimated by maximizing the conditional log-likelihood of the labeled sequences in the training data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2."
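}, { "text": "To make this distribution concrete, the following minimal sketch computes P(S|W) for a toy three-word sequence by brute-force enumeration over all label sequences. The two feature functions and the weights \\lambda and \\mu are illustrative assumptions, not the paper's actual templates or estimated parameters.

```python
import itertools
import math

# Brute-force illustration of the CRF distribution:
# P(S|W) = (1/Z0) * exp( sum_t sum_k lambda_k * f_k(s_{t-1}, s_t, W)
#                       + sum_t sum_k mu_k  * g_k(s_t, W) )

LABELS = ["B", "I", "O"]

def f_trans(prev, cur, W, t):
    # Transition feature f_k: reward an I label directly following a B.
    return 1.0 if (prev, cur) == ("B", "I") else 0.0

def g_obs(cur, W, t):
    # Observation feature g_k: reward labeling a capitalized word as B.
    return 1.0 if W[t][0].isupper() and cur == "B" else 0.0

def potential(S, W, lam=1.5, mu=2.0):
    score = 0.0
    for t in range(len(W)):
        if t > 0:
            score += lam * f_trans(S[t - 1], S[t], W, t)
        score += mu * g_obs(S[t], W, t)
    return math.exp(score)

W = ["LVCSR", "systems", "work"]
Z0 = sum(potential(S, W) for S in itertools.product(LABELS, repeat=len(W)))
best = max(itertools.product(LABELS, repeat=len(W)),
           key=lambda S: potential(S, W))
print(best, potential(best, W) / Z0)  # the globally optimal label sequence
```

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2."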
}, { "text": "{ } ( ) ( ) 1 , n i i i D S W = = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "(Equation 3) For fast training, parameter estimation was based on L-BFGS (the limited-memory BFGS) algorithm, a quasi-Newton algorithm for large scale numerical optimization problems [16] . The L-BFGS had proved [17] that converges significantly faster than Improved Iterative Scaling (IIS) and General Iterative Scaling (GIS).", "cite_spans": [ { "start": 183, "end": 187, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 212, "end": 216, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "( ) ( ) ( ) ( ) 1... log | i i i N L P S W \u03a8 \u03a8 = = \u2211 (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "After the CRF model is trained to maximize the conditional log-likelihood of a given training set P(S|W), the test phase finds the most likely sequence using the combination of forward Viterbi and backward A* search [18] . The forward Viterbi search makes the labeling task more efficient and the backward A* search finds the n-best probable labels.", "cite_spans": [ { "start": 216, "end": 220, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional Random Fields (CRFs)", "sec_num": "2.2." }, { "text": "According to the properties of propositional term generation and the characteristics of the CRF feature function, this paper adopted local and global features which consider statistical, syntactic, semantic, morphological, and structural level information. In the CRF model, the features used were binary and were formed by instantiating templates, and the maximum entropy principle was provided for choosing the potential functions. Equation 4shows an example of a feature function, which was set to 1 when the word was found in the rare words list (RW).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Level Features", "sec_num": "2.3." }, { "text": "( ) t 1 2 t , , ,..., 1 1, if s W , 0, otherwise n n s w w w t s isRW g s w \u23a7 = \u2229 = \u23a8 \u23a9 (4) 2.3.1. Local Feature (1). Morphological Level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "Scientific terminology often ends with similar words, e.g. \"algorithm\" or \"model\", or is represented by connected words (CW) expressed with hyphenation, quotation marks or brackets. ACMCSS represents entries in the ACM Computing Classification System (ACM CSS). The last word of every entry in the ACM CSS (ACMCSSAff) satisfies the condition that it is a commonly occurring last word in scientific terminology. The existing propositional terms of the training data were the seeds of multiword terms (MTSeed).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "Words identified as acronyms were stored as useful features, consisting of IsNenadic, IsISD, and IsUC. IsNenadic was defined using the methodology of Nenadi\u0107, Spasi\u0107 and Ananiadou [19] to acquire possible acronyms of a word sequence that was extracted by the C/NC value method. 
IsISD refers to the list of Implicit Subject Descriptors in the ACM CCS, and IsUC signifies that all characters of the word are uppercase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "(2). Semantic Level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "MeasureConcept indicates that the word was found under SUMO's \"UNITS-OF-MEASURE\" concept subclass, and SeedConcept denotes that the concept of the word corresponds to the concept of a propositional term in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "(3). Frequency Level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "A high frequency word list (HF) was generated from the top 5 percent of words in the training data. A special words list (SW) consists of the out-of-vocabulary and rare words. Out-of-vocabulary words are those words that do not exist in WordNet. Rare words are words not appearing in the AWL or which appear in fewer than 5 different abstracts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "( )", "sec_num": null }, { "text": "Syntactic Level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4).", "sec_num": null }, { "text": "This feature was set to 1 if the syntactic pattern of the word sequence matched the regular expression \"(NP)*(preposition)?(NP)*\" (SynPattern), or matched the terms in the training data (SeedSynPattern). SyntaxCon means that keyword-in-context concordances of ACMCSSAff or ACMCSSAffSyn (ACMCSSAff synonyms) were used to find the syntactic frame in the training data. If the part-of-speech (POS) of the word was a cardinal number, then the feature CDPOS was set to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4).", "sec_num": null }, { "text": "Statistical and Syntactic Level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "This research used the CRF model to filter terms extracted by the C/NC-value approach with no frequency threshold. 2.3.2. Global Feature", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "(1). Cue word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "KeyWord indicates that the word sequence matched one of the user's keywords or one word of the user's title. IsTransW and IsCV indicate that a word was found in an NP after a TransW or CV word, respectively. 
TransW indicates summative and enumerative transitional words, such as \"in summary\", \"to conclude\", \"then\", \"moreover\", and \"therefore\", and CV refers to words under SUMO's \"communication\" concepts, such as \"propose\", \"argue\", \"attempt\" and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "(2). Tense:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "If the first sentence of the RA is in the past tense and contains an NP, then the word sequence of that NP is used as a useful feature (PastNP). This is because the first sentence often impresses upon the reader the shortest possible relevant characterization of the paper, and the use of past tense emphasizes the importance of the statement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "(3). Sentence structure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "Phrases in a parallel structure are phrases appearing in a sentence pattern such as \"Phrase, Phrase, or (and) Phrase\", which implies that the same pattern of words represents the same concept. ParallelStruct indicates that the word was part of a phrase in a parallel structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(5).", "sec_num": null }, { "text": "By calculating the cohesiveness of words, the combined association measure (CAM) can assist in further refining the CRF-based propositional term boundaries toward perfect propositional term boundaries. CAM extracts the most relevant word sequence by combining endogenous linguistic statistical information, including the word form sequence and its POS sequence. CAM is a variant of the normalized expectation (NE) and mutual expectation (ME) methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "To characterize the degree of cohesiveness of a sequence of textual units, NE evaluates the average cost of loss for a component in a potential word sequence. NE is defined in Equation 5, where the function C(\u2022) denotes the count of any potential word sequence. An example of NE is shown in Equation 6. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "NE([w_1 \\ldots w_i \\ldots w_n]) = \\frac{C([w_1 \\ldots w_i \\ldots w_n])}{\\frac{1}{n}\\left( C([w_2 \\ldots w_n]) + \\sum_{i=2}^{n} C([w_1 \\ldots \\hat{w}_i \\ldots w_n]) \\right)}, where \\hat{w}_i marks the omitted word", "eq_num": "(5)" } ], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[A worked instance of the NE formula of Equation 5 for a specific word sequence]", "eq_num": "(6)" } ], "section": "Word Cohesiveness Measure", "sec_num": "2.4."
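}, { "text": "The following toy computation illustrates the NE of Equation 5 on a tiny assumed corpus; the naive substring counting and the corpus itself are stand-ins for illustration, not the study's counting scheme.

```python
# Toy normalized-expectation (NE) computation over a tiny corpus.

corpus = [
    "speech recognition system",
    "speech recognition system",
    "speech system",
    "recognition system",
]

def count(seq):
    # Naive count: in how many corpus lines does the word sequence occur?
    return sum(" ".join(seq) in line for line in corpus)

def ne(ngram):
    n = len(ngram)
    # The n subsequences obtained by omitting one word each (w_i hat).
    subs = [ngram[1:]] + [ngram[:i] + ngram[i + 1:] for i in range(1, n)]
    avg = sum(count(s) for s in subs) / n
    return count(ngram) / avg if avg else 0.0

print(ne(("speech", "recognition", "system")))  # cohesiveness of the trigram
```

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4."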
}, { "text": "Based on NE and relative frequency, the ME of any potential word sequence is defined as Equation 7, where function P(\u2022) represents the relative frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "[ ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) [ ] ( ) [ ] ( ) 1 1 1 ... ... ... ... ... ... i n i n i n ME w w w P w w w NE w w w = \u00d7", "eq_num": "(7)" } ], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "CAM considers that the global degree of cohesiveness of any word sequence is evaluated by integrating the strength in a word sequence and the interdependence of its POS. Thus CAM evaluates the cohesiveness of a word sequence by the combination of its own ME and the ME of its associated POS sequence. In Equation 8 [ ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 ... ... ... ... ... ... i n i n i n CAM w w w ME w w w ME p p p \u03b1 \u03b1 \u2212 = \u00d7", "eq_num": "( ) [ ] ( ) [ ] ( ) 1 1" } ], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "This paper uses a sliding window moving in a frame and compares the CAM value of neighboring word sequences to determine the optimal propositional term boundary. Most lexical relations associate words distributed by the five neighboring words [20] . Therefore this paper only calculates the CAM value of the three words to the right and the three words to the left of the CRF-based terms. Figure 3 represents an illustration for the CAM computation that was fixed in the [(2*3) + length(CRF-Based term)] frame size with a sliding window. When the window starts a forward or backward move in the frame, the three marginal words of a term are the natural components of the window. As the word number of the CRF term is less than three words, the initial sliding windows size is equal to the word number of the term. To find the optimal propositional term boundary, this study calculates the local maximum CAM value by using the Modified CamLocalMax Algorithm. The principle of the original algorithm [21] is to infer the word sequence as a multiword unit if the CAM value is higher than or equal to the CAM value of all its sub-group of (n-1) words and if the CAM value is higher than the CAM value of all its super-group of (n+1) words. In the Modified CamLocalMax Algorithm, when the CAM value of the combination of CRF-based single word propositional terms and its immediate neighbor word is higher than the average of the CAM value of bi-gram propositional terms in the training data, the components of the CRF-based single word propositional terms are turned into a bi-gram propositional term. 
The complete Modified CamLocalMax Algorithm is shown in the following, where cam denotes the combined association measure, size(\u2022) returns the number of words of a possible propositional term, M represents a possible propositional term, \u2126_{n+1} denotes the set of all possible (n+1)-grams containing M, \u2126_{n-1} denotes the set of all possible (n-1)-grams contained in M, and bi-term denotes the bi-gram propositional terms in the training data. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Cohesiveness Measure", "sec_num": "2.4." }, { "text": "The IOB encoding scheme was adopted to label the words, where I represents words Inside the propositional term, O marks words Outside the propositional term, and B denotes the Beginning of a propositional term. It should be noted that here the B tag differs slightly from Ramshaw and Marcus's definition, which marks the left-most component of a baseNP for discriminating recursive NPs. Figure 4 shows an example of the IOB encoding scheme that specifies the B, I, and O labels for the sentence fragment \"The syllable-based system exceeds the performance of the triphone system by\u2026\". An advantage of this encoding scheme is that it can avoid the problem of ambiguous propositional term boundaries, since IOB tags can identify the boundaries of immediately neighboring propositional terms, whereas binary-based encoding schemes cannot. In Figure 4 , \"syllable-based system\" and \"exceeds\" are individual, immediately neighboring propositional terms distinguished by B tags. ", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 395, "text": "Figure 4", "ref_id": "FIGREF6" }, { "start": 833, "end": 841, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Encoding Schema", "sec_num": "2.6." }, { "text": "To facilitate the development and evaluation of the propositional term extraction method, experts manually annotated 260 research abstracts, including speech, language, and multimedia information processing journal papers from the SCI and SCIE-indexed databases. In all, there were 109, 72, and 79 annotated research abstracts in the fields of speech, language, and multimedia information processing, respectively. At run time, 90% of the RAs were allocated as the training data and the remaining 10% were reserved as the test data in all evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1." }, { "text": "In system implementation, the CRF++: Yet Another CRF toolkit 0.44 [22] was adopted. The training parameters were chosen using ten-fold cross-validation in each experiment.", "cite_spans": [ { "start": 66, "end": 70, "text": "[22]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1." }, { "text": "The proposed system was compared with three baseline systems. The first was the C/NC-value algorithm with no frequency threshold, because the C/NC-value algorithm is a hybrid methodology that has historically performed better than purely linguistic and statistical approaches. The second baseline system, proposed by Nenadi\u0107 et al. [8] , is a variant of the C/NC-value algorithm enriched by morphological and structural variants. The final baseline system is a linguistic approach proposed by Ananiadou [4] . 
That study, however, made no comparisons with statistical approaches, which are suitable for a document containing a large number of words.", "cite_spans": [ { "start": 325, "end": 328, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 494, "end": 497, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1." }, { "text": "To evaluate the performance in this study, two hit types for propositional term extraction are employed: perfect and imperfect hits [23] . A perfect hit means that the boundaries of a term's maximal term form conform to the boundaries assigned by the automatic propositional term extraction. An imperfect hit means that the boundaries assigned by the automatic propositional term extraction do not conform to the boundaries of a term's maximal term form but include at least one word belonging to a term's maximal term form. Taking the word sequence \"large vocabulary continuous speech recognition\" as an example, when the system detects that \"vocabulary continuous speech recognition\" is a propositional term, this is an imperfect hit. The only perfect hit occurs when \"large vocabulary continuous speech recognition\" itself is recognized. The metrics of recall and precision were also used to measure the perfect and imperfect hits. The definitions of recall and precision for perfect and imperfect hits are shown in Equations 9 and 10. Thus, our system is evaluated with respect to the accuracies of propositional term detection and propositional term boundary detection. That is, our motivation for propositional term extraction was to provide CRF and CRF+CAM for accurate detection of propositional terms and the improvement of the detected propositional term boundaries.", "cite_spans": [ { "start": 114, "end": 118, "text": "[23]", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1." }, { "text": "Recall = Perfect (or Imperfect) Hits / Target Term Forms (9); Precision = Perfect (or Imperfect) Hits / Extracted Term Forms (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.1." }, { "text": "This study empirically evaluated two aspects of our research for different purposes. First, the performance of propositional term extraction for CRF-based and CRF+CAM-based propositional term sets on different data was measured. Second, the impact of different level features for propositional term extraction using CRF was evaluated. Table 1 lists the recall rate, the precision rate and F-score of propositional term extraction for imperfect hits on the different domain data. In each case, the recall and precision of imperfect hits using CRF inside testing were greater than 93%. The CRF outside test achieved approximately 73% average recall and 73% average precision for imperfect hits, and the CAM approach improved the original recall and precision for imperfect hits. The C/NC-value approach achieved approximately 56% average recall and 63% average precision for imperfect hits. The performance of Ananiadou's approach was about 56% average recall and 67% average precision for imperfect hits. Another baseline, the approach of Nenadi\u0107, Ananiadou and McNaught, obtained approximately 62% average recall and 67% average precision for imperfect hits. 
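", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 342, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.2." }, { "text": "As a worked example of Equations 9 and 10, the following sketch computes perfect- and imperfect-hit recall and precision under assumed counts; the numbers are illustrative, not the study's results.

```python
# Worked example of Equations 9 and 10 under assumed counts: out of 50
# gold (target) term forms, the system outputs 40 term forms; 20 match
# gold boundaries exactly (perfect hits) and 12 more overlap a gold term
# (so 32 term forms count as imperfect hits).

target_termforms = 50
extracted_termforms = 40
perfect_hits = 20
imperfect_hits = perfect_hits + 12

print("perfect   recall %.2f, precision %.2f"
      % (perfect_hits / target_termforms, perfect_hits / extracted_termforms))
print("imperfect recall %.2f, precision %.2f"
      % (imperfect_hits / target_termforms,
         imperfect_hits / extracted_termforms))
```

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.2." }, { "text": "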
Table 2 summarizes the recall rates, precision rates and F-scores of propositional term extraction for perfect hits on data from the different domains. The CRF inside test achieved approximately 67% average recall and 66% average precision for perfect hits, but the CRF outside test did not perform as well. However, the CAM approach still achieved an increase of 1%-7% for perfect hits. The C/NC-value approach obtained approximately 30% average recall and 34% average precision for perfect hits. Ananiadou's approach achieved approximately 29% average recall and 38% average precision for perfect hits. The performance of Nenadi\u0107, Ananiadou and McNaught's approach was about 32% average recall and 40% average precision for perfect hits.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.2." }, { "text": "The results show that the C/NC-value approach does not demonstrate a significant change across the different fields, except for the multimedia field, which had a slightly better recall rate. The main reasons for errors produced by the C/NC-value approach were propositional terms that were single words or acronyms, propositional terms that were not NP-based, or propositional terms that consisted of more than four words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "Ananiadou's approach was based on a morphological analyzer and combination rules for the different levels of word forms. Experimental results showed that this approach is still unable to deal with single words or acronyms, and propositional terms that are not NP-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "Nenadi\u0107 et al.'s approach considered local morphological and syntactic variants, using the C-value to determine the propositional terms. This approach had slightly better performance than the C/NC-value methodology. Acronyms were included in the propositional term candidates but were filtered out by frequency, as they often appear only a few times. This approach also ignored single words and propositional terms that were not NP-based. Furthermore, none of these three baseline systems is suitable for handling special symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "For CRF inside testing, both the precision and recall rates were significantly better for imperfect hits, but the precision and recall rates were reduced by about 30% for perfect hits in most RAs. Due to insufficient training data, CRF no longer achieved outstanding results. In particular, the large variability and abstract descriptions of the multimedia field RAs led to huge differences between measures. For example, in the sentence \"For surfaces with varying material properties, a full segmentation into different material types is also computed\", \"full segmentation into different material types\" is a propositional term that is not concretely specified as a method. 
CRF achieved a better result in recall rate but, unlike the C/NC-value approach, failed on propositional term boundary detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "The CAM approach effectively enhanced propositional term boundary detection by calculating word cohesiveness, except in the case of the multimedia data. The CAM approach could not achieve similar performance on the multimedia data because its terms have longer word counts, differing from the data of the other fields. However, the CAM approach performed best with \u03b1 equal to 0.4, which demonstrates that the POS sequence contributed slightly more to multiword term construction. The CAM approach considers not only the POS sequence but also the word sequence; therefore the results are a little better for the speech data, which constitute the biggest part of the training data (SCI and SCIE-indexed databases).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "The above results show that the CRF approach exhibited impressive improvements in propositional term detection. The major reason for false positives was that the amount of data was not enough to construct the optimal model. Experimental results revealed that the CAM is sufficiently efficient for propositional term boundary enhancement, but propositional terms with longer word counts were excluded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of Different Methods", "sec_num": null }, { "text": "In order to assess the impact of different level features on the extraction method, this paper also carried out an evaluation of the performance when different level features were omitted. Table 3 presents the performance of CRF when omitting different level features for imperfect hits, where the symbol \"-\" denotes a test without that level's features. For all data, the recall rate was reduced by approximately 1%-5% and the precision rate was reduced by approximately 2%-6% in the inside testing results. In outside testing on all data, the recall rate was reduced by 2%-10% and the precision rate was reduced by 1%-5%. The recall and precision for the speech data remained similar without the semantic level features, and showed little impact from omitting the other local features. For the language data, performance without the morphological, syntactic, frequency, or syntactic & statistical level features was slightly worse than the original result, while without the semantic level features the original performance was preserved. The performance for the multimedia data was affected greatly by the semantic level features; a slight improvement was obtained without the morphological and the syntactic & statistical level features, and similar results were obtained when the frequency and syntactic level features were omitted. In Table 4 , it can be noticed that the omission of any single level's features results in a deterioration in the performance for perfect hits. Removing the syntactic level features had the most pronounced effect on performance for the all, speech, and language data, while removing the semantic level features had the least effect on performance for the all, speech, and language data. 
According to the experimental results, neither the frequency level features nor the syntactic and syntactic & statistical level features resulted in any significant performance improvement for the multimedia data, while removing the semantic level features had the greatest effect on its performance. Overall, the five different feature levels were all somewhat effective for propositional term extraction. This suggests that propositional terms are determined by different levels of feature information, which can be effectively used for propositional term extraction. The frequency level features contributed little to propositional term extraction for the all and speech data. This may be due to the fact that speech data comprised the main portion of the training data. In the multimedia case, the semantic level features were useful. Although semantic level features may include some useful information, it remains a problem to correctly utilize such information across different domain data for propositional term extraction. Syntactic and morphological level features obtained the best performance for the all, speech, and language data. This may be due to the amount of training data in each domain and the varied word forms of propositional terms in the multimedia data. The syntactic & statistical level features improved or retained the same performance, which indicates the combined effectiveness of syntactic and statistical information. Table 5 shows the distribution of error types in propositional term extraction for each domain's data using outside testing. This study adopts the measure used in [24] to evaluate the error types, where M indicates the condition in which the boundary of the system matches that of the standard, O denotes the condition in which the boundary of the system is outside that of the standard, and I denotes the condition in which the boundary of the system is inside that of the standard. Therefore, the MI, IM, II, MO, OM, IO, OI and OO error types were used to evaluate the error distribution. The relative error rate (RER) and the absolute error rate (AER) were computed in the error analysis: the relative error rate is measured against all error types, and the absolute error rate against the standard. In the overall error distribution, the main error types were \"IM\" and \"MI\", and CRF+CAM significantly reduced these two error types. ", "cite_spans": [ { "start": 3343, "end": 3347, "text": "[24]", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1267, "end": 1274, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 3182, "end": 3189, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation of Different Level Features", "sec_num": null }, { "text": "This study has presented a conditional random field model and a combined association measure approach to propositional term extraction from research abstracts. Unlike previous approaches that use POS patterns and statistics to extract NP-based multiword terms, this research considers lexical cohesion and context information, integrating CRFs and CAM to extract single-word or multiword propositional terms. Experiments demonstrated that in each corpus, both CRF inside and outside tests showed an improved performance for imperfect hits. 
The combined association measure, which calculates the cohesiveness of words, further effectively enhanced the propositional term boundaries. The conditional random field model initially detects propositional terms based on their local and global features, which include statistical, syntactic, semantic, morphological, and structural level information. Experimental results also showed that the different multi-level features played a key role in the CRF propositional term detection model for different domain data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contrastive Rhetoric: Cross-Cultural Aspects of Second Language Writing U.K.: Cambridge Applied Linguistics", "authors": [ { "first": "U", "middle": [ "M" ], "last": "Connor", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. M. Connor, Contrastive Rhetoric: Cross-Cultural Aspects of Second Language Writing U.K.: Cambridge Applied Linguistics, 1996.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Term Extraction and Automatic Indexing", "authors": [ { "first": "C", "middle": [], "last": "Jacquemin", "suffix": "" }, { "first": "D", "middle": [], "last": "Bourigault", "suffix": "" } ], "year": 2003, "venue": "Oxford Handbook of Computational Linguistics, M. Ruslan", "volume": "", "issue": "", "pages": "599--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Jacquemin and D. Bourigault, \"Term Extraction and Automatic Indexing,\" in Oxford Handbook of Computational Linguistics, M. Ruslan, Ed. Oxford: Oxford University Press, 2003, pp. 599-615.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "How to Write a Scientific Paper", "authors": [ { "first": "C.-K", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.-K. Cheng, How to Write a Scientific Paper? Taipei: Hwa Kong Press, 2003.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A Methodology for Automatic Term Recognition", "authors": [ { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 1994, "venue": "15th Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "1034--1038", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Ananiadou, \"A Methodology for Automatic Term Recognition,\" in 15th Conference on Computational Linguistics -Volume 2, Kyoto, Japan, 1994, pp. 1034-1038.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generating and Evaluating Domain-Oriented Multi-word Terms From Texts", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Damerau", "suffix": "" } ], "year": 1993, "venue": "Inf. Process. Manage", "volume": "29", "issue": "", "pages": "433--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Damerau, \"Generating and Evaluating Domain-Oriented Multi-word Terms From Texts,\" Inf. Process. Manage., vol. 29, pp. 
433-447, 1993.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic Natural Acquisition of a Terminology", "authors": [ { "first": "C", "middle": [], "last": "Enguehard", "suffix": "" }, { "first": "L", "middle": [], "last": "Pantera", "suffix": "" } ], "year": 1995, "venue": "Journal of Quantitative Linguistics", "volume": "2", "issue": "", "pages": "27--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Enguehard and L. Pantera, \"Automatic Natural Acquisition of a Terminology,\" Journal of Quantitative Linguistics, vol. 2, pp. 27-32, 1995.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic Recognition of Multi-word Terms: the C-value/NC-Value Method", "authors": [ { "first": "K", "middle": [ "T" ], "last": "Frantzi", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" }, { "first": "H", "middle": [], "last": "Mima", "suffix": "" } ], "year": 2000, "venue": "Int. J. on Digital Libraries", "volume": "3", "issue": "", "pages": "115--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. T. Frantzi, S. Ananiadou, and H. Mima, \"Automatic Recognition of Multi-word Terms: the C-value/NC-Value Method,\" Int. J. on Digital Libraries, vol. 3, pp. 115-130, 2000.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Enhancing Automatic Term Recognition through Recognition of Variation", "authors": [ { "first": "G", "middle": [], "last": "Nenadi\u0107", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" }, { "first": "J", "middle": [], "last": "Mcnaught", "suffix": "" } ], "year": 2004, "venue": "20th international conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Nenadi\u0107, S. Ananiadou, and J. McNaught, \"Enhancing Automatic Term Recognition through Recognition of Variation,\" in 20th international conference on Computational Linguistics Geneva, Switzerland: Association for Computational Linguistics, 2004.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text Chunking Using Transformation-Based Learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Third Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "82--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. A. Ramshaw and M. P. Marcus, \"Text Chunking Using Transformation-Based Learning,\" in Third Workshop on Very Large Corpora, 1995, pp. 82-94.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. Mccallum, and F. Pereira, \"Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,\" in ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning, 2001, pp. 
282-289.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multiword Unit Hybrid Extraction", "authors": [ { "first": "G", "middle": [], "last": "Dias", "suffix": "" } ], "year": 2003, "venue": "ACL 2003 Workshop on Multiword Expressions", "volume": "18", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Dias, \"Multiword Unit Hybrid Extraction,\" in ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment -Volume 18, 2003, pp. 41-48.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The ACM Computing Classification System", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Association for Computing Machinery, Inc., The ACM Computing Classification System [1998 Version], New York: ACM. Available: http://www.acm.org/class/1998/. [Accessed: June 17, 2006]", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Suggested Upper Merged Ontology (SUMO) Mapping to WordNet", "authors": [ { "first": "I", "middle": [], "last": "Niles", "suffix": "" }, { "first": "A", "middle": [], "last": "Pease", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Niles and A. Pease, Suggested Upper Merged Ontology (SUMO) Mapping to WordNet, Piscataway NJ: IEEE. Available: http://sigmakee.cvs.sourceforge.net/sigmakee/KBs/WordNetMappings/. [Accessed: 2004]", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Eugene Charniak's Parser, Providence: Brown University", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, Eugene Charniak's Parser, Providence: Brown University. Available: http://cs.brown.edu/~ec/. [Accessed: June 1, 2006]", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Updating quasi-Newton Matrices with Limited Storage", "authors": [ { "first": "J", "middle": [], "last": "", "suffix": "" } ], "year": 1980, "venue": "Mathematics of Computation", "volume": "35", "issue": "", "pages": "773--782", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nocedal, \"Updating quasi-Newton Matrices with Limited Storage,\" Mathematics of Computation, vol. 35, pp. 773-782, 1980.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Shallow Parsing with Conditional Random Fields", "authors": [ { "first": "F", "middle": [], "last": "Sha", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2003, "venue": "2003 Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT/NAACL-03)", "volume": "", "issue": "", "pages": "213--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sha and F. Pereira, \"Shallow Parsing with Conditional Random Fields,\" in 2003 Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT/NAACL-03), Edmonton, Canada, 2003, pp. 213-220.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Probabilistic Segmentation for Segment-Based Speech Recognition", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Lee", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. C. 
Lee, \"Probabilistic Segmentation for Segment-Based Speech Recognition.\" M. S. thesis, Massachusetts Institute of Technology, MA, U.S.A., 1998.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic Acronym Acquisition and Term Variation Management within Domain-specific Texts", "authors": [ { "first": "G", "middle": [], "last": "Nenadi\u0107", "suffix": "" }, { "first": "I", "middle": [], "last": "Spasi\u0107", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2002, "venue": "Third International Conference on Language Resources and Evaluation (LREC2002)", "volume": "", "issue": "", "pages": "2155--2162", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Nenadi\u0107, I. Spasi\u0107, and S. Ananiadou, \"Automatic Acronym Acquisition and Term Variation Management within Domain-specific Texts,\" in Third International Conference on Language Resources and Evaluation (LREC2002), Las Palmas, Canary Islands, Spain, 2002, pp. 2155-2162.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "English Lexical Collocations: A Study in Computational Linguistics", "authors": [ { "first": "S", "middle": [], "last": "Jones", "suffix": "" }, { "first": "J", "middle": [], "last": "Sinclair", "suffix": "" } ], "year": 1974, "venue": "Cahiers de Lexicologie", "volume": "23", "issue": "", "pages": "15--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Jones and J. Sinclair, \"English Lexical Collocations: A Study in Computational Linguistics,\" Cahiers de Lexicologie, vol. 23, pp. 15-61, 1974.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Extraction Automatique d'Associations Lexicales \u00e0partir de Corpora", "authors": [ { "first": "G", "middle": [], "last": "Dias", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Dias, \"Extraction Automatique d'Associations Lexicales \u00e0partir de Corpora.\" Ph. D dissertation, DI/FCT New University of Lisbon, Lisbon, Portugal, and LIFO University, Orl\u00e9ans, France , 2002.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "CRF++: Yet Another CRF toolkit 0.44", "authors": [ { "first": "K", "middle": [], "last": "Taku", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Taku, CRF++: Yet Another CRF toolkit 0.44. Available: http://crfpp.sourceforge.net/. [Accessed: Oct 1, 2006]", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Criteria for Measuring Term Recognition", "authors": [ { "first": "A", "middle": [], "last": "Lauriston", "suffix": "" } ], "year": 1995, "venue": "Seventh Conference on European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lauriston, \"Criteria for Measuring Term Recognition,\" in Seventh Conference on European Chapter of the Association for Computational Linguistics, Dublin, Ireland, 1995, pp. 
17-22.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "ME-based Biomedical Named Entity Recognition Using Lexical Knowledge", "authors": [ { "first": "K.-M", "middle": [], "last": "Park", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Kim", "suffix": "" }, { "first": "H.-C", "middle": [], "last": "Rim", "suffix": "" }, { "first": "Y.-S", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2006, "venue": "ACM Transactions on Asian Language Information Processing (TALIP)", "volume": "5", "issue": "", "pages": "4--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "K.-M. Park, S.-H. Kim, H.-C. Rim, and Y.-S. Hwang, \"ME-based Biomedical Named Entity Recognition Using Lexical Knowledge,\" ACM Transactions on Asian Language Information Processing (TALIP), vol. 5, pp. 4-21, 2006.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "The System Framework of Propositional Term Extraction" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "framework with the set of weights \u03a8 can be obtained from the following equation." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": ", CAM integrates the ME of word form sequence [] . Let \u03b1 be a weight between 0 and 1, which determines the degree of the effect of POS or word sequence in the word cohesiveness measure." }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "An Illustration for the CAM Computation Steps" }, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": ", the set of all the possible (n+1)grams containing M, , the set of all the possible (n-1)grams contained in M Output: CT={ct 1 ,ct 2 ,\u2026ct n }, a CRF+CAM-based propositional term set If (size(M)=2 and cam(M) > cam(y)) or ( size(M)>2 and cam(M) \u2267 cam(x) and cam(M) >cam(y) ) or ( size(M)=1 and cam(bi-gram) \u2266 cam(M) ) End if Return ct" }, "FIGREF6": { "uris": null, "type_str": "figure", "num": null, "text": "An Example of the IOB Encoding Scheme 3. Evaluation" }, "TABREF2": { "type_str": "table", "content": "
The Performance of Imperfect Hits on Different Data

Method                  | All Data       | Language Data
                        | R    P    F    | R    P    F
CRF Inside Testing      | 93.2 94.5 93.9 | 96.7 98.1 97.4
CRF+CAM Inside Testing  | 96.6 96.0 96.3 | 98.4 99.6 99.0
CRF Outside Testing     | 77.1 74.1 75.6 | 78.6 76.3 77.4
CRF+CAM Outside Testing | 82.6 82.5 82.6 | 85.8 88.8 87.2
C/NC Value              | 53.4 65.3 58.8 | 48.1 53.3 50.6
Ananiadou               | 51.3 70.0 59.2 | 52.4 68.4 59.3
Nenadi\u0107 et al.     | 58.0 72.3 64.4 | 60.1 69.0 64.3

Method                  | Speech Data    | Multimedia Data
                        | R    P    F    | R    P    F
CRF Inside Testing      | 96.6 99.0 98.2 | 98.0 99.2 98.6
CRF+CAM Inside Testing  | 97.5 99.0 99.4 | 98.6 99.3 99.0
CRF Outside Testing     | 74.9 76.1 74.3 | 61.2 65.0 63.1
CRF+CAM Outside Testing | 82.6 83.9 84.2 | 65.4 71.2 68.2
C/NC Value              | 53.5 79.0 62.7 | 67.7 53.2 59.6
Ananiadou               | 53.1 68.4 59.8 | 65.4 60.0 62.6
", "text": "", "html": null, "num": null }, "TABREF3": { "type_str": "table", "content": "
Method                  | All Data       | Language Data
                        | R    P    F    | R    P    F
CRF Inside Testing      | 66.5 66.2 66.3 | 66.4 67.5 67.0
CRF+CAM Inside Testing  | 69.0 68.6 68.8 | 69.4 69.9 69.6
CRF Outside Testing     | 39.8 42.2 41.9 | 43.2 37.3 40.0
CRF+CAM Outside Testing | 43.5 49.2 46.2 | 45.3 45.4 45.3
C/NC Value              | 27.6 37.8 31.9 | 28.9 29.1 29.0
Ananiadou               | 26.3 37.9 31.1 | 31.3 37.7 34.2
Nenadi\u0107 et al.     | 30.2 41.0 34.8 | 31.2 40.9 35.4

Method                  | Speech Data    | Multimedia Data
                        | R    P    F    | R    P    F
CRF Inside Testing      | 62.3 61.0 61.7 | 70.9 70.3 70.6
CRF+CAM Inside Testing  | 69.6 67.9 68.7 | 73.1 70.3 71.6
CRF Outside Testing     | 36.9 41.6 39.1 | 42.1 42.5 42.3
CRF+CAM Outside Testing | 42.8 48.9 45.6 | 45.6 45.0 44.3
C/NC Value              | 29.0 40.0 33.6 | 34.6 29.9 32.1
Ananiadou               | 27.4 37.7 31.7 | 29.3 38.0 33.1
Nenadi\u0107 et al.     | 30.0 38.6 33.7 | 35.3 37.6 35.3
", "text": "The Performance of Perfect Hits on Different Data", "html": null, "num": null }, "TABREF4": { "type_str": "table", "content": "
Data Type                   | All | Speech | Language | Multimedia
Testing Type                | R P | R P    | R P      | R P
Inside - Frequency Features | 92 92
", "text": "The Performance of CRF Excepting Different Level Features for Imperfect Hits", "html": null, "num": null }, "TABREF5": { "type_str": "table", "content": "
Data Type                   | All | Speech | Language | Multimedia
Testing Type                | R P | R P    | R P      | R P
Inside - Frequency Features | 63 60 56 55
", "text": "The Performance of CRF without Different Level Features for Perfect Hits", "html": null, "num": null }, "TABREF6": { "type_str": "table", "content": "
Error Type     | CRF        | CRF+CAM    | CRF         | CRF+CAM
               | RER   AER  | RER   AER  | RER   AER   | RER   AER
               | All Data                | Speech Data
MI             | 24.62 6.11 | 18.00 2.90 | 24.90 6.41  | 20.30 3.03
IM             | 36.48 8.72 | 28.50 4.88 | 38.22 8.06  | 32.50 4.08
II             | 18.67 4.96 | 23.40 3.88 | 12.37 2.88  | 14.80 2.05
MO, OM, IO, OI | 7.49  3.08 | 12.50 1.07 | 10.50 2.46  | 12.85 1.85
OO             | 12.74 2.91 | 17.60 2.08 | 14.01 4.55  | 19.55 2.53
               | Language Data           | Multimedia Data
MI             | 23.11 4.03 | 18.50 2.67 | 19.18 6.58  | 17.25 4.64
IM             | 31.25 9.08 | 28.50 3.56 | 25.72 9.00  | 19.10 4.05
II             | 26.48 7.50 | 31.00 4.07 | 36.34 10.63 | 34.34 8.30
MO, OM, IO, OI | 8.12  1.03 | 12.45 1.89 | 6.42  5.00  | 10.09 1.53
OO             | 11.04 2.06 | 9.55  1.20 | 12.34 4.85  | 19.22 3.85
", "text": "Distribution of Error Types on Propositional Term Extraction", "html": null, "num": null } } } }