{ "paper_id": "O00-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:59:07.991201Z" }, "title": "Building A Chinese Text Summarizer with Phrasal Chunks and Domain Knowledge", "authors": [ { "first": "Weiquan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel China Research Center", "location": { "addrLine": "601 North Tower, Beijing Kerry Center #1 Guanghua Road", "postCode": "10002", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Joe", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel China Research Center", "location": { "addrLine": "601 North Tower, Beijing Kerry Center #1 Guanghua Road", "postCode": "10002", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Joe", "middle": [ "F" ], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel China Research Center", "location": { "addrLine": "601 North Tower, Beijing Kerry Center #1 Guanghua Road", "postCode": "10002", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "@intel", "middle": [], "last": "Zhou}", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel China Research Center", "location": { "addrLine": "601 North Tower, Beijing Kerry Center #1 Guanghua Road", "postCode": "10002", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "", "middle": [], "last": "Com", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel China Research Center", "location": { "addrLine": "601 North Tower, Beijing Kerry Center #1 Guanghua Road", "postCode": "10002", "settlement": "Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces a Chinese summarizier called ThemePicker. Though the system incorporates both statistical and text analysis models, the statistical model plays a major role during the automated process. In addition to word segmentation and proper names identification, phrasal chunk extraction and content density calculation are based on a semantic network pre-constructed for a chosen domain. To improve the readability of the extracted sentences as auto-generated summary, a shallow parsing algorithm is used to eliminate the semantic redundancy.", "pdf_parse": { "paper_id": "O00-1004", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces a Chinese summarizier called ThemePicker. Though the system incorporates both statistical and text analysis models, the statistical model plays a major role during the automated process. In addition to word segmentation and proper names identification, phrasal chunk extraction and content density calculation are based on a semantic network pre-constructed for a chosen domain. To improve the readability of the extracted sentences as auto-generated summary, a shallow parsing algorithm is used to eliminate the semantic redundancy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Due to the overwhelming amount of textual resources over Internet people find it increasingly difficult to grasp targeted information without any adjunctive tools. One of these tools is automatic summarization and abstraction. 
When coupled with general search and retrieval systems, text summarization can help alleviate the effort of accessing these abundant information resources. It condenses the original text, enabling the user to quickly capture its main theme.

Based on the techniques employed (Hovy, 1998), existing summarization systems can be divided into three categories: word-frequency-based, cohesion-based, and information-extraction-based.

Compared with the other two techniques, the first is statistically oriented, fast, and domain independent (Brandow et al., 1995). The quality of its output, however, is often questionable. Cohesion-based techniques (sometimes called linguistically oriented) can generate more fluent abstracts, but the sentence-by-sentence computation over the entire raw text is often quite expensive. Even the most advanced part-of-speech (POS) tagging or syntactic parsing algorithms are unable to handle all the language phenomena that emerge from gigabytes of naturally running text. Summarization based on information extraction relies on predefined templates and is domain dependent. The unpredictable textual content on the Internet, however, can leave the templates incomplete or internally contradictory, no matter how well they are designed.

In this paper we introduce a Chinese summarization system. Though it is a hybrid system incorporating some natural language techniques, for the sake of speed and efficiency of text processing we adopted a statistically oriented algorithm and let it play the major role in the automatic process. After pre-processing, the system first extracts phrasal chunks from the input. Phrasal chunks normally refer to meaningful terms and proper names in the text that are difficult to capture using simple methods. Then we use a domain-specific concept network to calculate content density, i.e., to measure the significance score of each individual sentence. Finally, a Chinese dependency grammar is applied as a shallow parser to process the extracted sentences into bracketed frames, achieving further binding and embellishment of the final output.

2 System Overview

The system, hereafter referred to as ThemePicker, works as a plug-in to web browsers. When the user surfs selected Chinese newspaper web sites, ThemePicker monitors the content of the browser's window. When the number of domain words or terms exceeds a pre-defined threshold, it kicks off the summary generation process and displays the output in a separate window. Currently, we have chosen economic news as our specific domain.
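As a rough illustration of this trigger, the following sketch counts occurrences of domain terms in the visible page text and fires the summarizer once a pre-defined threshold is crossed. It is a minimal sketch under our own assumptions: the term list, the threshold value, and the names monitor_page and count_domain_terms are hypothetical, not taken from ThemePicker itself.

    # Hypothetical domain-term trigger (terms and threshold are illustrative).
    DOMAIN_TERMS = ("经济", "市场", "投资", "股票", "银行")  # sample economic-news terms
    TRIGGER_THRESHOLD = 8  # assumed cutoff; the paper does not publish its value

    def count_domain_terms(page_text):
        """Count occurrences of known domain terms in the page text."""
        return sum(page_text.count(term) for term in DOMAIN_TERMS)

    def monitor_page(page_text, summarize):
        """Kick off summary generation once enough domain terms are seen."""
        if count_domain_terms(page_text) >= TRIGGER_THRESHOLD:
            return summarize(page_text)  # shown in a separate window by the plug-in
        return None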
The system consists of four components (see Fig. 1). The first component is a pre-processor that deals with the layout of the news web pages, removing unnecessary HTML tags while keeping the headline, title, and paragraph hierarchy. The retained information provides the locations of the extracted sentences for later manipulation.

The second component performs two tasks in parallel: Chinese word segmentation, and the identification and extraction of phrasal chunks. As is well known, Chinese is an ideographic, character-based language with no spaces or delimiting symbols between adjacent words. After breaking an input sentence into a chain of separate character strings, we use a lexical knowledge base to look up each word and parse the sentence appropriately. Person names and other proper names are also recognized during the segmentation process. Phrasal chunks are lexical units larger than words but not idioms; they are content-oriented special terms (Zhou, 1999). We examined hundreds of documents and frequently encountered phrasal chunks that bear important information about the document. Since the meaning of a phrasal chunk is by no means the simple aggregation of the meanings of the words in it, word segmentation alone cannot handle it. ThemePicker uses a statistical algorithm for phrasal chunk identification, aiming at larger lexical units that consist of two or more words always occurring in the same sequence.

The third component computes the degree of content density for each sentence, assigning each a significance score. A concept net containing more than 2,000 concept nodes in the economic news domain is used to define the semantic similarities between different sentences and to adjust the significance scores of sentences across the input text. Sentences with high scores are selected for inclusion in the candidate summary.

The fourth component analyzes the candidate sentences using a Chinese dependency grammar, the purpose being to improve the readability of the output summary. In the remaining sections of this paper we describe the major system components in some detail: word segmentation and proper name identification (Section 3), phrasal chunk extraction (Section 4), domain knowledge for summary generation (Section 5), and the dependency grammar (Section 6). The final section (Section 7) is devoted to system evaluation.

3 Word Segmentation and Proper Name Identification

The segmentation algorithm is a single-scan Reverse Maximum Matching (RMM) algorithm. One major difference from other RMM implementations is the special lexicon it uses. The lexicon consists of two parts, the indexing pointers and the main body of lexical entries (see Fig. 2).
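To make the matching direction concrete, here is a minimal reverse-maximum-matching sketch over a toy lexicon. It is an illustration only: the real system uses the two-part indexed lexicon described above, and the lexicon contents and the maximum word length assumed here (MAX_LEN = 4) are our own choices.

    # Reverse maximum matching: scan from the end of the sentence, always
    # taking the longest lexicon word that ends at the current position.
    LEXICON = {"中国", "经济", "发展"}  # toy lexicon; real entries are indexed
    MAX_LEN = 4  # assumed maximum word length in characters

    def rmm_segment(sentence):
        words = []
        end = len(sentence)
        while end > 0:
            for size in range(min(MAX_LEN, end), 0, -1):
                candidate = sentence[end - size:end]
                if size == 1 or candidate in LEXICON:
                    words.append(candidate)  # unmatched single characters pass through
                    end -= size
                    break
        words.reverse()  # matching ran right to left; restore reading order
        return words

    print(rmm_segment("中国经济发展"))  # -> ['中国', '经济', '发展']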
The algorithm works efficiently: the average number of comparisons needed to segment each word is only 2.89 (Liu et al., 1998). The unregistered single characters left behind by word segmentation become the targets of proper name recognition. To recognize Chinese person names we built a surname database and a given name database. Intuitively, a Chinese person name is formed by a leading surname followed by one or two given-name characters. The surname usually consists of one character, rarely two, so the length of a person name ranges from 2 to 4 characters. In the surname and given name databases, each character carries a possibility value obtained by calculating its frequency over a large name bank. Our person name recognition algorithm works as follows: when an unregistered single-character word is encountered during the scan of the segmented text, the algorithm checks (a) whether the character is a surname, and (b) whether it is followed by one or two single-character words. If both conditions are met, the two- or three-character string is likely a person name, denoted n = s c1 c2 (four-character names are temporarily omitted since they are rare). The possibility of n is calculated as p(n) = log p(s)p(c1) if there is a single given name, or p(n) = log p(s)p(c1)p(c2) if there are double given names. When calculating the possibilities, title words such as Mr. and Mrs. that appear immediately before n, and verbs that follow n, are also considered heuristically.

Transliterated foreign names differ from Chinese person names in that they draw on only a limited set of characters. The number of characters that can be used to transliterate names of foreign origin is about 400 to 500 (Sun, 1998). Within this set, one subset can be used only as the first character and another only as the last. Using this principle we defined a set of rules to mark the boundaries of foreign names, with satisfactory precision and recall.

Company name identification is also statistical and heuristic in nature. Based on observation and analysis of a large quantity of collected Chinese text, we concluded that most company names can be denoted by a BNF pattern consisting of a required leading part, an optional part, a choice between two alternatives, and a required final part:

    <…> + [<…>] + {<…> | <…>} + <…>

Thus, we built an FSM in which heuristic rules are introduced to allow the system to capture such text strings as company names.

Our initial evaluation of sample text databases indicates that approximately 3% of the original text consists of proper names of various kinds, of which the above two categories constitute more than 95%. This means we would lose 2.85% of segmentation accuracy if no action were taken to handle these names. The above procedures now achieve more than 96% accuracy, improving segmentation accuracy by 2.74%.
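A minimal sketch of the person-name scoring step follows. The frequency tables SURNAME_P and GIVEN_P are hypothetical stand-ins for the surname and given name databases, and the smoothing value for unseen characters is our own assumption, not the paper's.

    import math

    # Made-up character frequencies standing in for the name databases.
    SURNAME_P = {"李": 0.079, "王": 0.074, "张": 0.071}
    GIVEN_P = {"小": 0.012, "明": 0.015, "华": 0.011}
    UNSEEN = 1e-6  # assumed smoothing for characters absent from the tables

    def name_possibility(s, given):
        """p(n) = log p(s)p(c1) or log p(s)p(c1)p(c2), per the formula above."""
        p = SURNAME_P.get(s, UNSEEN)
        for c in given:  # one or two given-name characters
            p *= GIVEN_P.get(c, UNSEEN)
        return math.log(p)

    # Higher (less negative) scores indicate more plausible person names.
    print(name_possibility("李", "小明"))  # double given name
    print(name_possibility("王", "明"))    # single given name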
As mentioned above, proper names carry critical information in the original document, and incorporating them makes the summary more informative. Improved segmentation helps identify domain words more accurately. The identification of proper names also benefits the shallow parsing and improves the coherence and cohesion of the summary output. Note that although phrasal chunk identification is independent of segmentation, it is character based, not word based.

4 Phrasal Chunk Identification

The phrasal chunk identification algorithm locates new terms formed by two or more words that frequently occur in the input text. Suppose three two-character words are found in the input text and their frequencies all exceed a pre-defined threshold; we can then say that they are key words of the original text. This alone, however, does not mean that the whole phrasal chunk they form is also a key term. To establish that such a longer term or phrasal chunk is itself a key term, we have to show that the three words, i.e., the six characters, frequently appear in exactly the same sequence.

Our phrasal chunk identification algorithm uses a data structure called an Association Tree (A-Tree). A unique A-Tree can be constructed for each individual character Ci, using the character itself as the root of the respective tree; the tree is grown by expanding its leaves step by step, and the expansion is repeated until no leaf can be expanded, at which point the A-Tree of Ci is complete.

Figure 3: Phrasal chunk identification and an A-Tree

Once all A-Trees are constructed, new phrasal chunks can be extracted using an entropy measurement. By tracing from the root node to a leaf node we obtain a string of characters. For example, given a string a1 a2 … an b1 b2 … bm denoting two sub-strings A = a1 a2 … an and B = b1 b2 … bm, with a1 as the root, the entropy of B given A is

    H(B|A) = -log p(B|A).

For an A-Tree, the ratio |bm| / |an| is an estimate of p(B|A). The smaller the H value, the closer the relationship between the two sub-strings. A zero value means that B always follows A, suggesting that AB is a meaningful phrasal chunk.

For a string Γ = C0 C1 C2 … Cn, the entropy of C1 given C0 is H_C1 = -log p(C1|C0); given C0 C1, the entropy of C2 is H_C2 = -log p(C2|C0 C1). Thus, the total entropy measurement of Γ is defined as

    H_Γ = Σ_{i=0..n} H_Ci = -log p(C0 C1 … Cn),  where H_C0 = -log p(C0).
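The sketch below illustrates this measurement directly on character counts. It is a simplified stand-in for the A-Tree traversal: each conditional probability is estimated by the ratio of substring occurrence counts, the counting analogue of the ratio |bm| / |an|, and the function names and acceptance threshold are our own.

    import math

    def count(text, s):
        """Occurrences of substring s in text (overlaps allowed)."""
        return sum(1 for i in range(len(text) - len(s) + 1) if text.startswith(s, i))

    def chunk_entropy(text, chunk):
        """Total entropy H = -log p(C0 C1 ... Cn), accumulated term by term."""
        assert count(text, chunk) > 0, "chunk must occur in the text"
        h = -math.log(count(text, chunk[0]) / len(text))  # H_C0 = -log p(C0)
        for i in range(1, len(chunk)):
            # Adds -log p(C_i | C_0 .. C_{i-1}), estimated from prefix counts.
            h += -math.log(count(text, chunk[:i + 1]) / count(text, chunk[:i]))
        return h

    # Strings whose H value falls below a threshold are kept as phrasal chunks.
    THRESHOLD = 5.0  # assumed; the paper leaves its threshold unspecified
    text = "知识经济时代的知识经济研究推动知识经济发展"
    print(chunk_entropy(text, "知识经济") < THRESHOLD)  # rigid co-occurrence -> low H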
As shown in Fig. 3, three phrasal chunks are listed with their respective H values, the first bearing the lowest. The chunk identification algorithm collects, across all the A-Trees built from the input text, every phrasal chunk whose H value is below a certain threshold. These phrasal chunks are larger than a word and are likely to express the key content of the input.

5 Sentence Extraction Using Domain Knowledge

The significance score of a sentence is determined by the sum of two measurements, the density of domain concepts and the density of phrasal chunks. Suppose a sentence is denoted as S = U1 U2 U3 … UL, where Ui ∈ {F | W | K}, 1 ≤ i ≤ L.