{ "paper_id": "O06-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:04.735810Z" }, "title": "A Structural-Based Approach to Cantonese-English Machine Translation", "authors": [ { "first": "Yan", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "postCode": "150001", "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Xiukun", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "postCode": "150001", "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Caesar", "middle": [], "last": "Lun", "suffix": "", "affiliation": { "laboratory": "", "institution": "City University of Hong Kong", "location": { "addrLine": "83 Tat Chee Avenue", "settlement": "Kowloon, Hong Kong" } }, "email": "ctslun@cityu.edu.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present an integrated method for machine translation from Cantonese to English text. Our method combines example-based and rule-based methods that rely solely on example translations kept in a small Example Base (EB). One of the bottlenecks in example-based Machine Translation (MT) is a lack of knowledge, or redundant knowledge, in its bilingual knowledge base. In our method, a flexible comparison algorithm, based mainly on the content words in the source sentence, is applied to overcome this problem. It selects sample sentences from a small Example Base. The Example Base keeps only Cantonese sentences with different phrase structures; for sentences with the same phrase structure, the EB keeps only the simplest one. Target English sentences are constructed with rules and bilingual dictionaries. In addition, we provide a segmentation algorithm for MT. 
A feature of the segmentation algorithm is that it considers not only the source language itself but also its corresponding target language. Experimental results show that this segmentation algorithm can effectively decrease the complexity of the translation process.", "pdf_parse": { "paper_id": "O06-3003", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present an integrated method for machine translation from Cantonese to English text. Our method combines example-based and rule-based methods that rely solely on example translations kept in a small Example Base (EB). One of the bottlenecks in example-based Machine Translation (MT) is a lack of knowledge, or redundant knowledge, in its bilingual knowledge base. In our method, a flexible comparison algorithm, based mainly on the content words in the source sentence, is applied to overcome this problem. It selects sample sentences from a small Example Base. The Example Base keeps only Cantonese sentences with different phrase structures; for sentences with the same phrase structure, the EB keeps only the simplest one. Target English sentences are constructed with rules and bilingual dictionaries. In addition, we provide a segmentation algorithm for MT. A feature of the segmentation algorithm is that it considers not only the source language itself but also its corresponding target language. Experimental results show that this segmentation algorithm can effectively decrease the complexity of the translation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Although Machine Translation has been an important research topic for many years, the development of a useful Machine Translation system has been very slow. Researchers have found that developing a practical MT system is a very challenging task. 
Nevertheless, in our age of increasing internationalization, machine translation has a clear and immediate attraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are many methods for designing machine translation systems [Carl 1999; Carpuat 2005; Kit 2002b; Mclean 1992; Mosleh and Tang 1999; Somers 2000; Knight and Marcu 2005; Tsujii 1986; Brown 1997; Zhou et al. 1998; Zens 2004] , such as the rule-based method, knowledge-based method, and example-based method. In recent years, with the development of bilingual corpora, the example-based method has become a better choice than the rule-based method, although statistical MT systems are now able to translate across a wide variety of language pairs [Knight and Marcu 2005] . This is because the rule-based MT system has some disadvantages, such as a lack of robustness and poor rule coverage [Zhou and Liu 1997] . On the other hand, the large-scale, high-quality bilingual corpora are seldom readily available, so the example-based method has encountered a lot of problems in machine translation, such as a lack of sufficient example sentences and redundant example sentences. The good performance of an EBMT system depends on there being a sentence in the example base which is similar to the one that is to be translated. In contrast, an SMT system may be able to produce perfect translations even when the sentence given as input does not resemble any sentence in the training corpus. However, such a system may be unable to generate translations that use idioms and phrases that reflect long-distance dependencies and contexts, which are usually not captured by current translation models [Marcu 2001 ]. On the other hand, the example-based method can effectively solve the problem of insufficient knowledge that the rule-based method often encounters during the translation process [Chen and Chen 1995] . 
In view of this fact, a machine translation prototype system, called LangCompMT05, has been implemented. It integrates rule features, text understanding, and a corpus of example sentences.", "cite_spans": [ { "start": 65, "end": 76, "text": "[Carl 1999;", "ref_id": "BIBREF1" }, { "start": 77, "end": 90, "text": "Carpuat 2005;", "ref_id": "BIBREF2" }, { "start": 91, "end": 101, "text": "Kit 2002b;", "ref_id": null }, { "start": 102, "end": 114, "text": "Mclean 1992;", "ref_id": "BIBREF16" }, { "start": 115, "end": 136, "text": "Mosleh and Tang 1999;", "ref_id": "BIBREF17" }, { "start": 137, "end": 149, "text": "Somers 2000;", "ref_id": null }, { "start": 150, "end": 172, "text": "Knight and Marcu 2005;", "ref_id": null }, { "start": 173, "end": 185, "text": "Tsujii 1986;", "ref_id": null }, { "start": 186, "end": 197, "text": "Brown 1997;", "ref_id": "BIBREF0" }, { "start": 198, "end": 215, "text": "Zhou et al. 1998;", "ref_id": "BIBREF24" }, { "start": 216, "end": 226, "text": "Zens 2004]", "ref_id": null }, { "start": 548, "end": 571, "text": "[Knight and Marcu 2005]", "ref_id": null }, { "start": 691, "end": 710, "text": "[Zhou and Liu 1997]", "ref_id": "BIBREF23" }, { "start": 1492, "end": 1503, "text": "[Marcu 2001", "ref_id": "BIBREF14" }, { "start": 1686, "end": 1706, "text": "[Chen and Chen 1995]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, a brief review of the MT method is given first. This is followed by an introduction to the framework for LangCompMT05. In section 3, a detailed description of this system, whose implementation involves combining example-based and rule-based methods, is presented. Experimental results are discussed in section 4. The last section gives conclusions and discusses future work. 
Figure 1 shows the architecture of the LangCompMT05 system.", "cite_spans": [], "ref_spans": [ { "start": 390, "end": 398, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The implementation mechanism of the LangCompMT05 system is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "1) The source Cantonese sentence is segmented with a new segmentation algorithm, whose implementation is based on the word frequency, and the criterion for segmentation considers not only the source sentence itself but also its corresponding translation. The source sentence \"\u5979\u6709\u4e9b\u795e\u7d93\u904e\u654f\" (She is a little bit hypersensitive), for example, can be segmented as \"\u5979/\u6709\u4e9b/\u795e\u7d93/\u904e\u654f\" in general. Because \"\u795e\u7d93\u904e\u654f\" can be translated into the English word \"hypersensitive\", for MT, the sentence is segmented as \"\u5979/\u6709\u4e9b/\u795e\u7d93\u904e\u654f\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "2) The rule-based method is applied to analyze the source sentence, and its phrase structure is generated. The Rule Base (RB) of this system is established through analysis of the real corpus. The phrases are classified as noun phrases (NPs) or verb phrases (VPs). Some of the rules for phrases are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "NP= : [a] [n] | [m] (q) (n), VP= : [d] (v) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "Here, \"a\", \"n\", \"m\", \"q\", \"d\", and \"v\" denote adjective, noun, numeral, quantifier, adverb, and verb, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." 
}, { "text": "3) A new knowledge representation, called SST, is applied to store the sentence structure. The target sentence can be generated with this tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "4) The example-based method and rule-based method are combined and used to select, convert, and generate the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Constructs", "sec_num": "2." }, { "text": "The principle for classifying a Cantonese content word, such as \"\u55ae\uf902 (bike)\" or \"\u8fd4\u5de5 (go to work) \", is dependent not only on the syntactic features of the word but also its semantic features; for a function word, such as \"\u7684\", \"\u88ab\", or \"\u56e0\u6b64 (so)\", the principle for classification is only based on its syntactic features. 6) The understanding model of the system includes two parts: a word model and a phrase model. Both of them consist of six parts: a Cantonese word, a category, a frequency, and three corresponding English words: word1, word2, and word3. The phrase model has the same structure as the word model. Table 1 shows examples of these two models, where \"d\", \"c\", and \"v\" represent adverb, conjunction and verb, respectively. Figure 1 ) that can be maintained independently.", "cite_spans": [], "ref_spans": [ { "start": 613, "end": 620, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 735, "end": 743, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "5)", "sec_num": null }, { "text": "The system can translate written Cantonese into English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9)", "sec_num": null }, { "text": "The implementation of the LangCompMT05 system is composed of the following parts: an example base, dictionaries, rule bases, the main program and five additional function modules (see Figure 1 ). 
It integrates rule features, text understanding, and a corpus of example sentences. For the preprocessing stages, it uses a rule-based method to deal with the source sentence. Then, the EBMT method is used to select the translation template. In the target sentence construction stage, which involves the translation of sentence components, the system is mostly based on a rule-based method.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 192, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Implementation", "sec_num": "3." }, { "text": "Word segmentation is the basic task in many word-based applications, such as machine translation, speech processing, and information retrieval. Chinese word segmentation, being an interesting and challenging problem, has drawn much attention from many researchers [Hu 2004; Kit 2002a; Dunning 1993; Hou 1995; Liu 1994; Nie 1995] . We will present the segmentation algorithm in detail in another paper.", "cite_spans": [ { "start": 264, "end": 273, "text": "[Hu 2004;", "ref_id": null }, { "start": 274, "end": 284, "text": "Kit 2002a;", "ref_id": null }, { "start": 285, "end": 298, "text": "Dunning 1993;", "ref_id": null }, { "start": 299, "end": 308, "text": "Hou 1995;", "ref_id": null }, { "start": 309, "end": 318, "text": "Liu 1994;", "ref_id": "BIBREF13" }, { "start": 319, "end": 328, "text": "Nie 1995]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation Algorithm", "sec_num": "3.1" }, { "text": "Parts of speech can help us analyze the syntactic structure of a sentence, and they are fundamental to the understanding and transformation stages of MT. A knowledge base and rules are used to tag each Cantonese sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "The knowledge base consists of records that contain words and their parts-of-speech. After segmentation, all of the words in the source sentence are tagged. 
For ambiguous words that have more than one part-of-speech, the rules in RB 0 are used to perform disambiguation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "Suppose T = {n, np, m, q, r, v, a, p, w, d, u, f, c, t, b, g} is the tag set of the system, and A is the set of all Cantonese words. The formal presentation of the disambiguation rules is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\alpha \\chi \\beta \\rightarrow \\delta, \\quad \\chi \\subseteq T, \\delta \\in T, \\alpha, \\beta \\in (A \\cup T)^{*}", "eq_num": "(1)" } ], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "Here, \u03c7 is a subset of the POS set T, \u03b4 is an element of T, and \u03b1 and \u03b2 are null, a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "Cantonese word or an element of T. \u2192 denotes that if an ambiguous word whose candidate POS set is \u03c7 is preceded by POS \u03b1 and succeeded by POS \u03b2 , then it can be tagged as \u03b4. 
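The rule scheme in equation (1) can be sketched in code. The following is a minimal illustration under our own encoding, not the paper's implementation: the `(alpha, chi, beta, delta)` rule tuples, the `disambiguate` function, and the deterministic fallback are all assumptions.

```python
# Sketch of context-rule POS disambiguation (hypothetical encoding).
# A rule (alpha, chi, beta, delta) reads: an ambiguous word whose candidate
# POS set is chi, preceded by POS alpha and followed by POS beta, is tagged
# delta. None for alpha or beta means "no constraint on that side".

def disambiguate(tokens, rules):
    """tokens: list of (word, tags) pairs, where tags is a set of candidate POS."""
    result = []
    for i, (word, tags) in enumerate(tokens):
        if len(tags) == 1:
            result.append((word, next(iter(tags))))
            continue
        prev_tag = result[i - 1][1] if i > 0 else None          # already resolved
        next_tags = tokens[i + 1][1] if i + 1 < len(tokens) else set()
        tag = sorted(tags)[0]  # fallback: arbitrary but deterministic
        for alpha, chi, beta, delta in rules:
            if chi != set(tags):
                continue
            if alpha is not None and alpha != prev_tag:
                continue
            if beta is not None and beta not in next_tags:
                continue
            tag = delta
            break
        result.append((word, tag))
    return result

# The rule m {u, n} -> n from the text: after a numeral, a u/n-ambiguous word
# is a noun; m {u, q} -> q resolves a u/q-ambiguous word to a quantifier.
rules = [("m", {"u", "n"}, None, "n"), ("m", {"u", "q"}, None, "q")]
tokens = [("兩", {"m"}), ("地", {"u", "n"}), ("相距", {"n"}),
          ("三", {"m"}), ("哩", {"u", "q"})]
print(disambiguate(tokens, rules))
# → [('兩', 'm'), ('地', 'n'), ('相距', 'n'), ('三', 'm'), ('哩', 'q')]
```

Rules here fire only on an exact candidate-set match, mirroring the χ component of the rule; a real implementation would also need an ordering policy when several rules apply.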
For example, the POS rule ( m {u, n} \u2192 n ) means that if a word has the property of an auxiliary word (u) or a noun (n) and is preceded by a numeral (m), then it is a noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "The following is an example of this process:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "\uf978/m \u5730/(u,n) \u76f8\u8ddd/n \u4e09/m \u54e9/(u,q) \u2192 [by m {u, n} \u2192 n and m {u, q} \u2192 q]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "\uf978/m \u5730/n \u76f8\u8ddd/n \u4e09/m \u54e9/q (The distance between the two locations is 3 miles)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "\u4ed6/r \u9a0e/v \u55ae\uf902/n \u8ffd/v \u4e0a\uf92d/(u,v) \u2192 [by v {u, v} \u2192 u] \u4ed6/r \u9a0e/v \u55ae\uf902/n \u8ffd/v \u4e0a\uf92d/u (He catches up by bike) \u5979/r \u7d42\u65bc/d \u4e0a\u4f86/(u,v) \u4e86/u \u2192 [by d {u, v} u \u2192 v] \u5979/r \u7d42\u65bc/d \u4e0a\u4f86/v \u4e86/u (Finally, she comes up)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Tagging", "sec_num": "3.2" }, { "text": "The function of parsing is to identify the phrase structure of a sentence. At this stage, both the input and output sentences are parsed. This procedure works with parsing rules that have been generated from the corpus. These rules in RB 1 include the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "3.3" }, { "text": "S \u2192 NP . VP, NP \u2192 adjective . noun || article . noun || ... || noun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "3.3" }, { "text": "The sentence is scanned backwards from the end; i.e. the last two words of the sentence are checked first, then the two words before them, and so on until the first word of the sentence is reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "3.3" }, { "text": "After parsing, the system only needs to match the POS patterns. This procedure can reduce the searching time needed to identify the most similar example sentence in the EB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "3.3" }, { "text": "\u4ed6/r \u662f/v \u4e00\u500b/q \u5b78\u751f/n (He is a student) is parsed as S=[\u4ed6/r]NP[\u662f/v[\u4e00\u500b/q \u5b78\u751f/n]NP]VP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "Its parsing tree is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "After parsing, the sentence is converted into SST as shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "Definition 3. SST is a Binary Tree; it is used to store the natural language sentence. Let s=w 1 w 2 ... w n be a sentence: 1) w i is a root if and only if w i is the center word of the predicate in the sentence. 
2) w 1 ...w i-1 forms the left sub-tree of the root, while w i+1 ...w n forms the right sub-tree of the root.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "\u662f/v/VP \u4ed6/r/NP \u5b78\u751f/n/NP \u4e00\u500b/q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "3) The left sub-tree and the right sub-tree are formed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "a) If w 1 ...w i-1 or w i+1 ...w n is a sub-sentence, then go to 1). b) If w 1 ...w i-1 or w i+1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "..w n is a phrase, then the root of the sub-tree is the center word (or content word), while the following word is the modifier of the center word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "This type of knowledge representation can easily reflect the structure of a sentence, and can be implemented for the translation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example, a tagged Cantonese sentence", "sec_num": null }, { "text": "In general, an example-based MT system should address the following problems: 1) building the map relation of bilingual alignment, based on characters, words, phrases, sub-sentences or sentences;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "2) similarity calculation and example selection;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "3) constructing a 
target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "Among these problems, problem 2 is the most important one in example-based MT. Many researchers have focused on the above problems [Li 2005; Church 1994; Fung 1993; Carl 1999; FuRusE 1992; Mosleh 1999; Carl 1999 ] and tried to solve it in different ways.", "cite_spans": [ { "start": 131, "end": 140, "text": "[Li 2005;", "ref_id": "BIBREF12" }, { "start": 141, "end": 153, "text": "Church 1994;", "ref_id": "BIBREF5" }, { "start": 154, "end": 164, "text": "Fung 1993;", "ref_id": null }, { "start": 165, "end": 175, "text": "Carl 1999;", "ref_id": "BIBREF1" }, { "start": 176, "end": 188, "text": "FuRusE 1992;", "ref_id": null }, { "start": 189, "end": 201, "text": "Mosleh 1999;", "ref_id": "BIBREF17" }, { "start": 202, "end": 211, "text": "Carl 1999", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "For problem 2, our research addresses three important questions as follows: 1) Determining the matching level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "The matching level includes the sentence level and sub-sentence level. For the former, it is easy to determine the boundary of a sentence. Because the sentence can contain a certain number of messages, the possibility of having an exact match is very low, so the system lacks flexibility and robustness. In contrast, matching at the sub-sentence level has the advantage of exact matching and the disadvantage of boundary ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "In addition, there are no exact chunking or cover algorithms. 
Our matching algorithm is sentence-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "2) The algorithm for calculating the similarity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "There is no exact definition for the similarity between sentences. Many researchers have addressed this issue and presented similarity algorithms based on words. Some of the algorithms [e.g., Sergei 1993 ] firstly calculate the word similarity according to the word form, word meaning, and semantic distance of words, and then calculate the sentence similarity based on word similarity. Other algorithms [Brown 1997; Carl 1999; Markman et al. 1996; Mclean 1992; Mosleh et al. 1999; Zhang et al. 1995] are based on syntax rules, characters and hybrid methods.", "cite_spans": [ { "start": 192, "end": 203, "text": "Sergei 1993", "ref_id": "BIBREF19" }, { "start": 404, "end": 416, "text": "[Brown 1997;", "ref_id": "BIBREF0" }, { "start": 417, "end": 427, "text": "Carl 1999;", "ref_id": "BIBREF1" }, { "start": 428, "end": 448, "text": "Markman et al. 1996;", "ref_id": "BIBREF15" }, { "start": 449, "end": 461, "text": "Mclean 1992;", "ref_id": "BIBREF16" }, { "start": 462, "end": 481, "text": "Mosleh et al. 1999;", "ref_id": "BIBREF17" }, { "start": 482, "end": 500, "text": "Zhang et al. 
1995]", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "Our similarity algorithm is based on the phrases in the sentence; it has the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "a) The example base consists of a variety of sentences whose phrase structures are different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "b) The phrases of a sentence are the fundamental calculating cells for aligning the content words of the input sentence and example sentence, i.e., calculate the similarity between the same positional phrase in the input and example sentence. For the same positional phrases, the similarity calculation is based on the content words. This is based on the principle that in a natural language sentence, the content words form the framework of the sentence and depict the central meaning of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "c) The system does not need lexical, syntax, and semantic analysis to perform similarity comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "d) The system can deal with a variety of Cantonese inputs, such as sentences, sub-sentences, and phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "3) The efficiency of this algorithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "Normally, there will be a lot of example sentences in 
the example base. The algorithm proposed here has to calculate the similarity between the input sentence and every sentence in the example base, so the efficiency of the algorithm is very important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "The example base contains the different structures of Cantonese sentences. For sentences with the same structure, we select the shortest one as the example sentence. The example base therefore keeps the smallest number of sentences while maintaining the largest number of sentence structure types. In addition, the similarity algorithm is not recursive, which saves computing time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison and Example Selection", "sec_num": "3.4" }, { "text": "Each translation example in the example base consists of four components: a Cantonese sentence, a tagged Cantonese sentence, an English sentence, and a tagged English sentence. A Cantonese-English translation example is given as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Example Base", "sec_num": "3.4.1" }, { "text": "\u4ed6\u9a0e\u55ae\uf902\u8fd4\u5de5\u3002; \u4ed6/r \u9a0e/v \u55ae\uf902/n \u8fd4\u5de5/v\u3002/w; he goes to work by bike. he/He goes to work/V by/P bike/N ./W;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Example Base", "sec_num": "3.4.1" }, { "text": "In the example base, the four components of an example are stored independently, so there is no need to align the Cantonese and English sentences word by word. All the Cantonese sentences in the example base are segmented and tagged. Cantonese segmentation is based on the English translation, i.e. if the English translation is a phrase, then the corresponding Cantonese part is segmented as a single word, such as \"\u8fd4\u5de5\". 
The English sentence serves as the translation template, while the tagged Cantonese sentence and tagged English sentence are used to construct the target (see section 3-5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Example Base", "sec_num": "3.4.1" }, { "text": "Similarity comparison is used to choose the example Cantonese sentence in the example base that is most similar to the input sentence; its corresponding English translation then serves as the translation template for translating the input Cantonese sentence. The similarity of two sentences is calculated phrase by phrase over the parsed input sentence and the parsed example sentence. The parts-of-speech within the same phrase position, in the phrase structure pattern of the input sentence and in each example sentence in the bilingual corpus, are compared. In case of a mismatch between the parts-of-speech, a penalty score is incurred, and the comparison proceeds to the next part-of-speech within the same phrase. The score calculation progresses from the left-most phrase of the sentence to the last one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "In fact, the similarity comparison mechanism is mainly based on the content words in the sentence. The example base thus needs to store only Cantonese framework sentences. For sentences that have the same phrase structure, the shortest is stored, so as to avoid information redundancy in the example base. 
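The phrase-by-phrase comparison with penalty scores described above can be sketched as follows. This is a simplified illustration, not the system's actual scoring: the flat `MATCH`/`PENALTY` constants stand in for the weighted scores of equations (2)-(5), and all function and variable names are our own.

```python
# Sketch of phrase-by-phrase similarity scoring with mismatch penalties.
# A sentence is a list of phrases; each phrase is a list of POS tags.
# Simplified flat scores (+1 match, -1 penalty) stand in for the paper's
# weighted function-word / content-word scheme.

MATCH, PENALTY = 1.0, -1.0

def phrase_score(p_a, p_b):
    """Compare POS tags position by position within one phrase pair."""
    score = 0.0
    for i in range(max(len(p_a), len(p_b))):
        tag_a = p_a[i] if i < len(p_a) else None
        tag_b = p_b[i] if i < len(p_b) else None
        score += MATCH if tag_a == tag_b else PENALTY
    return score

def sentence_similarity(sent_a, sent_b):
    """Compare same-position phrases, from the left-most phrase to the last."""
    total = 0.0
    for i in range(max(len(sent_a), len(sent_b))):
        p_a = sent_a[i] if i < len(sent_a) else []
        p_b = sent_b[i] if i < len(sent_b) else []
        total += phrase_score(p_a, p_b)
    return total

inp = [["r"], ["v"], ["q", "n"]]   # 他 | 是 | 一個 學生
ex  = [["r"], ["v"], ["q", "n"]]   # a structurally identical example
print(sentence_similarity(inp, ex))  # → 4.0
```

A structurally identical example receives the maximum score, while a missing or mismatched tag in any phrase lowers the total, which is the behaviour the penalty scheme above is meant to capture.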
The mathematical model of this procedure is as follows [Wu and Liu 1999; Zhou and Liu 1997] :", "cite_spans": [ { "start": 370, "end": 387, "text": "[Wu and Liu 1999;", "ref_id": "BIBREF22" }, { "start": 388, "end": 406, "text": "Zhou and Liu 1997]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "Suppose A = w_1 w_2 ... w_n = p_A1 p_A2 ... p_Ak and B = w_1 w_2 ... w_m = p_B1 p_B2 ... p_Bl, where w_Ai (w_Bj) and p_Ai (p_Bj) are the i-th (j-th) Cantonese word and phrase, respectively, in sentence A (B).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "F is the whole feature set of a certain word category, E is a subset of F, and |E| stands for the number of features in E. fea_k(w), sub_pos(w), and pos(w) represent the k-th feature, sub-category, and part-of-speech of word w, respectively. Ss(S_1, S_2) represents the similarity metric between sentences S_1 and S_2; f(p_Ai) and f(p_Bi) are the function words, and c(p_Ai) and c(p_Bi) the content words, in phrases p_Ai and p_Bi, respectively; and len(p_Ai) and len(p_Bi) are the total numbers of words contained in phrases p_Ai and p_Bi, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Ss(S_1, S_2) = \\sum_{i=1}^{\\max(k,l)} Sp(p_{Ai}, p_{Bi})", "eq_num": "(2)" } ], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sp(p_{Ai}, p_{Bi}) = \\begin{cases} -len(p_{Bi}), & len(p_{Ai}) = 0 \\\\ -len(p_{Ai}), & len(p_{Bi}) = 0 \\\\ Sf(p_{Ai}, p_{Bi}) + Sc(p_{Ai}, p_{Bi}), & \\text{otherwise} \\end{cases}", "eq_num": "(3)" } ], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sf(p_{Ai}, p_{Bi}) = \\begin{cases} 1.5, & f(p_{Ai}) = f(p_{Bi}) \\neq 0 \\\\ 1.1, & pos(f(p_{Ai})) = pos(f(p_{Bi})), f(p_{Ai}) \\neq f(p_{Bi}) \\\\ -0.3, & f(p_{Ai}) \\neq 0, f(p_{Bi}) = 0 \\text{ (or vice versa)} \\\\ -0.6, & \\text{otherwise} \\end{cases}", "eq_num": "(4)" } ], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sc(p_{Ai}, p_{Bi}) = \\begin{cases} 1.5, & c(p_{Ai}) = c(p_{Bi}), f(p_{Ai}) = f(p_{Bi}) \\\\ 1.2, & pos(c(p_{Ai})) = pos(c(p_{Bi})), |E| \\geq 0.5|F| \\\\ 1.1, & pos(c(p_{Ai})) = pos(c(p_{Bi})), 0 < |E| < 0.5|F| \\\\ 1.0, & pos(c(p_{Ai})) = pos(c(p_{Bi})), |E| = 0 \\\\ 0.8, & pos(c(p_{Ai})) \\neq pos(c(p_{Bi})), pos(c(p_{Ai})), pos(c(p_{Bi})) \\in \\{n, r\\} \\\\ 0.6, & c(p_{Ai}) = c(p_{Bi}), f(p_{Ai}) \\neq f(p_{Bi}) \\\\ 0.4, & pos(c(p_{Ai})) = pos(c(p_{Bi})), f(p_{Ai}) \\neq f(p_{Bi}) \\\\ -1.5, & \\text{otherwise} \\end{cases}", "eq_num": "(5)"
} ], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "We set the weights in equations 4 and 5 based on the results of many experiments. We consider function words and content words equally important in sentence comparison, so an identical pair receives the same similarity score, i.e., 1.5, in both equations. In equation 4 (for function words), if the parts-of-speech of the function words in A i and B i are equal, we can simply exchange the function word in the example sentence with the source function word without affecting the translation sequence, so we give a high similarity score of 1.1. If there is a function word in A i but no function word in the corresponding location in B i , the structures of A i and B i are not equal, so we assign a negative similarity. Otherwise, the function words of A i and B i are totally different, and a lower negative weight is given. Equation 5 calculates content word similarity. Every content word has its own semantic features, which can be used to calculate similarity. If the parts-of-speech of the content words in A i and B i are equal and most of their features are equal, we give a high similarity weight of 1.2; if their identical features number less than half of the whole feature set F, we consider them to belong to different sub-categories and assign a weight of 1.1. If their features are totally unequal but their POSs are equal, the difference between A i and B i is semantic, so the weight is 1.0. If the parts-of-speech of the content words of A i and B i are not equal but both belong to {n, r}, this difference does not affect the translation sequence, so the weight is 0.8. When the content words of A i and B i are equal but the function words before them are not, the translation result may be affected, so a weight of 0.6 is given.
If the POSs of the content words in A i and B i are equal and the function words before them are not, their similarity is low, so the weight is 0.4. Otherwise, they are totally different; because the content word plays the main role in determining the meaning of the sentence, we give a weight of -1.5. This procedure calculates the similarity between the input sentence and every sentence in the example base and selects the example sentence with the highest score as the best match. If an input sentence matches both a fragment and a full sentence that contains (or does not completely contain) the fragment, or matches two examples that are syntactically identical but lexically different, the example sentence with the highest score is selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "The example base was created by Yu Shiwen of Beijing University, and more Cantonese sentence pairs have been added. There are now about 9,000 Cantonese-English sentence pairs, all annotated with parts-of-speech. The average sentence length is 11 characters for Cantonese and 14 words for English. Moreover, many sub-dictionaries of nouns, verbs, adjectives, pronouns, classifiers, prepositions, etc. are employed. Each of these dictionaries contains many specific features that are helpful for sentence comparison. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Comparison", "sec_num": "3.4.2" }, { "text": "This stage involves using the Cantonese and English phrase structure relations of the example translation as a template to build the target English sentence. The SST of the source Cantonese sentence contains the following types of nodes: 1) Bilingual corresponding Node (BN): it provides a correspondence between the example English sentence tree and translation template tree (see Figure 4 ).
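The piecewise weighting of equations 4 and 5 can be sketched in Python. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the `fn_equal` flag, the raw feature counts `shared`/`total` (standing for |E| and |F|), and the bare POS strings are hypothetical stand-ins for the system's feature dictionaries.

```python
def function_word_sim(fa, fb, pos_a, pos_b):
    """Equation (4): similarity weight for a pair of function words.
    A missing function word is represented as None."""
    if fa == fb:
        return 1.5          # identical function words
    if fa and fb and pos_a == pos_b:
        return 1.1          # same POS: the words can simply be exchanged
    if (fa is None) != (fb is None):
        return -0.3         # function word present on one side only
    return -0.6             # totally different function words


def content_word_sim(ca, cb, pos_a, pos_b, shared, total, fn_equal):
    """Equation (5): similarity weight for a pair of content words.
    shared/total are |E| and |F|; fn_equal says whether the function
    words preceding the two content words are equal."""
    if ca == cb:
        return 1.5 if fn_equal else 0.6
    if pos_a == pos_b:
        if not fn_equal:
            return 0.4
        if shared >= 0.5 * total:
            return 1.2      # most features shared
        if shared > 0:
            return 1.1      # fewer than half shared: different sub-categories
        return 1.0          # same POS, no shared features: semantic difference
    if {pos_a, pos_b} <= {"n", "r"}:
        return 0.8          # noun/pronoun mismatch keeps the word order
    return -1.5             # content words carry the meaning: strong penalty
```

The phrase score of equation 3 would then sum one function-word term and one content-word term per aligned phrase pair.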
The nodes \"\u662f(be)\", \"\u5b78\u751f(student)\", and \"\u4e00\u500b(a)\" belong to BN.", "cite_spans": [], "ref_spans": [ { "start": 382, "end": 390, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "2) Single corresponding Node (SN): this type of node only has a corresponding node in the example English sentence tree and has no corresponding node in the translation template tree. An example is the node \"\u6211(I)\" in the above source sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "3) Non-corresponding Node (NN): this type of node provides no correspondence between the example English sentence tree and translation template tree (see Figure 5 ). There are two types of NNs: a) NN c : the word depicted by this node is a content word. See the node \"\uf981\u5152(daughter)\" in the following example.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 162, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "b) NN f : the word depicted by this node is a function word. See the node \"\u548c(and)\" in the following example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "4) Tense Node (TN): this type of node can determine the tense of a target English sentence.
Table 2 shows Cantonese words that can represent the tense of the corresponding English sentence.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "Source sentence: \u662f/v \u6211/r \u5b78\u751f/n \u4e00\u500b/q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "Example sentence: \u662f/v \u4ed6/r \u5b78\u751f/n \u4e00\u500b/q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "Translation template: is/V he/R student/N a/T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "The future indefinite: \u6703(be able to), \u5c07(shall), \u5c31\u8981(going to), \u7d42\u5c06(eventually), \u5c07\u6703(will be able to), \u5373\u5c07(be about to), \u5c31\u6703(will be able to), \u5c31\u5feb(soon), \u5c31\uf92d(come soon), \u5feb\u8981(soon), \u660e\u65e5(tomorrow), \u660e\uf98e(next year)...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Construction", "sec_num": "3.5" }, { "text": "Type, Voice, and Mood Node (TVMN): this type of node can determine the voice and mood of a target English sentence. Table 3 shows Cantonese words that can determine the type, voice, and mood of the corresponding English sentence.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "5)", "sec_num": null }, { "text": "For the above different types of nodes in the SST, the system applies different replacement rules to translate the phrases stored in these nodes.
For the node BN, m=0; i.e., the system does not need any replacement action, because the source word already has a corresponding target word in the translation template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5)", "sec_num": null }, { "text": "For the node SN, replacement-action ::= look(ew), look(sw), repl(E-ew, E-sw).", "cite_spans": [ { "start": 65, "end": 77, "text": "(E-ew, E-sw)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "Here, look is the action of looking up the bilingual dictionary; repl is the action of replacing words in the translation template; ew and sw are the Cantonese words in the example sentence and source sentence, respectively; E-ew and E-sw are the English words corresponding to ew and sw, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "For the node NN, replacement-action ::= look(sw), loca(sw), inst(E-sw).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "Here, loca is the action of determining where to insert E-sw in the translation template; inst is the action of inserting E-sw in the translation template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "For the node TN, replacement-action ::= look(sw v ), chan(E-sw v ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3.
The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "Here, sw v is the current verb in the source sentence, and chan is the action of changing E-sw v : for example, E-sw v + \"ing\" for the present continuous tense, E-sw v + \"ed\" for the past tense, \"will\" + E-sw v for the future tense, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "For the node TVMN, replacement-action ::= recv(E-sw v ), chan(tran-template).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "Here, recv is the action of recovering the verb of the template, and chan(tran-template) is the action of changing the voice of the translation template, such as \"do\" + subj + verb, \"will\" + subj + verb, or \"have\" + subj + verb for an interrogative sentence, or \"do not\" + verb, \"did not\" + verb for a negative sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "The process of target construction can be described as follows (see Figure 6 for an example): 1) Recovering the words in the translation template: Because the criterion of similarity matching is based on content words, and because in a Cantonese sentence the function words determine the word-form changes of the corresponding English sentence, when the system retrieves an example sentence from the example base, the chance that its tense and voice differ from those of the source sentence is quite high.
So the system first deletes the tense and voice of the translation template and then adds the tense and voice corresponding to the source sentence.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Table 3. The correspondence between English sentence types and Cantonese words.", "sec_num": null }, { "text": "Translation template: he worked in the factory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "After recovery: he work in the factory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "2) The replacement rules are applied to change the translation template and generate the target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "3) Experimental results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "The LangCompMT05 system was implemented in MS Visual C++ for Windows. Users can easily interact with the system to perform translation. Table 4 lists some experimental results. They indicate that the accuracy of the system is 80.6% (see Table 5 ). The test sentences were created by the authors. Four translation experts manually scored the system's translation results. The score range was from 0 to 100, and we obtained the accuracy of the system by averaging the scores.
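The template-recovery step above (strip the template verb's tense, then re-apply the tense signalled by the Cantonese source) can be sketched as follows. The marker sets are a small subset of Table 2, and the suffix-based verb handling is a deliberately naive placeholder that ignores irregular verbs; none of these names come from the actual system.

```python
import re

# Illustrative subsets of the Table 2 tense markers (hypothetical data).
PAST_MARKERS = {"過", "了", "昨日", "以前"}
FUTURE_MARKERS = {"將", "會", "就要", "明日"}
PROGRESSIVE_MARKERS = {"正", "正在"}


def recover(template_verb):
    """Strip tense morphology from the template verb, e.g. 'worked' -> 'work'."""
    return re.sub(r"(ed|ing)$", "", template_verb)


def apply_tense(template_verb, source_words):
    """Re-apply the tense signalled by the source sentence (the chan action)."""
    base = recover(template_verb)
    if PAST_MARKERS & set(source_words):
        return base + "ed"              # naive: ignores irregular verbs
    if FUTURE_MARKERS & set(source_words):
        return "will " + base
    if PROGRESSIVE_MARKERS & set(source_words):
        return "is " + base + "ing"
    return base
```

For instance, the template verb "worked" combined with a source sentence containing 將 yields "will work", mirroring the worked/work example in the text.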
The average translation time per sentence was 36 seconds.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 4", "ref_id": null }, { "start": 237, "end": 244, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "Most of the translation errors are due to the following cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "1) The preposition or noun in a sentence is replaced with an incorrect word. The correct translation for \"\u5728\u684c\u4e0a\" is \"on the desk\", not \"in the desk\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "2) Some Cantonese phrasal words have no corresponding English words. \"\u6025\u6025\u8173\", for example, is a special Cantonese phrasal word. An insufficient knowledge base is the cause of most of the problems in natural language processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "3) Segmentation errors also cause translation errors. For example, \"\u662f/\u975e\u5e38/\u5e38/\u6df7\u6dc6(Is extremely confused)\", \"\u5979/\u662f/\u975e\u5e38/\u6f02\uf977/\u7684 (She is very pretty)\". 4) POS errors also cause translation errors. POS tagging is mainly statistics-based, and it selects the categories that occur most often in the corpus. For example, \"\u66f8/n \u5728/p \u684c/n \u4e0a/u (The book is on the desk)\", \"\u4ed6/r \u4e0a/u \u5c71/n (He is climbing up the mountain)\".
This type of error can be solved by means of syntactic analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For example,", "sec_num": null }, { "text": "Source sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Determining each type of node", "sec_num": null }, { "text": "\u5de5\u4f5c/v /BN \u5c06/d /TN \u548c/c /NN f \uf963\u4eac/np/SN C \u6211/r/NN \u5979/r/BN \u5728/p /BN", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Determining each type of node", "sec_num": null }, { "text": "Example sentence: The price changes according to the market reaction. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Determining each type of node", "sec_num": null }, { "text": "\u5de5\u4f5c/v \u6b63/d \u5979/r \u4e0a\u6d77/np \u5728/p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Determining each type of node", "sec_num": null }, { "text": "We have proposed an integrated method for Cantonese-English machine translation that makes use of morphological knowledge, syntax analysis, translation examples, and target-generation-based rules. The principles and algorithms used in this MT system have been well tested. The source sentence is segmented first, then tagged and parsed, and the SST of the source sentence is formed as its structural representation. Finally, using computational linguistic methods, an example sentence is selected from the EB; its corresponding English translation is used as the translation template, and the target sentence (English) is generated based on rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." }, { "text": "Machine translation, especially in the Cantonese-English domain, is quite a difficult task.
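The overall pipeline summarized above (segment, tag, parse, match against the EB, then rule-based generation from the English template) can be illustrated with a drastically simplified toy run. The one-entry example base, the bilingual dictionary, the word-for-word alignment, and the POS-counting similarity are all hypothetical stand-ins for the system's components.

```python
# Toy pipeline: match a tagged source sentence against a (one-entry) example
# base, then build the target sentence from the English template.
EXAMPLE_BASE = [
    # (segmented + POS-tagged Cantonese example, English translation template)
    ((("佢", "r"), ("是", "v"), ("一個", "q"), ("工人", "n")),
     ["she", "is", "a", "worker"]),
]
BILINGUAL_DICT = {"他": "he", "佢": "she", "學生": "student", "工人": "worker"}


def similarity(src, ex):
    """Crude stand-in for equations (2)-(5): count matching POS tags."""
    return sum(1 for (_, pa), (_, pb) in zip(src, ex) if pa == pb)


def translate(src_tagged):
    example, template = max(EXAMPLE_BASE,
                            key=lambda pair: similarity(src_tagged, pair[0]))
    target = list(template)
    # SN-style replacement: swap out words that differ from the example,
    # assuming (only in this toy) a word-for-word alignment with the template.
    for i, ((sw, _), (ew, _)) in enumerate(zip(src_tagged, example)):
        if sw != ew:
            target[i] = BILINGUAL_DICT.get(sw, target[i])
    return " ".join(target)


print(translate((("他", "r"), ("是", "v"), ("一個", "q"), ("學生", "n"))))
# → he is a student
```

The real system replaces the word-for-word alignment with the SST node types (BN, SN, NN, TN, TVMN) and their replacement actions.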
Based on our research on the LangCompMT05 system, we have proposed an integrated MT method that is mainly based on example-based machine translation, and we believe that this integrated method is feasible for solving many translation problems. With this computational method, we find that it is possible to acquire bilingual knowledge from a small-scale, representative EB. We have proposed a number of algorithms, including a Cantonese segmentation algorithm, a similarity calculation algorithm, and a target sentence construction algorithm. We have created databases that contain many Cantonese words and related information. For example, our Cantonese dictionary contains part-of-speech and word frequency information. The EB stores many Cantonese-English sentence pairs that have been segmented and tagged with POSs. The bilingual dictionary stores Cantonese words and their corresponding English words. These information sources will be valuable for the future development of other NLP systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "4." } ], "back_matter": [ { "text": "She has gone to Beijing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automated Dictionary Extraction for Knowledge-Free Example-Based Translation", "authors": [ { "first": "R", "middle": [ "D" ], "last": "Brown", "suffix": "" } ], "year": 1997, "venue": "Proceedings Of the seventh International Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "23--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown, R. D., \"Automated Dictionary Extraction for Knowledge-Free Example-Based Translation,\" In Proceedings of the Seventh International Conference on Theoretical and Methodological Issues in Machine Translation, Santa Fe, 1997, pp.
23-25.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Inducing Translation Templates for Example-based Machine Translation", "authors": [ { "first": "M", "middle": [], "last": "Carl", "suffix": "" } ], "year": 1999, "venue": "proceedings of Machine Translation Summit VII99", "volume": "", "issue": "", "pages": "250--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl, M., \"Inducing Translation Templates for Example-based Machine Translation,\" In proceedings of Machine Translation Summit VII99, 1999, pp. 250-258.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word Sense Disambiguation vs. Statistical Machine Translation", "authors": [ { "first": "M", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2005, "venue": "43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005)", "volume": "", "issue": "", "pages": "58--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carpuat, M., and D. Wu, \"Word Sense Disambiguation vs. Statistical Machine Translation,\" 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005). Ann Arbor, MI: Jun 2005, pp. 58-75.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Machine Translation: An Integrated Approach", "authors": [ { "first": "K", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "Chen", "middle": [ "H H" ], "last": "", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "287--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K.H., and Chen H.H., \"Machine Translation: An Integrated Approach,\" In Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation, 1995, pp. 
287-294.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Study on Word Similarity using Context Vector Models", "authors": [ { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "You", "suffix": "" } ], "year": 2002, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "7", "issue": "2", "pages": "37--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K., and J. You, \"A Study on Word Similarity using Context Vector Models,\" International Journal of Computational Linguistics and Chinese Language Processing, 7(2), 2002, pp. 37-58.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Aligning Parallel Texts: Do methods Developed for English-French generalization Asia Language?", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K., \"Aligning Parallel Texts: Do methods Developed for English-French generalization Asia Language?\" Technical Reported from Tsinghua University, 1994.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "K-vec: A New Approach for Aligning Parallel Texts", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Chen", "suffix": "" } ], "year": null, "venue": "COLING-94", "volume": "", "issue": "", "pages": "1096--1104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung, P., and K. W. 
Chen, \"K-vec: A New Approach for Aligning Parallel Texts,\" COLING-94, pp. 1096-1104.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An Example-Based Method for Transfer-Driven MT", "authors": [ { "first": "O", "middle": [], "last": "Furuse", "suffix": "" }, { "first": "H", "middle": [], "last": "Iida", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "139--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Furuse, O., and H. Iida, \"An Example-Based Method for Transfer-Driven MT,\" TMI-92, pp. 139-148.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Ambiguities in Automatic Chinese Word-Segmentation", "authors": [ { "first": "M", "middle": [], "last": "Hou", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Sun", "suffix": "" }, { "first": "Z", "middle": [ "X" ], "last": "Chen", "suffix": "" } ], "year": 2001, "venue": "Proceedings of 3rd national conference on computing linguistics", "volume": "", "issue": "", "pages": "81--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hou, M., J. J. Sun, and Z. X. Chen, \"Ambiguities in Automatic Chinese Word-Segmentation,\" In Proceedings of the 3rd National Conference on Computational Linguistics, 2001, pp. 81-87.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning case-based knowledge for disambiguating Chinese word segmentation: A preliminary study", "authors": [ { "first": "C", "middle": [], "last": "Kit", "suffix": "" }, { "first": "K", "middle": [], "last": "Pan", "suffix": "" }, { "first": "H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2002, "venue": "COLING2002 workshop:SIGHAN-1", "volume": "", "issue": "", "pages": "33--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kit, C., K. Pan, and H. Chen, \"Learning case-based knowledge for disambiguating Chinese word segmentation: A preliminary study,\" In COLING 2002 workshop SIGHAN-1, 2002, pp.
33-39.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Example-based machine translation: A new paradigm", "authors": [ { "first": "C", "middle": [], "last": "Kit", "suffix": "" }, { "first": "H", "middle": [], "last": "Pan", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Webster", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "57--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kit, C., H. Pan, and J. J. Webster, \"Example-based machine translation: A new paradigm,\" Translation and Information Technology, ed. By S.W. Chan, translation department, Chinese University of HK Press, Hong Kong, 2000, pp. 57-78.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Machine Translation in the Year 2004", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K., and D. Marcu, \"Machine Translation in the Year 2004,\" In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, March 18-23,2005, pp. 45-50.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Similarity Based Chinese Synonym Collocation Extraction", "authors": [ { "first": "W", "middle": [], "last": "Li", "suffix": "" }, { "first": "Q", "middle": [], "last": "Lu", "suffix": "" }, { "first": "R", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "1", "pages": "123--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, W., Q. Lu, and R. 
Xu, \"Similarity Based Chinese Synonym Collocation Extraction,\" International Journal of Computational Linguistics and Chinese Language Processing, 10(1), March 2005, pp. 123-144.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Contemporary Chinese Language Word Segmentation Specification for Information Processing and Automatic Word Segmentation Methods", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Q", "middle": [ "K" ], "last": "Tan", "suffix": "" }, { "first": "X", "middle": [], "last": "Shen", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Y., Q. K. Tan, and X. Shen, Contemporary Chinese Language Word Segmentation Specification for Information Processing and Automatic Word Segmentation Methods, Tsinghua University Press, Beijing, 1994.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Towards a Unified Approach to Memory- and Statistical-Based Machine Translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL-2001", "volume": "", "issue": "", "pages": "59--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcu, D., \"Towards a Unified Approach to Memory- and Statistical-Based Machine Translation,\" In Proceedings of ACL-2001, Toulouse, France, July 2001, pp. 59-70.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Commonalties and Differences in Similarity Comparisons", "authors": [ { "first": "B", "middle": [ "A" ], "last": "Markman", "suffix": "" }, { "first": "D", "middle": [], "last": "Gentner", "suffix": "" } ], "year": 1996, "venue": "Memory and Cognition", "volume": "24", "issue": "2", "pages": "235--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markman, B.A., and D. Gentner, \"Commonalties and Differences in Similarity Comparisons,\" Memory and Cognition, 24(2), 1996, pp.
235-249.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Example-based Machine Translation Using Connectionist Matching", "authors": [ { "first": "I", "middle": [], "last": "Mclean", "suffix": "" } ], "year": 1992, "venue": "Proceedings Of TMI-92", "volume": "", "issue": "", "pages": "35--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mclean, I., \"Example-based Machine Translation Using Connectionist Matching,\" In Proceedings Of TMI-92, Montreal, 1992, pp. 35-43.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Example-based Machine Translation Based on the Synchronous SSTC Annotation Schema", "authors": [ { "first": "H", "middle": [ "A A" ], "last": "Mosleh", "suffix": "" }, { "first": "E", "middle": [ "K" ], "last": "Tang", "suffix": "" } ], "year": 1999, "venue": "Proceedings of Machine Translation Summit VII'99", "volume": "", "issue": "", "pages": "244--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mosleh, H. A. A., and E. K. Tang, \"Example-based Machine Translation Based on the Synchronous SSTC Annotation Schema,\" In Proceedings of Machine Translation Summit VII'99, 1999, pp. 244-249.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge", "authors": [ { "first": "J", "middle": [ "Y" ], "last": "Nie", "suffix": "" }, { "first": "M.-L", "middle": [], "last": "Hannan", "suffix": "" }, { "first": "W", "middle": [ "Y" ], "last": "Jin", "suffix": "" } ], "year": 1995, "venue": "Communications of COLIPS", "volume": "5", "issue": "1&2", "pages": "47--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nie, J. Y., M.-L. Hannan, and W. Y. Jin, \"Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge,\" Communications of COLIPS, 5(1&2), 1995, pp. 
47-57.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Two Approaches to Matching in EBMT", "authors": [ { "first": "N", "middle": [], "last": "Sergei", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "47--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergei, N., \"Two Approaches to Matching in EBMT,\" TMI-93, 1993, pp. 47-57.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Example-based machine translation", "authors": [ { "first": "H", "middle": [ "L" ], "last": "Somers", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "611--627", "other_ids": {}, "num": null, "urls": [], "raw_text": "Somers, H. L., \"Example-based machine translation,\" eds. R. Dale, H. Moisl and H. Somers, New York, pp. 611-627.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Future Directions of Machine Translation", "authors": [ { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": null, "venue": "Proceedings of the 11th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "80--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsujii, J., \"Future Directions of Machine Translation,\" In Proceedings of the 11th International Conference on Computational Linguistics, Bonn, pp. 80-86.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Cantonese-English Machine Translation System PolyU-MT-99", "authors": [ { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1999, "venue": "Proceedings of Machine Translation Summit VII 99", "volume": "", "issue": "", "pages": "481--486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Y. and J. Liu, \"A Cantonese-English Machine Translation System PolyU-MT-99,\" In Proceedings of Machine Translation Summit VII 99, Singapore, 1999, pp.
481-486.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Similarity Comparison between Chinese Sentences", "authors": [ { "first": "L", "middle": [ "N" ], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [ "W" ], "last": "Yu", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ROLING'97", "volume": "", "issue": "", "pages": "277--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, L.N., J. Liu, and S. W. Yu, \"Similarity Comparison between Chinese Sentences,\" In Proceedings of ROLING'97, Taiwan, 1997, pp. 277-281.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Study and implementation of combined techniques for automatic extraction of word translation pairs: An analysis of the contributions of word heuristics to a statistical method", "authors": [ { "first": "L", "middle": [ "N" ], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [ "W" ], "last": "Yu", "suffix": "" } ], "year": 1998, "venue": "International Journal on Computer Processing of Oriental Languages", "volume": "11", "issue": "4", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, L.N., J. Liu, and S. W. Yu, \"Study and implementation of combined techniques for automatic extraction of word translation pairs: An analysis of the contributions of word heuristics to a statistical method,\" International Journal on Computer Processing of Oriental Languages, 11(4), 1998, pp. 
339-351.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A Word-Based Approach for Measuring the Similarity between two Chinese Sentence", "authors": [ { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" }, { "first": "T", "middle": [ "J" ], "last": "Zhao", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 1995, "venue": "Proceedings of national conference of 3rd Computational Linguistics", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, M., S. Li, T. J. Zhao, and M. Zhou, \"A Word-Based Approach for Measuring the Similarity between two Chinese Sentence,\" In Proceedings of national conference of 3rd Computational Linguistics, Beijing, 1995, pp. 152-158.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Reordering constraints for phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "T", "middle": [], "last": "Sumita", "suffix": "" } ], "year": null, "venue": "Proceedings of COLING-2004", "volume": "", "issue": "", "pages": "23--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zens, R., H. Ney, T. Watanabe, and T. Sumita, \"Reordering constraints for phrase-based statistical machine translation,\" In Proceedings of COLING-2004, Geneva,Switzerland4, pp. 23-29.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1. 
The architecture of the LangCompMT05 system", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "The parsing tree of a sentence", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "For example: \uf901\u591a\u696d\u5167\u4eba\u58eb /NP \uf95a\uf9ba/VP \u9019\u500b\u898f\u5b9a /NP (More professional people have read the regulation.) \u5b78\u751f\u5011/NP \u501f\uf9ba/VP \u4f60\u7684\uf9fe\u58fa /NP (Students borrowed your teapot.)", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "the parsed Cantonese sentence \"S=[\u4ed6/r]NP[\u662f/v[\u4e00\u500b/q \u5b78\u751f/n]NP]VP(He is a student)\", the example sentence could be \"S=[\u5979/r]NP[\u662f/v[\u4e00\u500b/q \u5de5\u4eba/n]NP]VP (She is a worker)\".", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "An example of a BN in the SST.", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "An example of an NN in the SST. Table 2. The correspondence between English sentence tense and Cantonese words. English sentence tense Corresponding Cantonese words The present continuous \u6b63(just), \u6b63\u5728(in progress of), \u5373\u6642(at present), \u5373\u523b(immediately), \u5728\u9032\ufa08(in progress)... The present perfect \u5df2(already), \u5df2\u7d93(already), \u7d93\u5df2(already), \u66fe\u7d93(ever) ... The past indefinite \u904e(over), \uf9ba(end), \u904e\u53bb(past), \u4ee5\u5f80(previously), \u4ee5\u524d(ago), \u4ece\u524d (aforetime), \u4e0a\u6b21(last time), \u6628\u65e5(yesterday) ...", "uris": null, "num": null }, "FIGREF6": { "type_str": "figure", "text": "\uf9fd\u9ebc? 
(what), \u5462?, \u54ea(which), \u54ea\u4e9b(which kind of), \u54ea\u6a23(which kind of), \u54ea\uf9e8(where), \u662f\u5426(whether), \u600e\u9ebc (how), \u600e\u6a23(what about), \u600e\u53ef(why) The imperative sentence v+...+\u5475!, v+...+\u5427!, v+...+\u7f77!, \u7981\u6b62(forbid), \uf967\u8981(don't), \uf967\u51c6(disapprove), \u5225(do not), \uf967\u8a31(disallow) The exclamatory sentence \u554a !(oh), \u5427 !, \u5509 !(alas), \u5440 !(oh!), \u54c7 , \u5475 , \u591a \u9ebc +...+!(how+...+!), \u5566!,... The negative sentence \uf967(not), \u6c92(no), \uf967\u8a31(disallow), \uf967\u8981(not), \uf967\u51c6(not), \u5225(not), \uf967\u53ef(cannot), \uf967\u80fd(cannot), \uf967\u5f97(need not), \uf967 \u9867(in spite of), \u5225\u8981(must not), ... The passive voice sentence \u88ab(be), \u906d(by), \u906d\u4eba(by someone), \u906d\u5230(be), \u906d\u53d7(be), \u53d7\u5230(by) ...... in RB 2 are formulated as follows: Rule ::= fore-condition | replacement-action; fore-condition ::= condition 1 |condition 2 |...|condition n ; replacement-action ::= action 1 ,action 2 ,...,action m .", "uris": null, "num": null }, "TABREF0": { "html": null, "content": "
AttributeExample1Example2
Cantonese word\u53ea\u662f\u6307\u65e5\u53ef\u5f85
Categoryd, c, vv
English word1OnlyCan be expected soon
English word2However
English word3be only
Frequency0.024160.00046
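The dictionary attributes above map naturally onto a small record. The following is a minimal sketch of one bilingual dictionary entry, assuming one English gloss per category reading; the field names are illustrative, not taken from the system:

```python
# Hypothetical encoding of the "Example1" dictionary entry from the table:
# each POS category (d = adverb, c = conjunction, v = verb) pairs with one
# English gloss, and the entry carries a corpus frequency.
entry = {
    "cantonese": "只是",
    "categories": ["d", "c", "v"],               # possible POS readings
    "english": ["only", "however", "be only"],   # gloss per category
    "frequency": 0.02416,
}

# One gloss per category reading keeps lookup unambiguous.
gloss_for = dict(zip(entry["categories"], entry["english"]))
```

With this layout, `gloss_for["d"]` retrieves the adverbial reading "only".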
7) The example model consists of four parts: a Cantonese sentence, a tagged Cantonese
sentence, a corresponding English sentence, and a tagged corresponding English
sentence.
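The four-part example model can be sketched as a record type. This is a minimal illustration, assuming field names of our own choosing; the tagged Cantonese string follows the bracketed format used elsewhere in the paper, while the English tagging shown is an assumed Penn-style annotation, since the paper does not give its English tag format:

```python
from dataclasses import dataclass

@dataclass
class TranslationExample:
    """One Example Base entry: a Cantonese sentence, its POS-tagged
    form, the English translation, and its tagged form.  Field names
    are illustrative assumptions, not from the original system."""
    cantonese: str          # raw Cantonese sentence
    cantonese_tagged: str   # bracketed, POS-tagged parse
    english: str            # corresponding English sentence
    english_tagged: str     # tagged English sentence (format assumed)

# Entry built from the paper's running example "He is a student":
ex = TranslationExample(
    cantonese="他是一個學生",
    cantonese_tagged="[他/r]NP[是/v[一個/q 學生/n]NP]VP",
    english="He is a student",
    english_tagged="He/PRP is/VBZ a/DT student/NN",  # assumed tagging
)
```

Keeping both the raw and tagged forms lets the matcher compare phrase structures while the generator still has the surface strings.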
8) The system is portable and extensible. Its dictionaries, rule bases, and algorithms are
kept in separate modules (see
", "type_str": "table", "num": null, "text": "" }, "TABREF5": { "html": null, "content": "
Sp(p Ai , p Bi ) is the similarity score between phrases p Ai and p Bi ; c Ai p and c Bi p are the content
words in phrases p Ai and p Bi , respectively; and f Ai p and f Bi p are the function words before them.
Equation (5) defines Sp(p Ai , p Bi ) piecewise: the score is reduced by 0.5 when the words before
c Ai p and c Bi p are function words that are not equal while POS(c Ai p ) = POS(c Bi p ); otherwise the
score is left unchanged. (5)
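The penalty condition above can be sketched in code. This is a minimal illustration only, not the paper's implementation: the base score and the 0.5 penalty weight are assumptions, and the `(word, POS, is_function_word)` phrase representation is hypothetical.

```python
def phrase_similarity(phrase_a, phrase_b, base=1.0, penalty=0.5):
    """Sketch of the phrase score Sp(pAi, pBi).  Phrases are aligned
    lists of (word, pos, is_function_word) triples.  Following the
    condition in Eq. (5): lower the score when the function words
    immediately before two compared content words are not equal,
    although the content words share the same POS tag."""
    score = base
    for i, ((wa, pa, fa), (wb, pb, fb)) in enumerate(zip(phrase_a, phrase_b)):
        if fa or fb:
            continue  # only content words anchor the comparison
        # function word immediately preceding each content word, if any
        prev_a = phrase_a[i - 1] if i > 0 and phrase_a[i - 1][2] else None
        prev_b = phrase_b[i - 1] if i > 0 and phrase_b[i - 1][2] else None
        if prev_a and prev_b and prev_a[0] != prev_b[0] and pa == pb:
            score -= penalty  # differing function words, same content POS
    return score
```

For example, two phrases whose content noun matches but whose preceding prepositions differ (在學校 vs. 從學校) score 0.5 instead of the base 1.0.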
", "type_str": "table", "num": null, "text": "Ai ,p Bi ) is the similarity score between phrases p Ai and p Bi ; c" }, "TABREF7": { "html": null, "content": "
Source sentence typeNumber of test sentences Translation accuracy (%)
Positive10081.0%
Negative8082.2%
Passive5081.6%
Present tense5084.0%
Descriptive sentencePresent continuous tense3583.6%
Present perfect tense9079.9%
Future indefinite tense4082.9%
Present tense6578.9%
Interrogative sentencePresent continuous tense7080.6%
Present perfect tense6080.8%
Future indefinite tense5075.5%
ImperativePositive8079.7%
sentenceNegative4576.8%
Exclamatory sentence5081.9%
Total86580.6%
", "type_str": "table", "num": null, "text": "" } } } }