{
"paper_id": "O06-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:07:48.324489Z"
},
"title": "Learning to Parse Bilingual Sentences Using Bilingual Corpus and Monolingual CFG",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"settlement": "HsinChu",
"country": "Taiwan, Taiwan"
}
},
"email": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"settlement": "HsinChu",
"country": "Taiwan"
}
},
"email": "jason.jschang@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new method for learning to parse a bilingual sentence using Inversion Transduction Grammar trained on a parallel corpus and a monolingual treebank. The method produces a parse tree for a bilingual sentence, showing the shared syntactic structures of individual sentence and the differences of word order within a syntactic structure. The method involves estimating lexical translation probability based on a word-aligning strategy and inferring probabilities for CFG rules. At runtime, a bottom-up CYK-styled parser is employed to construct the most probable bilingual parse tree for any given sentence pair. We also describe an implementation of the proposed method. The experimental results indicate the proposed model produces word alignments better than those produced by Giza++, a state-of-the-art word alignment system, in terms of alignment error rate and F-measure. The bilingual parse trees produced for the parallel corpus can be exploited to extract bilingual phrases and train a decoder for statistical machine translation.",
"pdf_parse": {
"paper_id": "O06-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new method for learning to parse a bilingual sentence using Inversion Transduction Grammar trained on a parallel corpus and a monolingual treebank. The method produces a parse tree for a bilingual sentence, showing the shared syntactic structures of individual sentence and the differences of word order within a syntactic structure. The method involves estimating lexical translation probability based on a word-aligning strategy and inferring probabilities for CFG rules. At runtime, a bottom-up CYK-styled parser is employed to construct the most probable bilingual parse tree for any given sentence pair. We also describe an implementation of the proposed method. The experimental results indicate the proposed model produces word alignments better than those produced by Giza++, a state-of-the-art word alignment system, in terms of alignment error rate and F-measure. The bilingual parse trees produced for the parallel corpus can be exploited to extract bilingual phrases and train a decoder for statistical machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The amount of information available in English on the Internet has grown exponentially for the past few years. Although a myriad of data are at our disposal, non-native speakers often find it difficult to wade through all of it since they may not be familiar with the terms or idioms being used in the texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1."
},
{
"text": "To ease the situation, a number of online machine translation (MT) systems such as SYSTRAN and Google Translate provide translation of source text on demand. Moreover, online dictionaries have mushroomed to provide access at any time and everywhere for second language learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1."
},
{
"text": "MT systems and bilingual dictionary are designed to provide the services for non-English speakers or to ease learning difficulties for second language learners. Both require a lexicon which can be derived from aligning words in a parallel corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1.2."
},
{
"text": "Furthermore, second language learners can benefit by learning from example sentences with translations. By looking at bilingual examples, we acquire knowledge of the usage and meaning of word in context. With word alignment result of a sentence pair, it is much easier to grab the essential concepts of unfamiliar foreign words in a sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1.2."
},
{
"text": "For instance, consider the English sentence \"These factors will continue to play a positive role after its return\" with its segmented Chinese translation \" \" shown in Figure 1 , where the solid dark lines are word alignment results of them and , e f stand for two sentences in two languages , E F respectively. If we don't know the usage of \"play\" in the sense of \"perform,\" in this example sentence pair with the help of word alignment, we would quickly understand such meaning and learn useful expressions like \"play \u2026 role\" meaning \" \u2026 \" in Chinese. Table 1 shows the word alignment result of above example sentenece pair. In Table 1 we use 0, and ! to denote the corresponding translation does not exist for a particular word, that is, this word in one language is translated into no words in another and we use ,",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 553,
"end": 560,
"text": "Table 1",
"ref_id": null
},
{
"start": 629,
"end": 636,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1.2."
},
{
"text": "i j e f to stand for the words at the position of , i j in sentence , e f respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1.2."
},
{
"text": "If we look more closely to the example sentence in Figure 1 , we would notice that the beginning half \"These factors will continue to play a positive role\" is translated into the back of the Chinese sentence whileas the ending half \"after its return\" is translated into the beginning. This phenomenon is very common while translating one language into another. A simple observation is that if one language is SVO-structured and another SOV-structured, the \"VO\" part of the first language would constantly be reversely translated into \"OV\" of the second because of the reverse ordering of syntactic structures in \"V\" and \"O\" in these languages. We call it inverted word order during translation. More often than inverted cases, we have straight word order such as when \"positive role\" is translated into \"",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "\". It would occur more frequently if two languages have identical word orientation for a syntactic structure, such as adjectives modifying nouns in English and Chinese noun phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "In this paper, we propose a new method of learning to recognize straight and inverted phrases in bilingual parsing by using a parallel corpus and a monolingual treebank. The parallel text will be exploited to provide lexical translation information and project the syntactic information available in the source-language treebank onto the target language. This way we can leverage the monolingual treebank and avoid the difficult problem of inducing a bilingual grammar from scratch. We identify production rules derived from the treebank based on the part of speech information of the source text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "This information is simultaneously projected to the target language by exploiting the cross-language lexical information produced by a word-aligning method. The relation of straight or inverted word orders between the syntax of the two languages at all phrase levels can be captured and modeled during the process. At runtime, these production rules are used to parse bilingual sentences, simultaneously determining the syntactic structures and word order relationships of languages involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "Thus, the proposed model commits to common linguistic labels for words and phrases found in an English treebank, such as NN (noun), VB (verb), JJ (adjective), NP (noun phrase), VP (verb phrase), ADJP (adjective phrase), PP (prepositional phrase). Furthermore, we assume straight and inverted linguistic phenomena, when projected to the target language, should render a reasonable structural explanation of the target language. We extend ITG productions (Wu 1997) to carry out this process of projection. Take word-aligned sentences in Figure 1 for example. It is possible to match the part of speech information of the source language sentence against the right hand sides of the production rules induced from a tree bank and identify the instances of applying specific rules such as NP JJ NN ! ;",
"cite_spans": [
{
"start": 453,
"end": 462,
"text": "(Wu 1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 535,
"end": 543,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "\"positive\" JJ ! and \"role.\" NN ! Moreover, by exploiting the word alignment information, it is not difficult to infer that such syntactic structure is also present in the target language with similar rules such as NP JJ NN ! ; JJ ! \" ,\" and NN ! \" .\" By combining and tallying such information, we are likely to derive ITG productions such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Parsing",
"sec_num": "1.3."
},
{
"text": "; JJ ! \"positive/ \" and NN ! \"role/ .\" Here, the square bracket pair, \"[\" and \"]\" signifies that a straight synchronous nominal share between English and Mandarin Chinese. Similarly, we would also find out the inverted prepositional phrases like PP IN NP ! ; IN ! \"after/ \" and NP ! \"its return/ \" where \"<\" and \">\" indicate cross-language inverted structure. See Figure 3 for more details. Additionally, the occurrence counts of these straight or inverted structures can be tallied and used in estimating the probabilistic parameters of the ITG model.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 372,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "NP JJ NN !",
"sec_num": null
},
{
"text": "Intuitively, with rules like those shown in Figure 2 learned from a parallel corpus and a monolingual treebank, we should be able to extend a CYK-style parser to derive bilingual parse tree as shown in Figure 3 , where the symbol indicates word order of the subtrees in the target language is inverted. According to the theory of ITG, the probability of a bilingual parse tree consists of the lexical translation probability and the probability for the straight or inverted production rules involved. Consequently, very little syntax information is incorporated into the process of bilingual parsing. In contrast to Wu's experiment, we use regular context-free grammar rules in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 202,
"end": 210,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "NP JJ NN !",
"sec_num": null
},
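To make the scoring concrete: under ITG, the probability of a bilingual parse tree such as the one in Figure 3 is the product of the probabilities of the straight or inverted rules applied and of the lexical translation rules at the leaves. The sketch below illustrates that factorization using a few of the rules discussed above; it shows only a subset of the factors, and the Chinese sides of the lexical rules (lost in this copy of the text) are written as a placeholder dot.

```latex
% Illustrative factorization of a bilingual parse-tree probability under ITG.
% Only a few of the rule factors are shown; a full derivation contributes one
% factor per rule application and one per aligned (or null-aligned) word pair.
P(\text{tree}) \;=\; P(\mathrm{S} \to [\,\mathrm{NP}\ \mathrm{VP}\,])
  \cdot P(\mathrm{NP} \to [\,\mathrm{JJ}\ \mathrm{NN}\,])
  \cdot P(\mathrm{PP} \to \langle \mathrm{IN}\ \mathrm{NP} \rangle)
  \cdots P(\mathrm{JJ} \to \text{positive}/\cdot)
  \cdot P(\mathrm{NN} \to \text{role}/\cdot)
```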
{
"text": "More recently, Yamada and Knight (2001) suggested the syntax differences in languages are really a better way to model translation. In their work, the English sentence goes through a parser to generate a full parse tree. Subtrees of each node are reordered, function words are inserted and finally the tree is linearized to produce the target sentence. The parse tree of an English sentence is generated independently from the target sentence. Although the monolingual parse might be correct, it may be difficult to project the structures onto the target language. Instead, our model has grammar rules that specify bilingual syntactic information including constituent labels and word ordering, which enables us to extend a CYK parser to parse bilingual sentences simultaneously. Chiang (2005) introduced lexicalized labelless hierarchical bilingual phrase structure to model translation without any linguistic commitment. Since he does not assign any syntactic category to hierarchical phrase pairs, the rules he obtain are not generalized into linguistics-motivated constituents but anchored at certain words. These lexicalized rewrite rules specify the differences in hierarchical structure of two languages without generalization. Therefore, the size of the grammar tends to be very large (2.2M rules). The rules do not represent some general ideas of languages such as word classes like verb, noun, or adjective, but rather have to do with specific words. In any case, the word classes like verb, noun, and adjective and the phrase categories like verb phrase (VP), noun phrase (NP) and adjective phrase (ADJP) would provide a more general way to reflect the parallel and differences of languages. Chiang also posed the hypothesis that syntactic phrases are better for machine translation (MT) and predicted the future trend of MT is to move towards a more syntactically-motivated grammar.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Yamada and Knight (2001)",
"ref_id": "BIBREF14"
},
{
"start": 780,
"end": 793,
"text": "Chiang (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "With that in mind, we exploit part-of-speech information and linguistic phrase categories to model the syntactic relation between two languages, which is designed to have a higher degree of generality, unlike Chiang's lexicalized labelless production rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "In contrast to previous work in STM, the proposed method not only automatically identifies the hidden structural information of two languages but models variations of ordering counterparts within them. Moreover, a much-smaller set of flexible context-free grammar rules obtained from a very large-scale parallel corpus. Syntactic information indicated by those rules is exploited to parse bilingual sentences. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "The model is aimed at statistically derived ITG rules with probability and making use of those rules for bilingual parsing and word alignments. We focus on the process of bilingual parsing which exploits the syntactic information such as shared syntactic structures and word order relationships in two languages using a parallel corpus and a monolingual treebank. Figure 4 : Flowchart of the proposed training process.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 372,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "3.1."
},
{
"text": "The training process can be illustrated using the flowchart in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 71,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Proposed Training Process",
"sec_num": "3.2."
},
{
"text": "Given a sentence-aligned corpus ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Training Process",
"sec_num": "3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) { } ,",
"eq_num": ", 1 ,"
}
],
"section": "Proposed Training Process",
"sec_num": "3.2."
},
{
"text": "In the first stage of the training process, for every sentence-aligned pair (e, f) in corpus C, we tag sentence e using a POS tagger to obtain e = (e_1, e_2, ..., e_m) with tag sequence (t_1, t_2, ..., t_m), where e_i stands for the i-th word in e with m words and t_i stands for the POS tag of the word e_i. Further, we segment sentence f to obtain f = (f_1, f_2, ..., f_n), where f_j stands for the j-th word in f with n words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Segmenting",
"sec_num": "3.2.1"
},
{
"text": "Take sentence pair whose record number is 193 in Figure 1 for instance. Table 3 shows the lemmatized and tagged result of the English sentence, while Table 4 shows the segmentation result of the Chinese sentence. The POS information of sentence e will then be projected onto the target language based on word alignments described in next subsection.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 72,
"end": 79,
"text": "Table 3",
"ref_id": null
},
{
"start": 150,
"end": 157,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tagging and Segmenting",
"sec_num": "3.2.1"
},
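A minimal sketch of this first training stage follows. The paper does not name the POS tagger or the Chinese segmenter it uses, so nltk and jieba below are stand-in tools chosen purely for illustration.

```python
# Sketch of training stage 1 (Section 3.2.1): POS-tag the English sentence e and
# segment the Chinese sentence f. nltk and jieba are assumed stand-ins; the paper
# does not specify which tagger/segmenter was actually used.
import nltk    # assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed
import jieba   # assumed Chinese word segmenter

def tag_and_segment(e_sentence: str, f_sentence: str):
    """Return the tagged English words [(e_i, t_i)] and the segmented Chinese words [f_j]."""
    e_words = nltk.word_tokenize(e_sentence)
    tagged = nltk.pos_tag(e_words)         # [(e_1, t_1), ..., (e_m, t_m)]
    f_words = list(jieba.cut(f_sentence))  # [f_1, ..., f_n]
    return tagged, f_words

# Example with the English side of sentence pair 193 (Chinese side omitted here):
tagged, f_words = tag_and_segment(
    "These factors will continue to play a positive role after its return", "")
```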
{
"text": "In the second training stage, we obtain a word-aligning set A for corpus C by applying any existing word-level alignment method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Word Alignments",
"sec_num": "3.2.2"
},
{
"text": "For notation convenience, we use 8-tuple ( ) as the derivation leading to the bilingual structure and rel as the cross-language word order relations (straight or inverted) of constituents of rhs . The right hand side, rhs , can be either a sequence of nonterminals or a single terminating bilingual word pair and the word order relation, rel , is either S (straight) or I (inverted which can be obtained from word alignment result. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Word Alignments",
"sec_num": "3.2.2"
},
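The 8-tuples kept in A and H can be represented directly as a small record type. The field names below are our own; only the order and meaning of the components follow the text and the worked example in Section 3.2.3.

```python
# Representation of the 8-tuples stored in the sets A and H (Section 3.2.2).
# Field names are illustrative; the paper fixes only the components themselves.
from collections import namedtuple

Span = namedtuple("Span", [
    "r",            # record number of the sentence pair
    "i1", "i2",     # word span covered on the source (English) side
    "j1", "j2",     # word span covered on the target (Chinese) side
    "label",        # syntactic label L (e.g. NP, VP) or a POS tag for leaf entries
    "rhs",          # derivation: a bilingual word pair or a sequence of nonterminals
    "rel",          # cross-language word order relation: "S" (straight) or "I" (inverted)
])

# e.g. the entry (1,1,2,1,2,NP,JJ NN,S) from the worked example in Section 3.2.3:
np_entry = Span(1, 1, 2, 1, 2, "NP", "JJ NN", "S")
```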
{
"text": "In the final stage of the training process, we map the part of speech information and tree structures available in treebank of language E onto language F based on word alignment result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "We exploit following algorithm to identify syntactic structures of E and model the syntactic relation between and E F . The resulting ITG grammar will then be used in a bottom-up CYK parser for parsing bilingual sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "The algorithm begins with a set H initialized as word-aligning result A . Then recursively select two elements from H . If these two tuples have contiguous word sequence on source-language side and exhibit straight or inverted relation between source and target language during the mapping process, a new tuple representing these two is added into H . In the end, we exploit the occurrence in H to estimate following probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[ ] ( ) 1 2 P L R R ! , (",
"eq_num": ") 1 2"
}
],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "P L R R ! and ( ) P L t ! .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "In this algorithm, we follow the notation described in section 3.2.1 and use W to stand for the number of entries in set W ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probability Estimation",
"sec_num": "3.2.3"
},
{
"text": "p Q for the frequency of p in set Q and ! for the tolerance of straight/inverted phenomenon within source and target languages. 2 1 2 1 2 1 2 , , , Consider the word alignment results in Table 6 as an example, the algorithm described above will identity syntactic structures and model syntax relations of languages. The overall projecting process is as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "( ) count ;",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= H A For (",
"eq_num": ") ( ) 1"
}
],
"section": "Algorithm for Probabilistic Estimation",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i = ! ) For every L L L ! \" # G If ( 2 1 2 1 j j j ! + \" \" + ) ( ) { } 1 2 1 2 , , , , , , , S r i i j j L L L ! = \" H H If ( 2 1 2 1 j j j ! + \" \" + ) ( ) { } 1 2 1 2 , , , , , , , I r i i j j L L L ! = \" H H If ( 2 1 1 i i = ! ) For every L L L ! \" # G If ( 2 1 2 1 j j j ! + \" \" + ) ( ) { } 1 2 1 2 , , , , , , , S r i i j j L L L ! = \" H H If ( 2 1 2 1 j j j ! + \" \" + ) ( ) { } 1 2 1 2 , , , , , , , I r i i j j L L L ! = \" H H For (",
"eq_num": ") 1"
}
],
"section": "Algorithm for Probabilistic Estimation",
"sec_num": null
},
{
"text": "Initially, for sentence pair 1, we have the following in A .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm for Probabilistic Estimation",
"sec_num": null
},
{
"text": "( 1,1,1,1,1 (1,1,2,1,2,NP,JJ NN,S), (1,3,4,3,4,VP,VBZ NNS,S) . After the second round, we have (1,1,4,1,4 ,S,NP VP,S) where syntactic label S means simple declarative clause in linguistic sense. Table 7 illustrates some derived grammar rules and entries inserted into H from sentence pair 9, 62 and 249.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 11,
"text": "1,1,1,1,1",
"ref_id": "FIGREF0"
},
{
"start": 12,
"end": 60,
"text": "(1,1,2,1,2,NP,JJ NN,S), (1,3,4,3,4,VP,VBZ NNS,S)",
"ref_id": "FIGREF0"
},
{
"start": 95,
"end": 105,
"text": "(1,1,4,1,4",
"ref_id": "FIGREF0"
},
{
"start": 195,
"end": 202,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm for Probabilistic Estimation",
"sec_num": null
},
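The combination process just illustrated can be sketched as follows, reusing the Span tuple defined in the earlier sketch. The control flow and the relative-frequency normalisation are our reading of the (partly garbled) pseudo-code, not a verbatim transcription of the paper's algorithm.

```python
# Sketch of the projection step (Section 3.2.3): combine source-adjacent tuples in H
# under rules L -> L1 L2 from the monolingual grammar G, recording straight ("S") or
# inverted ("I") order on the target side, then estimate rule probabilities by
# relative frequency. The normalisation by label is an assumption.
from collections import Counter

def project(H, rules, delta=0):
    """H: set of Span tuples; rules: dict mapping (L1, L2) -> list of parent labels L."""
    added = True
    while added:                      # repeat until no new tuple can be added
        added = False
        for a in list(H):
            for b in list(H):
                if a.r != b.r or b.i1 != a.i2 + 1:   # must be adjacent on the source side
                    continue
                for L in rules.get((a.label, b.label), []):
                    if a.j2 + 1 <= b.j1 <= a.j2 + 1 + delta:     # straight on the target side
                        new = Span(a.r, a.i1, b.i2, a.j1, b.j2, L, f"{a.label} {b.label}", "S")
                    elif b.j2 + 1 <= a.j1 <= b.j2 + 1 + delta:   # inverted on the target side
                        new = Span(a.r, a.i1, b.i2, b.j1, a.j2, L, f"{a.label} {b.label}", "I")
                    else:
                        continue
                    if new not in H:
                        H.add(new)
                        added = True
    return H

def estimate(H):
    """Relative-frequency estimates for P(L -> [R1 R2]), P(L -> <R1 R2>) and P(L -> e/f)."""
    counts = Counter((h.label, h.rhs, h.rel) for h in H)
    totals = Counter(h.label for h in H)
    return {key: n / totals[key[0]] for key, n in counts.items()}
```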
{
"text": "We then describe how we implement a bilingual parser which makes use of syntactic structures and preferences of word order within languages specified by automatically trained ITG rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bottom-up Parsing",
"sec_num": "3.3."
},
{
"text": "We follow Wu's (1997) ",
"cite_spans": [
{
"start": 10,
"end": 21,
"text": "Wu's (1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bottom-up Parsing",
"sec_num": "3.3."
},
{
"text": "definition of δ_i(s,t,u,v) to denote the probability of the most likely parse tree with syntactic label i containing the substring pair (e_{s+1} e_{s+2} ... e_t, f_{u+1} f_{u+2} ... f_v) in the bilingual sentence (e, f).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bottom-up Parsing",
"sec_num": "3.3."
},
{
"text": "P , L t ! [ ] ( ) 1 2 P L R R ! and ( ) 1 2 P L R R !",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "associated with ITG, we utilize dynamic programming technique to find the most probable derivation to parse the bilingual sentence ( ) , e f . Basically, we try to calculate the value of ( ) 0 0 m n S ! and backtrack by using following three steps, where S is the start symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "Step 1: Initial step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "( ) ( ) 1, , 1, P i i j j i i i j t t e f ! \" \" = # for 1 ,1 i m j n ! ! ! ! ( ) ( ) 1, , 1, P i i j j i j L L e f ! \" \" = # for 1 ,1 , i i m j n L t ! ! ! ! \" # G ( ) ( ) 1, , , P i i j j i i i t t e ! \" # = $ for 1 , 0 i m j n ! ! ! ! ( ) ( ) 1, , , P i i j j i L L e ! \" # = $ for 1 , 0 , i i m j n L t ! ! ! ! \" # G ( ) ( ) , , 1, P i i j j j NN NN f ! \" # = $ for 0 ,1 i m j n ! ! ! !",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "Step 2: Recurrent step (bottom-up approach)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "We proceed similar to Wu's algorithm. However, we observe that the length of the translation of a substring of source sentence should be bounded. We use the upper and lower bounds of lengths to prune search space and speed up computation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "If 1 t s ratio ratio v u ! \" \" ! [ ] ( ) ( )( ) ( )( ) [ ] ( ) ( ) ( ) { } 0 max P stuv sSuU StUv j k s S t u U v S s t S U u v U i i j k j k ! ! ! \" \" # # # # $ $ + $ $ % = & ' ' PJ PK where (",
"eq_num": ") 1"
}
],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "_ stuv i low probability ! = If 1 t s ratio ratio v u ! \" \" ! ( ) ( )( ) ( )( ) ( ) ( ) ( ) { } 0 max P stuv sSUv StuU j k s S t u U v S s t S U u v U i i j k j k ! ! ! \" \" # # # # $ $ + $ $ % = & ' ' PJ PK",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "where ( ) Step 3: Reconstructing step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "We exploit a depth-first traversal to construct the most probable bilingual parse tree for the sentence pair (e, f).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.3.1"
},
{
"text": "Take sentence pair in Figure 1 for example.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Example Parse",
"sec_num": "3.3.2"
},
{
"text": "At initial step, we would build the leaf nodes of the bilingual parse tree using probability like P(DT ! these/ ), P(NNS ! factors/ ), P(NP ! factors/ ), L , P(IN ! after/ ), P(PP ! after/ ), P(PRP$ ! its/ ), P(NP ! its/ ) and etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Parse",
"sec_num": "3.3.2"
},
{
"text": "At recurrent step, we find the most likely derivation of nodes using statistics derived so far. Take nodes in Figure 3 for instance. We will derive (these factors, ) as a noun phrase using After reconstructing step, the most probable bilingual parse tree of the sentence pair is constructed. Figure 3 illustrates the tree structures derived for the example bilingual sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 292,
"end": 300,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Example Parse",
"sec_num": "3.3.2"
},
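The three steps above can be condensed into the following sketch. The probability tables, the single length-ratio bound, and the simplified handling of null-aligned words are assumptions made for illustration; the paper's implementation also performs full backtracking to rebuild the tree, which is only hinted at here.

```python
# Compact sketch of the bottom-up CYK-style bilingual parser (Section 3.3.1).
# delta[(L, s, t, u, v)] holds the probability of the best subtree labelled L that
# covers e[s+1..t] and f[u+1..v]; the label "S" (start symbol) is distinct from the
# straight/inverted relation markers used elsewhere.
LOW = 1e-300  # floor probability for pruned or unreachable cells

def parse(e, f, lex_prob, rule_prob, ratio=3.0):
    m, n = len(e), len(f)
    delta, back = {}, {}

    def get(L, s, t, u, v):
        return delta.get((L, s, t, u, v), LOW)

    # Step 1: leaf cells from lexical translation probabilities
    # (singleton insertions/deletions, e_i/eps and eps/f_j, are omitted in this sketch).
    for s in range(m):
        for u in range(n):
            for L, p in lex_prob.get((e[s], f[u]), {}).items():
                delta[(L, s, s + 1, u, u + 1)] = p

    # Step 2: recurrence over growing spans, straight ("S") and inverted ("I") cases.
    for es in range(2, m + 1):                      # English span length
        for s in range(m - es + 1):
            t = s + es
            for u in range(n + 1):
                for v in range(u + 1, n + 1):
                    if not (1 / ratio <= es / (v - u) <= ratio):
                        continue                    # length-ratio pruning
                    for (L, R1, R2, rel), p in rule_prob.items():
                        for S in range(s + 1, t):
                            for U in range(u, v + 1):
                                if rel == "S":      # straight order on the target side
                                    score = p * get(R1, s, S, u, U) * get(R2, S, t, U, v)
                                else:               # inverted order on the target side
                                    score = p * get(R1, s, S, U, v) * get(R2, S, t, u, U)
                                if score > get(L, s, t, u, v):
                                    delta[(L, s, t, u, v)] = score
                                    back[(L, s, t, u, v)] = (R1, R2, rel, S, U)

    # Step 3: the best tree is read off by following back-pointers from ("S", 0, m, 0, n).
    return delta.get(("S", 0, m, 0, n), LOW), back
```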
{
"text": "Our model is aimed at capturing shared syntactic structures and preferences in word order between two languages. The context-free grammar rules obtained in training process identity syntactic structures and model relations of syntax of languages involved. These rules can be exploited to produce better word-level alignments and most probable bilingual parse trees since syntactic information is taken into consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "In this section, we first present the details of training our model in Section 4.1. Then, we describe the evaluation metrics for the performance of the trained model in Section 4.2. The evaluation results are reported in Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "We used the news portion of Hong Kong Parallel Text (Hong Kong news) distributed by Linguistic Data Consortium (LDC) as our sentence-aligned corpus C . The corpus consists of 739,919 English and Chinese sentence pairs. English sentence is considered to be the source while Chinese sentence is the target. The average sentence length is 24.4 words for English and 21.5 words for Chinese. Table 8 and Table 9 show the statistics of number of sentences in this corpus according to sentence length. For monolingual treebank corpus G , we made use of PTB section 23 production rules distributed by Andrew B. Clegg (http://textmining.cryst.bbk.ac.uk/acl05/). There are 2,184 distinct grammar rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 394,
"text": "Table 8",
"ref_id": null
},
{
"start": 399,
"end": 406,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Setting",
"sec_num": "4.1."
},
{
"text": "The statistics of G is shown in Table 10 while Table 11 illustrates some examples of grammar rules in G . , , , , , , r i i j j L det in H satisfy the criterion 2 1 3 i i ! \" . For the straight case to hold, the two Chinese fragments need to be contiguous or have a function word in-between while they need to be contiguous for the inverted case to hold.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Table 10",
"ref_id": null
},
{
"start": 47,
"end": 55,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Setting",
"sec_num": "4.1."
},
{
"text": "Since the pieces have come to together, we follow the steps specified in Table 2 to learn ITG rules. Table 12 shows some of the grammar rules trained and associated estimations. Table 12 . Examples of grammar rules trained and their probabilities. In Table 12 we notice that the adjective-noun structure has much more straight cases than inverted. In other words, adjectives modify nouns in much the same manner in English and Chinese. In general, the statistics suggests that Chinese, much like English, is SVO with only relatively small number of exceptional cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 2",
"ref_id": null
},
{
"start": 101,
"end": 109,
"text": "Table 12",
"ref_id": null
},
{
"start": 178,
"end": 186,
"text": "Table 12",
"ref_id": null
},
{
"start": 251,
"end": 259,
"text": "Table 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Setting",
"sec_num": "4.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 L R R ! [ ] ( ) 1 2 P L R R ! ( ) 1 2 P L R R ! [ ] ( ) 1 2 count L R R ! (",
"eq_num": ") 1"
}
],
"section": "Training Setting",
"sec_num": "4.1."
},
{
"text": "Another point worth mentioning is that the overwhelming predominance of straight over inverted is not observed in the rule of PP IN NP ! . For this grammar rule, the straight cases like \"in August\", \" \" and the inverted cases such as \"before midnight\", \" \" are about the same order of magnitude. Consequently, it seems that there is no decisive preference of translation orientation for prepositional phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Setting",
"sec_num": "4.1."
},
{
"text": "We evaluated the trained ITG rules based on the performance of word alignment. We took the leaf nodes as word-level alignments and evaluate the proposed model in terms of agreement with human-annotated word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2."
},
{
"text": "We used the metrics of alignment error rate (AER) proposed by Och and Ney (2000) , in which the quality of a word alignment result where S (sure) is the set which contains alignments that are not ambiguous and P (possible) is the set consisting of the alignments that might or might not exist ( ) ! S P . For that the human-annotated alignments may contain many-to-one and one-to-many relations. Furthermore, whether a word-level alignment is in P or S is determined by human experts who perform the annotation work.",
"cite_spans": [
{
"start": 62,
"end": 80,
"text": "Och and Ney (2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2."
},
{
"text": "( ) { } , i j = A ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2."
},
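The AER formula itself was lost in this copy of the text; the sketch below restates the standard definitions of Och and Ney (2000) for precision, recall, F-measure and AER over a predicted alignment A, the sure set S and the possible set P.

```python
# Standard word-alignment metrics (Och and Ney 2000) used in Section 4.2.
# A: predicted alignment links, S: sure links, P: possible links, with S a subset of P.
def alignment_metrics(A, S, P):
    A, S, P = set(A), set(S), set(P)
    precision = len(A & P) / len(A)
    recall = len(A & S) / len(S)
    aer = 1 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, aer, f_measure

# Example with links given as (i, j) word-position pairs:
# alignment_metrics(A={(1, 2), (2, 1)}, S={(1, 2)}, P={(1, 2), (2, 1)})
```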
{
"text": "For testing, we randomly selected 62 sentence pairs from the corpus of Hong Kong News. For the sake of time, we only selected sentence pairs in which the length of English and Chinese sentences does not exceed 15. From Table 8 and Table 9 , we know the upper bound of 15 would cover approximately 40% of sentence pairs in HKN. We manual annotated the word alignment information in these bilingual sentences. The ratio of P and S of the test data is 1.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 8",
"ref_id": null
},
{
"start": 231,
"end": 238,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Result",
"sec_num": "4.3."
},
{
"text": "We chose a freely-distributed word-aligning system, Giza++, as the baseline for evaluation. The adopted setting to run Giza++ is IBM model 4, the direction is from English to Chinese same as our model treating English as source language and the alignment units of Chinese are words not characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3.1"
},
{
"text": "As preliminary evaluation, we examined whether syntactic consideration would lead to better word-level alignments. Figure 5 shows some alignments produced by the system and Giza++ and Table 13 displays evaluation results on alignments of the test data produced by both systems. Table 13 shows that although the precision is 87% for Giza++, the low recall leads to high alignment error rate and poor F-measure. However, our system with lower precision increased recall by 48.6%, which achieved a 29.2% alignment error reduction. From this experiment, we showed the proposed model with ITG rules allows for a wide range of ordering variations with a realistic position distortion penalty, which attributes to significantly better word alignment results.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 5",
"ref_id": "FIGREF11"
},
{
"start": 184,
"end": 194,
"text": "Table 13",
"ref_id": "TABREF13"
},
{
"start": 280,
"end": 288,
"text": "Table 13",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Word-level Evaluation",
"sec_num": "4.3.2"
},
{
"text": "Since the proposed model takes lexical and syntactic aspects of languages into consideration, the proposed method can be used to improve an existing word-aligning system that utilizes few linguistic information of languages. For that we evaluated the proposed method on top of the alignment results of Giza++, a freely-available state-of-the-art word alignment system. In other words, the and C G corpora are the same as the previous experiment but we adopted Giza++ as the word-aligning method in the training process. Figure 6 shows some word alignment results produced by Giza++ with ITG and Giza++. Table Figure 6 . Alignments produced by Giza++ with ITG (left) and Giza++ (right). The use of ITG results in significant improvement for recall and F-measure of Giza++ by 56.8%",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 6",
"ref_id": null
},
{
"start": 603,
"end": 618,
"text": "Table Figure 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Word-level Evaluation",
"sec_num": "4.3.2"
},
{
"text": "and 34.6% leading to substantial alignment error reduction (37.5%) while precision suffers only slightly (0.1%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Evaluation",
"sec_num": "4.3.2"
},
{
"text": "We further evaluated base phrases of the generated bilingual parse trees. We take into consideration the correctness of syntactic label and phrase alignment of a base phrase. Table 15 is how we rated a base phrase produced by our method concerning syntactic label and phrase alignment. Table 15 means that if human judges assess the constituent label and alignment of the generated base phrase are both correct, it will be rated as correct (1 point). The second row means that if the syntactic label is correct but alignment is not quite right, human judges will rate the base phrase as partially correct (0.5 point). However, if the label is wrongly tagged but the phrase alignment is right, it will also be rated as partially correct (0.5 point). In the worse case, the label and alignment are not quite correct, 0 point is given to that base phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Table 15",
"ref_id": "TABREF4"
},
{
"start": 286,
"end": 294,
"text": "Table 15",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Phrase-level Evaluation",
"sec_num": "4.3.3"
},
{
"text": "The average score of the base phrases generated by Giza++ with ITG was 0.82, showing that our method produced satisfactory result in constituent label of base phrases and alignments in phrase level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase-level Evaluation",
"sec_num": "4.3.3"
},
{
"text": "Improvements of the proposed method and future researches have presented themselves along the way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "Currently, we only focus on CFG with two right-hand-side constituents. Nonetheless, in linguistic sense, it is undesirable to divide the structure of ( ) NP CC NP into ( ) NP CC and ( ) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating and integrating Treebank parsers on a biomedical corpus",
"authors": [
{
"first": "B",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Clegg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shepherd",
"suffix": ""
}
],
"year": 2005,
"venue": "Association for Computational Linguistics Workshop on software",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew B. Clegg and Adrian Shepherd. 2005. \"Evaluating and integrating Treebank parsers on a biomedical corpus.\" In Association for Computational Linguistics Workshop on software 2005.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A probability model to improve word alignment",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Dekang Lin. 2003. A probability model to improve word alignment. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, volume 1, pages 88-95.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43 rd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. \"A hierarchical phrase-based model for statistical machine translation.\" In Proceedings of the 43 rd Annual Meeting of the ACL, pages 263-270.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Machine translation using probabilistic synchronous dependency insertion grammars",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 43 rd Annual Meetings of the ACL",
"volume": "",
"issue": "",
"pages": "541--548",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Ding and Martha Palmer. 2005. \"Machine translation using probabilistic synchronous dependency insertion grammars.\" In Proceedings of 43 rd Annual Meetings of the ACL, pages 541-548.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Alignment model adaptation for domain-specific word alignment",
"authors": [
{
"first": "Wu",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhanyi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43 rd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "467--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu Hua, Haifeng Wang, and Zhanyi Liu. 2005. \"Alignment model adaptation for domain-specific word alignment.\" In Proceedings of the 43 rd Annual Meeting of the ACL, pages 467-474.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multitext grammars and synchronous parsers",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 2003. \"Multitext grammars and synchronous parsers.\" In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38 th Annual Conference of the Association for Computational Linguistics (ACL-00)",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000. \"Improved statistical alignment models.\" In Proceedings of the 38 th Annual Conference of the Association for Computational Linguistics (ACL-00), pages 440-447.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improve alignment models for statistical machine translation",
"authors": [
{
"first": "F Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F Franz Josef Och, C. Tillmann, and H. Ney. 1999. \"Improve alignment models for statistical machine translation.\" In 1999 EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Extentions to HMM-based statistical word alignment models",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Ilhan",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Processing Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, H. Tolga Ilhan and Christopher D. Manning. 2002. \"Extentions to HMM-based statistical word alignment models.\" In Proceedings of the Conference on Empirical Methods in Natural Processing Language.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. \"HMM-based word alignment in statistical translation.\" In Proceedings of the 16th conference on Computational linguistics, volume 2, pages 836-841",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Structure alignment using bilingual chunking",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jin-Xia",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19 th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wang, Ming Zhou, Jin-Xia Huang, and Chang-Ning Huang. 2002. \"Structure alignment using bilingual chunking.\" In Proceedings of the 19 th international conference on Computational linguistics, volume 1, pages 1-7.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving word alignment models using structured monolingual corpora",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "198--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wang and Ming Zhou. \"Improving word alignment models using structured monolingual corpora.\" In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 198-205.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Grammar inference and statistical machine translation",
"authors": [
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye-Yi Wang. 1998. \"Grammar inference and statistical machine translation.\" Ph.D. thesis.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. \"Stochastic inversion transduction grammars and bilingual parsing of parallel corpora.\" Computational Linguistics, 23(3):377-403.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39 th Annual Conference of the Association for Computational Linguistics (ACL-01)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. \"A syntax-based statistical translation model.\" In Proceedings of the 39 th Annual Conference of the Association for Computational Linguistics (ACL-01).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Syntax-based alignment: supervised or unsupervised",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2004. \"Syntax-based alignment: supervised or unsupervised?\" In Proceedings of the 20 th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stochastic lexicalized inversion transduction grammar for alignment",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43 rd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "475--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2005. \"Stochastic lexicalized inversion transduction grammar for alignment.\" In Proceedings of the 43 rd Annual Meeting of the ACL, pages 475-482.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "An example sentence pair.Table 1. The word alignment of the example sentence pair.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Example grammar rules for the sentence pair.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "A bilingual parse tree for example sentence pair.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "statistical translation model (STM) is a mathematical model in which process of human translation from one language into another is modeled statistically. Model parameters are estimated using a corpus of translation pairs with or without human supervision. STMs have been used in various researches and applications including statistical machine translation, word alignment of a sentence-aligned corpus and the automatic construction of a dictionary, just to name a few. For this point of view, a better STM cross language for processing is essential and fundamental for those applications. Brown et al. (1988) first described a STM, or the alignment of sentence and word pairs in different languages. This and subsequent IBM models are based on noisy channel which converts or translates a sequence of words in one language into another. IBM model 1 can be trained using EM algorithm: starting with a uniform distribution among all translation candidate pairs and ending with convergent probabilities. While IBM model 1 does not utilize position information, the subsequent IBM models take positions into account when modeling for the translation process. (take an English-Chinese sentence pair for example, the first English word more likely translates into the first word in the Chinese sentence) Another model called Hidden Markov model (HMM) is designed to capture localization effect in aligning the words in parallel texts. Vogel et al. (1996), motivated by the idea that words are not distributed arbitrarily over the sentence positions but tend to form clusters, presented a first-order HMM which makes the alignment probabilities explicitly dependent on the alignment position of the previous word. Nonetheless, Toutanova et al. (2002) pointed out that word order variations (large jumps) between languages seem to be a problem. Neither IBM models nor HMMs explicitly utilize any linguistic information. However, other researchers have experimented with incorporation of part of speech (POS) information or context-specific features into STM. Exploiting POS tags of the two languages, Toutanova et al. (2002) introduced tag translation probabilities and tag sequences for jump probabilities to improve HMM-based word alignment models in modeling local word order differences. Cherry and Lin (2003) made use of dependency trees of a language to model features and constraints that are based on linguistic intuitions. In contrast, our model which uses POS information and tree structures from a treebank of a language to derive relation of syntax of two languages based on initial word alignments takes into consideration positions and linguistic characteristics such as word order and syntactic structures. Wang (1998) enhanced the IBM models by introducing phrases, and Och et al. (1999) made use of templates to capture phrasal sequences in a sentence. While flat structures of languages beyond words are being used in above researches, often researchers attempted using nested structures. Those studies can be divided into two approaches according to whether they are linguistically syntax-based or not. Either ways, both approaches try to model structural differences between two languages. Wu (1997) described an Inversion Transduction Grammar to model translation. However, only a lesser version, bracketing transduction grammar (BTG) with three structural labels A,B,C and a start symbol S, was experimented to perform bilingual parsing. 
Nevertheless, BTG accommodates a wide range of ordering variation between languages and imposes a realistic position distortion penalty. In other words, a system with structural-like, or hierarchical-like rules that specify the constituents and the order of the counterparts in both language is good at resolving the word alignment relations within a sentence pair. However, in their experiments, constituent categories are almost not differentiated, and thus their influences on ordering preferences of the counterparts are not taken into consideration.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "promising method for learning to parse a bilingual sentence using Inversion Transduction Grammars is based on training on a monolingual treebank and a parallel corpus. We project part of speech information and syntactic structures from a treebank of source language onto target language based on initial word alignment results of a parallel corpus to obtain and estimate the probabilities for ITG rules. During the projection process, word order relationships (straight and inverted) of shared syntactic constructs between two languages are identified and modeled. At runtime, the derived ITG rules drive a CYK-style parser to construct bilingual parse trees and hopefully lead to better word alignment results at the leaf nodes.",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "and are an aligned sentence pair r e f r n e f = ! ! Cwhere r is the record number of the sentence pair and n is the total number of sentence pairs in C , a source-language grammar G , we map part of speech information and syntactic structures of source language onto target language words using word alignment result. During the mapping process, we exploit occurrence of syntactic structures and the differences of word order of the right-hand-side constituents to estimate probabilities. The proposed training process is elaborated as follows.",
"num": null,
"type_str": "figure"
},
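One way to realize the mapping process described above is to walk the source-side parse tree and record, for every constituent whose children project onto disjoint target spans, an 8-tuple (r, i1, i2, j1, j2, L, rhs, rel) of the kind used later in the paper. The sketch below is only an assumption about how such a traversal could look; the tree representation and the helpers project_span and orientation (from the previous sketch) are hypothetical.

```python
def collect_tuples(node, r, alignment, tuples):
    """Recursively collect 8-tuples (r, i1, i2, j1, j2, L, rhs, rel).

    `node` is (label, src_span, children); binary-branching nodes yield
    straight/inverted rule instances, leaves yield terminal word pairs.
    """
    label, (i1, i2), children = node
    tgt = project_span((i1, i2), alignment)
    if tgt is None:
        return
    j1, j2 = tgt
    if not children:                                   # terminal constituent
        tuples.append((r, i1, i2, j1, j2, label, "t", "S"))
        return
    if len(children) == 2:                             # binary rule L -> R1 R2
        rel = orientation(children[0][1], children[1][1], alignment)
        if rel is not None:
            rhs = " ".join(child[0] for child in children)
            tuples.append((r, i1, i2, j1, j2, label, rhs, rel))
    for child in children:
        collect_tuples(child, r, alignment, tuples)
```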
"FIGREF6": {
"uris": null,
"text": "Segments for Chinese sentence of sentence pair 193.",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "r has L rhs !",
"num": null,
"type_str": "figure"
},
"FIGREF9": {
"uris": null,
"text": "consisting of possible syntactic labels for substring pair , consisting of possible syntactic labels for substring pair ,",
"num": null,
"type_str": "figure"
},
"FIGREF10": {
"uris": null,
"text": "where , i j are positions of the sentence pair , e f respectively and , 0 i j ! , is evaluated using",
"num": null,
"type_str": "figure"
},
"FIGREF11": {
"uris": null,
"text": "Alignments produced by our system (left) and Giza++ (right).",
"num": null,
"type_str": "figure"
},
"FIGREF12": {
"uris": null,
"text": "in that it is an indivisible syntactic-meaningful construct. Therefore, one of our future goals is to incorporate grammar rules with more constituents on the right hand side, such as NP NP CC NP ! , and their related probabilistic estimations into our model. Moreover, to make the structures of the bilingual parse trees more complete and rational, we would include a meaningful label for target-language words translated into no words in the source and grammar rules with the label in the future. It is also interesting to see how produced bilingual parse trees would influence the performance of the actual decoding process of machine translation and facilitate bilingual phrase extraction. In conclusion, we have presented a robust method for learning ITG rules which specify the syntactic structures and relations of syntax of two languages involved. The proposed method exploits both lexical and syntax information to derive a structural model of the translation process. At runtime, a bottom-up CYK-styled implementation parses bilingual sentences simultaneously by exploiting trained ITG rules. Experiments show that our model consisting of grammar rules with linguistics-motivated labels and preferences of ordering counterparts in languages produces much more satisfying word alignment results compared with a state-of-the-art word-aligning system.",
"num": null,
"type_str": "figure"
},
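As a companion to the runtime description above, here is a highly simplified sketch of bottom-up CYK-style bilingual parsing with ITG rules. It is not the paper's implementation: the rule and probability data structures are assumptions, the chart is indexed by a source span, a target span, and a label, and only binary straight/inverted combinations plus one-to-one terminal word pairs are handled.

```python
import itertools
from collections import defaultdict

def itg_parse(e, f, lex, rules):
    """Fill a best-probability chart for an ITG parse of the sentence pair (e, f).

    lex[(L, ew, fw)]        -> P(L -> ew/fw)            (terminal rules)
    rules[(L, R1, R2, rel)] -> P(L -> [R1 R2]) if rel == 'S'
                               P(L -> <R1 R2>) if rel == 'I'
    Spans are half-open: (i1, i2) over e and (j1, j2) over f.
    """
    chart = defaultdict(float)   # (i1, i2, j1, j2, L) -> best inside probability
    n, m = len(e), len(f)

    # Terminal cells: one source word paired with one target word.
    for i, j in itertools.product(range(n), range(m)):
        for (L, ew, fw), p in lex.items():
            if ew == e[i] and fw == f[j]:
                key = (i, i + 1, j, j + 1, L)
                chart[key] = max(chart[key], p)

    # Larger cells, smallest spans first (source length, then target length).
    for le, lf in itertools.product(range(1, n + 1), range(1, m + 1)):
        for i1, j1 in itertools.product(range(n - le + 1), range(m - lf + 1)):
            i2, j2 = i1 + le, j1 + lf
            for k, h in itertools.product(range(i1 + 1, i2), range(j1 + 1, j2)):
                for (L, R1, R2, rel), p in rules.items():
                    if rel == "S":   # straight: same order in both languages
                        left = chart[(i1, k, j1, h, R1)]
                        right = chart[(k, i2, h, j2, R2)]
                    else:            # inverted: R1's target span comes second
                        left = chart[(i1, k, h, j2, R1)]
                        right = chart[(k, i2, j1, h, R2)]
                    if left and right:
                        key = (i1, i2, j1, j2, L)
                        chart[key] = max(chart[key], p * left * right)
    return chart
```

The most probable bilingual parse would then be read off the cell covering both full sentences for the start symbol, e.g. chart[(0, n, 0, m, "S")], with back-pointers added to recover the tree itself.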
"TABREF1": {
"text": "-language treebank, we extend G into ITG rewrite rules for bilingual parsing.",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"4\">Problem Statement: Given a sentence-aligned corpus</td><td>C</td><td>=</td><td>( {</td><td>, , r e f</td><td>)</td><td>1</td><td>} r n ! !</td><td>where r is the</td></tr><tr><td colspan=\"10\">record number of the aligned sentence pair ( ) , e f and n is the total number of sentence pairs in</td></tr><tr><td>parallel corpus C , and a grammar</td><td>G</td><td>=</td><td colspan=\"4\">{ lhs rhs lhs rhs ! !</td><td colspan=\"3\">} is a grammar rule on side E</td><td>derived</td></tr><tr><td>from a source</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table"
},
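A minimal sketch of the extension mentioned in the problem statement, under the assumption that each binary monolingual rule lhs → R1 R2 simply gives rise to a straight candidate [R1 R2] and an inverted candidate ⟨R1 R2⟩, whose probabilities are estimated later from the projected corpus. The data structures are illustrative, not the paper's.

```python
def extend_to_itg(cfg_rules):
    """Turn binary monolingual CFG rules into candidate ITG rewrite rules.

    cfg_rules: iterable of (lhs, (R1, R2)) pairs from the source grammar G.
    Returns a list of (lhs, R1, R2, rel) with rel in {'S', 'I'}, whose
    probabilities are to be estimated from the word-aligned corpus.
    """
    itg_rules = []
    for lhs, (r1, r2) in cfg_rules:
        itg_rules.append((lhs, r1, r2, "S"))   # straight: [R1 R2]
        itg_rules.append((lhs, r1, r2, "I"))   # inverted: <R1 R2>
    return itg_rules

# Example: NP -> DT NN yields NP -> [DT NN] and NP -> <DT NN>.
print(extend_to_itg([("NP", ("DT", "NN"))]))
```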
"TABREF2": {
"text": "Outline of the training process. Lemmas and tags for English sentence of sentence pair 193.",
"num": null,
"html": null,
"content": "<table><tr><td>(1)</td><td colspan=\"2\">Tag source-language sentences and segment target-language sentences</td></tr><tr><td/><td>(Section 3.2.1)</td><td/></tr><tr><td>(2)</td><td colspan=\"2\">Apply a word-aligning strategy to obtain word alignment result</td></tr><tr><td/><td>(Section 3.2.2)</td><td/></tr><tr><td>(3)</td><td colspan=\"3\">Apply the algorithm of projecting linguistic information of source language onto target</td></tr><tr><td/><td colspan=\"2\">language and estimating related probabilities of grammar rules found</td></tr><tr><td/><td>(Section 3.2.3)</td><td/></tr><tr><td/><td>position ( i )</td><td>lemma ( i e )</td><td>tag ( i t )</td></tr><tr><td/><td>1</td><td>these</td><td>DT</td></tr><tr><td/><td>2</td><td>factor</td><td>NNS</td></tr><tr><td/><td>3</td><td>will</td><td>MD</td></tr><tr><td/><td>4</td><td>continue</td><td>VB</td></tr><tr><td/><td>5</td><td>to</td><td>TO</td></tr><tr><td/><td>6</td><td>play</td><td>VB</td></tr><tr><td/><td>7</td><td>a</td><td>DT</td></tr><tr><td/><td>8</td><td>positive</td><td>JJ</td></tr><tr><td/><td>9</td><td>role</td><td>NN</td></tr><tr><td/><td>10</td><td>after</td><td>IN</td></tr><tr><td/><td>11</td><td>its</td><td>PRP$</td></tr><tr><td/><td>12</td><td>return</td><td>NN</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "). Followings are some examples using the 8-tuple representation. The tuple ( ) PP IN NP denotes an inverted prepositional phrase (after its return, ). The tuple (193,8,8,9,9,JJ,positive/ ,S) denotes a terminal bilingual adjective (positive, )",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td/><td>193,1, 2, 4, 5,</td><td>, NP DT NN</td><td>,S</td><td>denotes</td></tr><tr><td colspan=\"3\">a straight bilingual noun phrase (these factors,</td><td colspan=\"2\">) in sentence 193. Similarly, the tuple</td></tr><tr><td>( 193,10,12,1, 3, ,</td><td>, I</td><td>)</td><td/></tr></table>",
"type_str": "table"
},
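For clarity, the 8-tuple representation illustrated above can be written down explicitly. The field names below are our own reading of the table, not identifiers from the paper, and the Chinese strings lost in extraction are left out.

```python
from typing import NamedTuple

class BiConstituent(NamedTuple):
    """One projected bilingual constituent: sentence id, source span,
    target span, syntactic label, right-hand side, and orientation."""
    r: int       # sentence pair number
    i1: int      # source span start
    i2: int      # source span end
    j1: int      # target span start
    j2: int      # target span end
    label: str   # syntactic label (e.g. NP, PP, JJ)
    rhs: str     # right-hand-side constituents or a terminal word pair
    rel: str     # 'S' for straight, 'I' for inverted

# The examples from the table (target-language words omitted):
np_193 = BiConstituent(193, 1, 2, 4, 5, "NP", "DT NN", "S")
pp_193 = BiConstituent(193, 10, 12, 1, 3, "PP", "IN NP", "I")
jj_193 = BiConstituent(193, 8, 8, 9, 9, "JJ", "positive/...", "S")
```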
"TABREF4": {
"text": "Some alignments after applying a word-aligning strategy.",
"num": null,
"html": null,
"content": "<table><tr><td># of sentence pair</td><td>i</td><td>j</td><td>i e</td><td>f</td><td>j</td><td>t</td><td>i</td></tr><tr><td>406</td><td>10</td><td>5</td><td>in</td><td/><td/><td colspan=\"2\">IN</td></tr><tr><td>406</td><td>11</td><td>8</td><td>overseas</td><td/><td/><td colspan=\"2\">JJ</td></tr><tr><td>406</td><td>12</td><td>18</td><td>Chinese</td><td/><td/><td colspan=\"2\">JJ</td></tr><tr><td>406</td><td>13</td><td>10</td><td>community</td><td/><td/><td colspan=\"2\">NN</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"text": "Some alignments by applying an aligning strategy on corpus C .",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"10\">2 , , , , , , , 1 2 r i i j j L rhs rel ! H</td><td/><td/></tr><tr><td colspan=\"13\">If ( rhs t ! )// t stands for terminating bilingual word pair</td></tr><tr><td>P</td><td>(</td><td>L</td><td>!</td><td>[</td><td colspan=\"2\">1 R R 2</td><td>] )</td><td>=</td><td colspan=\"4\">( count *,*,*,*,*, , ( L R R 1 2 H</td><td>) , S ;</td><td>H</td><td>)</td></tr><tr><td>P</td><td>(</td><td>L</td><td>!</td><td/><td colspan=\"2\">1 R R 2</td><td>)</td><td>=</td><td colspan=\"4\">( count *,*,*,*,*, , ( L R R 1 2 H</td><td>) , I ;</td><td>H</td><td>)</td></tr><tr><td>Else</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>P</td><td>(</td><td>L</td><td colspan=\"3\">) ! = t</td><td colspan=\"5\">( count *,*,*,*,*, , , S ; ) ( L t H</td><td>H</td><td>)</td></tr><tr><td colspan=\"6\"># of sentence pair</td><td/><td/><td/><td>i</td><td>j</td><td/><td>i e</td><td>f</td><td>j</td><td>t</td><td>i</td></tr><tr><td colspan=\"2\">1</td><td/><td/><td/><td/><td/><td/><td/><td>1</td><td>1</td><td/><td>solemn</td><td>JJ</td></tr></table>",
"type_str": "table"
},
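The count-based estimation shown in the table above can be approximated with a simple relative-frequency computation over the collected 8-tuples. The normalization by left-hand-side label below is our assumption, since the table only shows the raw count expressions.

```python
from collections import Counter, defaultdict

def estimate_rule_probs(tuples):
    """Relative-frequency estimates for ITG rules from 8-tuples
    (r, i1, i2, j1, j2, L, rhs, rel).

    Counts are taken over (L, rhs, rel); the assumed normalization is
    by the total number of tuples whose left-hand side is L.
    """
    counts = Counter((L, rhs, rel) for (_, _, _, _, _, L, rhs, rel) in tuples)
    totals = defaultdict(int)
    for (L, _, _), c in counts.items():
        totals[L] += c
    return {rule: c / totals[rule[0]] for rule, c in counts.items()}

# Usage with tuples collected during projection (hypothetical variable):
# probs = estimate_rule_probs(collected_tuples)
# probs[("NP", "DT NN", "S")]  ->  estimate of P(NP -> [DT NN])
```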
"TABREF11": {
"text": "Statistics on English side. Statistics on Chinese side.As for word alignment, we used bidirectional ranking (BDR) as the word-aligning strategy in training process, which means in a sentence pair,",
"num": null,
"html": null,
"content": "<table><tr><td>sentence length</td><td>number of sentence</td><td>percentage</td></tr><tr><td>0~5</td><td>93,354</td><td>12.6%</td></tr><tr><td>6~10</td><td>118,513</td><td>16.0%</td></tr><tr><td>11~15</td><td>70,634</td><td>9.5%</td></tr><tr><td>16~20</td><td>66,431</td><td>9.0%</td></tr><tr><td>21~25</td><td>74,813</td><td>10.1%</td></tr><tr><td>26~30</td><td>71,902</td><td>9.7%</td></tr><tr><td>31~35</td><td>63,816</td><td>8.6%</td></tr><tr><td>36~</td><td>180,456</td><td>24.4%</td></tr></table>",
"type_str": "table"
},
"TABREF13": {
"text": "Alignment results of the test data. Our system vs. Giza++.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>Recall</td><td>Precision</td><td>AER</td><td>F-measure</td></tr><tr><td>The proposed method</td><td>0.55</td><td>0.80</td><td>0.34</td><td>0.65</td></tr><tr><td>Giza++</td><td>0.37</td><td>0.87</td><td>0.48</td><td>0.52</td></tr></table>",
"type_str": "table"
},
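For reference, the alignment error rate (AER) and F-measure reported in this table are standard word alignment metrics. The sketch below shows how they are commonly computed from sure (S) and possible (P) gold links; whether the paper's gold standard distinguishes sure from possible links is not stated, so treating P = S is a possible simplification.

```python
def alignment_metrics(A, S, P=None):
    """Precision, recall, F-measure and AER for a predicted alignment A.

    A, S, P are sets of (source_index, target_index) links; S are sure
    gold links and P possible gold links (P defaults to S).
    """
    P = P if P is not None else S
    precision = len(A & P) / len(A)
    recall = len(A & S) / len(S)
    f_measure = 2 * precision * recall / (precision + recall)
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, f_measure, aer

# Tiny usage example with made-up links:
A = {(1, 1), (2, 3), (3, 2)}
S = {(1, 1), (2, 3), (3, 4), (4, 5)}
print(alignment_metrics(A, S))
```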
"TABREF14": {
"text": "Alignment results of the test data. Giza++ with ITG vs. Giza++.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>Recall</td><td>Precision</td><td>AER</td><td>F measure</td></tr><tr><td>Giza++ with ITG</td><td>0.58</td><td>0.87</td><td>0.30</td><td>0.70</td></tr><tr><td>Giza++</td><td>0.37</td><td>0.87</td><td>0.48</td><td>0.52</td></tr></table>",
"type_str": "table"
},
"TABREF15": {
"text": "Points of phrase-level evaluation.",
"num": null,
"html": null,
"content": "<table><tr><td>syntactic label</td><td>phrase alignment</td><td>point</td></tr><tr><td>O</td><td>O</td><td>1.0</td></tr><tr><td>O</td><td>X</td><td>0.5</td></tr><tr><td>X</td><td>O</td><td>0.5</td></tr><tr><td>X</td><td>X</td><td>0.0</td></tr><tr><td>The first row in</td><td/><td/></tr></table>",
"type_str": "table"
}
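The scoring scheme in the phrase-level evaluation table can be restated as a one-line function; the boolean arguments below are our own naming, chosen only to mirror the table.

```python
def phrase_point(label_correct: bool, alignment_correct: bool) -> float:
    """Phrase-level evaluation points: 1.0 if both the syntactic label and
    the phrase alignment are correct, 0.5 if exactly one is, else 0.0."""
    return 0.5 * label_correct + 0.5 * alignment_correct

# Matches the table: (O, O) -> 1.0, (O, X) -> 0.5, (X, O) -> 0.5, (X, X) -> 0.0
assert phrase_point(True, True) == 1.0
assert phrase_point(True, False) == 0.5
assert phrase_point(False, False) == 0.0
```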
}
}
}