{
"paper_id": "O03-5004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:01:59.923982Z"
},
"title": "Preparatory Work on Automatic Extraction of Bilingual Multi-Word Units from Parallel Corpora",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhongguancun Rd",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": "chenbx@iis.ac.cn"
},
{
"first": "Limin",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhongguancun Rd",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic extraction of bilingual Multi-Word Units is an important subject of research in the automatic bilingual corpus alignment field. There are many cases of single source words corresponding to target multi-word units. This paper presents an algorithm for the automatic alignment of single source words and target multi-word units from a sentence-aligned parallel spoken language corpus. On the other hand, the output can be also used to extract bilingual multi-word units. The problem with previous approaches is that the retrieval results mainly depend on the identification of suitable Bi-grams to initiate the iterative process. To extract multi-word units, this algorithm utilizes the normalized association score difference of multi target words corresponding to the same single source word, and then utilizes the average association score to align the single source words and target multi-word units. The algorithm is based on the Local Bests algorithm supplemented by two heuristic strategies: excluding words in a stop-list and preferring longer multi-word units.",
"pdf_parse": {
"paper_id": "O03-5004",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic extraction of bilingual Multi-Word Units is an important subject of research in the automatic bilingual corpus alignment field. There are many cases of single source words corresponding to target multi-word units. This paper presents an algorithm for the automatic alignment of single source words and target multi-word units from a sentence-aligned parallel spoken language corpus. On the other hand, the output can be also used to extract bilingual multi-word units. The problem with previous approaches is that the retrieval results mainly depend on the identification of suitable Bi-grams to initiate the iterative process. To extract multi-word units, this algorithm utilizes the normalized association score difference of multi target words corresponding to the same single source word, and then utilizes the average association score to align the single source words and target multi-word units. The algorithm is based on the Local Bests algorithm supplemented by two heuristic strategies: excluding words in a stop-list and preferring longer multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the natural language processing field, which includes machine translation, machine assistant translation, bilingual lexicon compilation, terminology, information retrieval, natural language generation, second language teaching etc., the automatic extraction of bilingual multi-word units (steady collocations, multi-word phrases, multi-word terms etc.) is an and during the mid-and late-1990's, many researchers began to research the automatic construction of a bilingual translation lexicon [Fung 1995; Wu et al. 1995; Hiemstra 1996; Melamed 1996 etc.] Their works have focused on the alignment of single words. At the same time, the extraction of multi-word units in singular languages has been also studied. Church utilized mutual information to evaluate the degree of association between two words [Church 1990 ]; hence, mutual information has played an important role in multi-word unit extraction research, and it is used most often with this technology by means of a statistical method. Many researchers [Smadja 1993; Nagao et al. 1994; Kita et al. 1994; Zhou et al. 1995; Shimohata et al. 1997; Yamamoto et al. 1998 ] have utilized mutual information (or the transformation of mutual information) as an important parameter to extract multi-word units. The shortcoming of these methods is that low frequency multi-word units are easy to eliminate, and the output of extraction mainly depends on the verification of suitable Bi-grams when the iterative algorithm initiates.",
"cite_spans": [
{
"start": 495,
"end": 506,
"text": "[Fung 1995;",
"ref_id": "BIBREF3"
},
{
"start": 507,
"end": 522,
"text": "Wu et al. 1995;",
"ref_id": "BIBREF17"
},
{
"start": 523,
"end": 537,
"text": "Hiemstra 1996;",
"ref_id": "BIBREF5"
},
{
"start": 538,
"end": 556,
"text": "Melamed 1996 etc.]",
"ref_id": null
},
{
"start": 805,
"end": 817,
"text": "[Church 1990",
"ref_id": "BIBREF1"
},
{
"start": 1014,
"end": 1027,
"text": "[Smadja 1993;",
"ref_id": "BIBREF13"
},
{
"start": 1028,
"end": 1046,
"text": "Nagao et al. 1994;",
"ref_id": "BIBREF10"
},
{
"start": 1047,
"end": 1064,
"text": "Kita et al. 1994;",
"ref_id": "BIBREF6"
},
{
"start": 1065,
"end": 1082,
"text": "Zhou et al. 1995;",
"ref_id": null
},
{
"start": 1083,
"end": 1105,
"text": "Shimohata et al. 1997;",
"ref_id": "BIBREF11"
},
{
"start": 1106,
"end": 1126,
"text": "Yamamoto et al. 1998",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Background of Automatic Extraction of Bilingual Multi-Word Units",
"sec_num": "1.1"
},
{
"text": "Automatic extraction of bilingual multi-word units is based on the automatic extraction of bilingual word and multi-word units in singular languages. Research in this field has also proceeded [Smadja et al. 1996; Haruno et al. 1996; Melamed 1997 etc] , but the problem with this approach is that it relies on statistical methods more than the characteristics of the language per se and is mainly limited to the extraction of noun phrases.",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "[Smadja et al. 1996;",
"ref_id": "BIBREF14"
},
{
"start": 213,
"end": 232,
"text": "Haruno et al. 1996;",
"ref_id": "BIBREF4"
},
{
"start": 233,
"end": 250,
"text": "Melamed 1997 etc]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Background of Automatic Extraction of Bilingual Multi-Word Units",
"sec_num": "1.1"
},
{
"text": "Because of the above problems and the fact that Chinese-English corpuses are commonly small, we provide an algorithm that uses the average association score and normalized association score difference. We also apply the Local Bests algorithm, stopword filtration and longer unit preference methods to extract Chinese or English multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Background of Automatic Extraction of Bilingual Multi-Word Units",
"sec_num": "1.1"
},
{
"text": "In research on the results produced by single-English-word to single-Chinese-word alignment, we have found an interesting phenomenon: During the phase of Chinese word segmentation, if the translation of an English word (\"A\") comprises of several Chinese words (\"BCD\"), the mutual information and the t-score for each \"B-A, C-A, D-A\" mapping are both very high and close to each other. Thus, we can use the average association score and the normalized association score difference to extract the translation equivalent pairs of single-English-word to multiple-Chinese-word mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Object of Our Research",
"sec_num": "1.2"
},
{
"text": "For example, when names and professional terms are translated, \"Patterson\" is translated as \" ,\" which includes three entries in a Chinese dictionary (\" ,\" \" ,\" and \" \");",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Object of Our Research",
"sec_num": "1.2"
},
{
"text": "\"Internet\" is translated as \" ,\" which includes three entries in a Chinese dictionary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Object of Our Research",
"sec_num": "1.2"
},
{
"text": "Multi-Word Units from Parallel Corpora (\" ,\" \" ,\" and \" \"). Furthermore, the same situation occurs with some non-professional terms. For example, \"my\" is translated as \" .\" Also, the same rule applies to Chinese-English translation. For example, \" \" is translated as \"get funny,\" and \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "\" as \"get fresh.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "Therefore, the research presented in this paper is focused on single-source-word to multi-target-word-unit alignment. The alignment of bilingual multi-word units will be the focus of our future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "The method we use to align single source words with target multi-word units from a parallel corpus can be divided into the following steps (we use the mutual information and t-score as the association score):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(1) Word segmentation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We do word segmentation first because Chinese has no word delimiters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(2) Calculating the co-occurrence frequency:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "If a word pair appears once in an aligned bilingual sentence pair, one co-occurrence is counted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(3) Computing the association score of single word pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We calculate the mutual information and t-score of the source words and their co-occurrence target words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(4) Calculating the average association score and normalized association score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We calculate the average mutual information and normalized mutual information difference, and the average t-score and normalized t-score difference of every source word and its co-occurrence target words' N-gram (N: 2-7, since most phrases have of 2-6 words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(5) The Local Bests algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We utilize the Local Bests algorithm to eliminate non-local best target multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(6) Stop-word list filtration:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "Some words cannot be used as the first or the last word of a multi-word unit, so we use the stop-word list to filter these multi-word units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(7) Bigger association score preference:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "After the above filtration, from among the remaining multi-word units, we choose N items with the maximal average mutual information and average t-score as the candidate target translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(8) Longer unit preference:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We extract multi-word units but not words, so if the longer word string C 1 entirely contains another shorter word string C 2 , then string C 1 is taken as the translation of the source word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "(9) Lexicon classification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "According to the above four parameters, we classify the lexicons into four levels of translation lexicons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "We will use \"Glasgow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": ",\" which appears in the corpus as shown in Figure 1 , as an example to explain the whole process.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2."
},
{
"text": "The reasons why we choose \"Glasgow\" are: (1) the occurrence frequency of \"Glasgow\" is quite low, only two times, which is easily ignored by the previous algorithm;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. Sentence Example.",
"sec_num": null
},
{
"text": "(2) the Chinese translation of \"Glasgow\" is unique, so the correct extraction of this lemma can prove the accuracy of our algorithm; (3) \"Glasgow\" contains four single-character words, and it will be found later that our algorithm is more effective with multi-word units made up of two words, so here we use \"Glasgow\" to prove that our algorithm is also effective with multi-word units made up of more than two words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. Sentence Example.",
"sec_num": null
},
{
"text": "We used the \"maximum probability word segmentation method\" [Chen 1999 ] and The Grammatical Knowledge-base of Contemporary Chinese published by Peking University [Yu 1998 ]. The idea behind this method is: first find out all the possible words in the input Chinese string on a vocabulary basis and then find out all the possible segmentation paths, from which we can find the best path (with the maximal probability) as the output. We randomly sampled 1000 sentences to check: if we did not take \"un-listed words that are divided\" as an error, then the precision rate was 98.88%; but if it was being taken as an error, the precision rate was ",
"cite_spans": [
{
"start": 59,
"end": 69,
"text": "[Chen 1999",
"ref_id": "BIBREF0"
},
{
"start": 162,
"end": 170,
"text": "[Yu 1998",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Word Segmentation",
"sec_num": "2.1"
},
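As a rough illustration of the maximum-probability idea described above, the following sketch finds the highest-probability segmentation path through a word lattice with dynamic programming. It is only a minimal sketch under stated assumptions, not the [Chen 1999] implementation; the toy dictionary, its probabilities, and the unknown-character floor are illustrative.

```python
import math

def segment(sentence, word_prob):
    """Maximum-probability word segmentation by dynamic programming.

    `word_prob` maps dictionary words to probabilities; the best path is the
    segmentation whose word-probability product is maximal.  Unknown single
    characters get a small floor probability so every input is segmentable.
    """
    n = len(sentence)
    best = [(-math.inf, -1)] * (n + 1)   # (best log-prob ending at i, backpointer)
    best[0] = (0.0, -1)
    max_len = max((len(w) for w in word_prob), default=1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            w = sentence[j:i]
            p = word_prob.get(w, 1e-8 if len(w) == 1 else 0.0)
            if p > 0 and best[j][0] + math.log(p) > best[i][0]:
                best[i] = (best[j][0] + math.log(p), j)
    # recover the best path by following backpointers
    words, i = [], n
    while i > 0:
        j = best[i][1]
        words.append(sentence[j:i])
        i = j
    return words[::-1]

# toy dictionary with illustrative probabilities
lexicon = {"我": 0.02, "叫": 0.01, "格拉斯哥": 0.0001, "格": 0.001, "拉": 0.001}
print(segment("我叫格拉斯哥", lexicon))  # ['我', '叫', '格拉斯哥']
```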
{
"text": "There were many translation sentence pairs in the corpus. For each possible word pair in these translation sentence pairs, the higher the probability of appearance it had, the higher the probability it had of being the correct translation word pair. We built a co-occurrence model to count the number of appearances: it was counted as a co-occurrence each time the word pair appears in a sentence pair. The reasons are as follows: First, the length of a sentence in spoken language is usually shorter than that in a written language; for example, in the corpus DECC1.0, the average length of English sentences is 7.07 words, and the average length of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Co-occurrence Frequency",
"sec_num": "2.2"
},
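A minimal sketch of the per-sentence-pair co-occurrence counting described above, assuming the corpus is already segmented and available as a list of (English words, Chinese words) sentence pairs; the function and variable names are illustrative, not from the paper, and presence is counted once per sentence pair.

```python
from collections import Counter
from itertools import product

def count_cooccurrences(sentence_pairs):
    """Count word frequencies and word-pair co-occurrences over aligned
    sentence pairs; a pair (s, t) is counted once for every aligned sentence
    pair in which both s and t appear, as in Section 2.2."""
    src_freq, tgt_freq, pair_freq = Counter(), Counter(), Counter()
    for src_sent, tgt_sent in sentence_pairs:
        src_freq.update(set(src_sent))
        tgt_freq.update(set(tgt_sent))
        for s, t in product(set(src_sent), set(tgt_sent)):
            pair_freq[(s, t)] += 1
    return src_freq, tgt_freq, pair_freq

# toy example in the paper's own A <-> BCD notation
pairs = [
    (["A", "x"], ["B", "C", "D"]),
    (["A", "y"], ["B", "C", "D", "E"]),
]
src_freq, tgt_freq, pair_freq = count_cooccurrences(pairs)
print(pair_freq[("A", "B")])  # -> 2
```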
{
"text": "Having calculated the word pair's co-occurrence frequency and the frequency of every word, we use formulas (1) and (2) to calculate the mutual information",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Mutual Information and T-Score",
"sec_num": "2.3"
},
{
"text": ") , ( T S MI and t-score ) , ( T S t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Mutual Information and T-Score",
"sec_num": "2.3"
},
{
"text": "of any source word and its single target word. As for the association verifying score [Fung 1995] , the higher the t-score, the higher the degree of association between S and T:",
"cite_spans": [
{
"start": 86,
"end": 97,
"text": "[Fung 1995]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Mutual Information and T-Score",
"sec_num": "2.3"
},
{
"text": ") Pr( ) Pr( ) , Pr( log ) , ( T S T S T S MI = , ) , Pr( ) Pr( ) Pr( ) , Pr( ) , ( 1 T S T S T S T S t N \u2212 \u2248 . 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Mutual Information and T-Score",
"sec_num": "2.3"
},
{
"text": "Here, N is the total number of sentence pairs in the corpus, S is the source word, T is the target word, and Pr(.) is the probability of the source word or target word. For the \"Glasgow\" example, the outcome of Formula (1) is shown in Figure 4 , and the outcome of Formula (2) is shown in Figure 5 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 243,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 289,
"end": 297,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculate the Mutual Information and T-Score",
"sec_num": "2.3"
},
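A minimal sketch of formulas (1) and (2), assuming probabilities are estimated as relative frequencies over the N aligned sentence pairs (e.g. from counts like those in the previous sketch); the example numbers are illustrative only.

```python
import math

def mi_and_tscore(pair_count, src_count, tgt_count, n_pairs):
    """Mutual information and t-score of a (source word, target word) pair,
    following formulas (1) and (2); probabilities are relative frequencies
    over the N aligned sentence pairs."""
    p_st = pair_count / n_pairs
    p_s = src_count / n_pairs
    p_t = tgt_count / n_pairs
    mi = math.log(p_st / (p_s * p_t))
    t = (p_st - p_s * p_t) / math.sqrt(p_st / n_pairs)
    return mi, t

# e.g. a source word and a target word that each occur twice and co-occur
# twice in a corpus of 14,974 sentence pairs (corpus size from the paper)
print(mi_and_tscore(2, 2, 2, 14974))
```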
{
"text": "The Average Association Score (AAS) is the average association score of the source word and every word in the target language N-gram. It can measure the association degree between the source language and target language. The Normalized Difference (ND) is the normalized difference for the association score of the source word and every word in the target language N-gram. It can measure the internal association of the target multiword units. Therefore, we use the AAS and ND to build the association model of the single source word and target multiword units. We compute the average mutual information, normalized mutual information difference, average t-score, and normalized t-score difference of the consecutive Chinese word string N-gram (N: 2-7), which co-occurs with \"Glasgow.\" Vintar's research indicated that the length of 95% of English phrases and Slavic phrases is between 2-6 words [Vintar et al. 2001] , and from our experience, we can conclude that Chinese multiword units of more than 6 words are also very rare. To reduce the complexity of calculation, we only consider multiword units with 6 words or less. Suppose a Chinese word string C (chunk) is expressed by the following symbols: ",
"cite_spans": [
{
"start": 895,
"end": 915,
"text": "[Vintar et al. 2001]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Average Association Score and its Normalized Difference",
"sec_num": "2.4"
},
{
"text": "n i W W W W C ... ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Average Association Score and its Normalized Difference",
"sec_num": "2.4"
},
{
"text": "\u2211 = = n i i T W MI n T C AMI 1 ) , ( 1 ) , ( , 4 \u2211 = \u2212 \u00d7 = n i i T C AMI T W MI T C AMI n T C MID 1 | ) , ( ) , ( | ) , ( 1 ) , ( , 5 \u2211 = = n i i T W t n T C AT 1 ) , ( 1 ) , ( , 6 \u2211 = \u2212 \u00d7 = n i i T C AT T W t T C AT n T C TD 1 | ) , ( ) , ( | ) , ( 1 ) , ( . 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculate the Average Association Score and its Normalized Difference",
"sec_num": "2.4"
},
{
"text": "Here, t(.) is the t-score, MI(.) is the mutual information, T is the target word. The results obtained using formulae (4)-(7) are shown in Table 1 . (There were 108 outputs from each parameter; we chose only 16 that were connected with the correct answer \"Glasgow\" and could be used to explain the algorithm.)",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculate the Average Association Score and its Normalized Difference",
"sec_num": "2.4"
},
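A minimal sketch of formulas (4)-(7) for one candidate chunk; `mi` and `tscore` are assumed lookup functions giving the single-word scores from Section 2.3, and the sample numbers below are illustrative placeholders, not the values in Table 1.

```python
def chunk_scores(mi, tscore, chunk_words, single_word):
    """AMI, MID, AT and TD of a word string (chunk) paired with a single word
    on the other-language side, following formulas (4)-(7); the paper writes
    the single word as T and the chunk words as W_1..W_n."""
    n = len(chunk_words)
    mis = [mi(w, single_word) for w in chunk_words]
    ts = [tscore(w, single_word) for w in chunk_words]
    ami = sum(mis) / n
    at = sum(ts) / n
    mid = sum(abs(m - ami) for m in mis) / (n * ami)  # normalized MI difference
    td = sum(abs(t - at) for t in ts) / (n * at)      # normalized t-score difference
    return ami, mid, at, td

# illustrative single-word scores for a chunk "B C D" paired with source word "A"
MI = {"B": 8.8, "C": 8.5, "D": 8.9}
T = {"B": 1.40, "C": 1.38, "D": 1.41}
print(chunk_scores(lambda w, s: MI[w], lambda w, s: T[w], ["B", "C", "D"], "A"))
```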
{
"text": "Currently, the algorithms for extracting multiword units are mainly based on setting a global threshold for some association score (mutual information, entropy, mutual expectation etc.), and if only the association score of the checked word string is bigger or smaller than that threshold, then the word string is considered to be a multiword unit. However, the threshold method has many limitations because the threshold will change with the type of language, the size of the corpus, and the difference of the selected association score, and because of the threshold cannot be easily chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Bests Algorithm",
"sec_num": "2.5"
},
{
"text": "The Local Bests algorithm [Silva et al. 1999 ] is a more robust, flexible and finely tuned approach to the extraction of multiword units, which is based on the local context, rather than on the use of global threshold methods. If a word string (n-gram) is a multiword unit, there should be stronger internal association, and the association score will be high. Also, as a local structure, a multiword unit can show the best association in a local context. Thus, when we find the association score of a word string that is high in a local context, we may consider it as a phrase. For example, there is a strong internal association within the Bi-gram <ice, cream>, i.e., between the words ice and cream. On the other hand, one cannot say that there is a strong internal association within the Bi-gram <the, in>. Therefore, let us suppose that there is a function S(.) that can measure the internal association of each n-gram. In our algorithm, it is better if AMI and AT are bigger, and if MID and TD are smaller; every n-gram of the local best co-occurring with \"Glasgow\" is shown in boldface in Table 1 .",
"cite_spans": [
{
"start": 26,
"end": 44,
"text": "[Silva et al. 1999",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 1096,
"end": 1103,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Local Bests Algorithm",
"sec_num": "2.5"
},
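The following sketch applies the Local Bests test to one candidate n-gram, following the criterion stated in the paper's algorithm description: with x the (n-1)-grams contained in C and y the (n+1)-grams containing C, keep C if (length(C) = 2 and S(C) > S(y)) or (length(C) > 2 and S(x) <= S(C) and S(C) > S(y)). It assumes candidates are word tuples (as in Table 1) and that a larger score is better, so MID and TD would be negated before use; the names are illustrative.

```python
def is_local_best(chunk, candidates, score):
    """Local Bests test for one candidate chunk (a tuple of words).

    `candidates` is the set of candidate n-grams co-occurring with the source
    word, and `score` maps an n-gram to its internal association score S(.),
    larger assumed better."""
    n = len(chunk)
    inside = [g for g in candidates
              if len(g) == n - 1 and g in (chunk[:-1], chunk[1:])]
    outside = [g for g in candidates
               if len(g) == n + 1 and chunk in (g[:-1], g[1:])]
    s_c = score(chunk)
    if any(score(y) >= s_c for y in outside):
        return False
    return n == 2 or all(score(x) <= s_c for x in inside)
```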
{
"text": "As we can see in the table, the normalized mutual Information difference of \" \" is not a global best score, but it is a local best score, so we may exclude this Multi-Word Unit if we use the global threshold but not the local best algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Bests Algorithm",
"sec_num": "2.5"
},
{
"text": "There are still two main problems with using the Local Bests algorithm to extract multiword units: (1) A fraction of the extracted multiword units are not correct, such as \" \" and \" ,\" with improper words at the beginning or the end of a multiword unit; the same is true with English multiword units, such as \"and, or\" appearing at the beginning of a multiword unit, and \"the, may, if\" at the end of a multiword unit. (2) For a source word, several multiword units are extracted, but not all of them are correct translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. AMI, MID, AT and TD of Chinese N-gram (N=2~7) co-occurring with \"Glasgow.\"",
"sec_num": null
},
{
"text": "We utilize a stop-word list to solve the first problem, and the methods based on the association score best and longer unit preference are used to solve the second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1. AMI, MID, AT and TD of Chinese N-gram (N=2~7) co-occurring with \"Glasgow.\"",
"sec_num": null
},
{
"text": "A stop-word is a word that cannot be used at the beginning or the end of a multiword unit. By analyzing the parts of speech and the characteristics of specific words arrangements, we manually create four types of stop-word lists: non-beginning and non-ending Chinese words, and non-beginning and non-ending English words. Samples of lists are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stop-word List Filtration",
"sec_num": "2.6"
},
{
"text": "Using the stop-word lists to filter multiword units, we can the first problem mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2. Stopword List.",
"sec_num": null
},
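A minimal sketch of the stop-word filtration; the two sample lists only reuse the English examples mentioned earlier ("and, or" cannot begin a unit; "the, may, if" cannot end one) and stand in for the much larger manually built lists of Table 2.

```python
# illustrative English stop-word lists in the spirit of Table 2
NON_BEGINNING = {"and", "or"}
NON_ENDING = {"the", "may", "if"}

def passes_stopword_filter(unit, non_beginning=NON_BEGINNING, non_ending=NON_ENDING):
    """Reject a candidate multi-word unit whose first word is in the
    non-beginning list or whose last word is in the non-ending list."""
    return unit[0] not in non_beginning and unit[-1] not in non_ending

candidates = [("ice", "cream"), ("and", "cream"), ("in", "the")]
print([u for u in candidates if passes_stopword_filter(u)])  # [('ice', 'cream')]
```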
{
"text": "The association score (mutual information and t-score) is a measure used to judge whether the source word and the target multiword unit are translations of each other, so if a source word corresponds to several target multiword units, then the target multiword unit with a higher association score is more likely to be a translation of this source word. Then we can choose from among the remaining multiword units after two filtrations and take N items with the maximal average mutual information and average t-score as the candidate target translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Score Best Filtration",
"sec_num": "2.7"
},
{
"text": "According to the results of sample tests, after local bests filtration, the association score of the correct target translation is usually among the best three scores, so we assume that N equals 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Association Score Best Filtration",
"sec_num": "2.7"
},
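A minimal sketch of the N-best association score filtration with N = 3 as chosen above; the candidate units and scores are illustrative.

```python
def best_n_candidates(units, avg_score, n=3):
    """Keep the N candidate target units with the largest average association
    score (average mutual information or average t-score)."""
    return sorted(units, key=avg_score, reverse=True)[:n]

# illustrative average scores for a handful of surviving candidates
scores = {("ice", "cream"): 7.2, ("cream",): 5.1, ("the", "ice", "cream"): 6.8, ("an", "ice"): 2.0}
print(best_n_candidates(list(scores), scores.get, n=3))
```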
{
"text": "A short unit is more likely to be a word [Tanapong et al. 2000] , but for the following reasons, we apply the Longer Units Preference: (1) Our algorithm determines that the multiword units of two words, especially the two words of the maximal association score with the source word, have the higher average association score and the lower association score difference. For example we can see that \" \" is better than \" \" based on four parameters. 2We extract multiword units but not words, and if a longer word string has the local best result, then this word string is a comparatively steady structure. Therefore, if a longer words string C 1 entirely contains another shorter word string C 2 , then string C 1 is taken as the translation of the source word. This method might choose Multi-Word Units that are longer than necessary, a situation we call \"translation units expansion,\" but it is useful for the extraction of bilingual",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "[Tanapong et al. 2000]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Longer Units Preference",
"sec_num": "2.8"
},
{
"text": "Multi-Word Units, and it is can be used in the phase of bilingual Multi-Word Unit extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longer Units Preference",
"sec_num": "2.8"
},
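A minimal sketch of the longer units preference: any candidate that is entirely contained in a longer candidate is dropped so the longer string survives. The "get fresh" example reuses the translation pair mentioned earlier and is illustrative only.

```python
def contains(longer, shorter):
    """True if the word string `shorter` occurs contiguously inside `longer`."""
    n, m = len(longer), len(shorter)
    return any(tuple(longer[i:i + m]) == tuple(shorter) for i in range(n - m + 1))

def prefer_longer_units(candidates):
    """Drop every candidate that is entirely contained in another, longer
    candidate, so the longer word string is kept as the translation."""
    return [c for c in candidates
            if not any(len(d) > len(c) and contains(d, c) for d in candidates)]

print(prefer_longer_units([("fresh",), ("get", "fresh")]))  # [('get', 'fresh')]
```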
{
"text": "Thus, the work of extracting a multiword unit translation of every source word is basically accomplished. There are four parameters used in the algorithm. The Average Association Score can measure the association degree between the source language and target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Classification",
"sec_num": "2.9"
},
{
"text": "The Normalized Difference can measure the internal association of the target multiword units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Classification",
"sec_num": "2.9"
},
{
"text": "If a pair of bilingual word strings can match more parameters after Local Best and N-bests association score filtering, then it must have higher probability of being correct. Based on the four parameters, four bilingual lexicons are constructed, and they can be subjected to the merge application or intersection application according to different application requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Classification",
"sec_num": "2.9"
},
{
"text": "We calculate four outcome tables using Formulae (4), (5), (6) and (7), each of them based on a certain measure. Then we pick translation word pairs from those four tables to form five lexicons. The 1 st level lexicon composed of word pairs which has appeared only once in the tables; the 2 nd level lexicon composed of word pairs which has appeared twice in the tables; and the same rule applies to the 3 rd and 4 th level lexicons. The higher level one word pair belongs to, the more precision it has. The 0 th lexicon is a union of the other four lexicons; that is, any word pairs that have appeared in the tables go into the 0 th lexicon. If a source word has several target entries, we calculate the co-occurrence frequency of every entry with the source word in the corpus and then normalize the probability of every entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Classification",
"sec_num": "2.9"
},
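A minimal sketch of the lexicon classification step: a (source word, target unit) pair that survives filtering under k of the four parameter tables goes into the k-th level lexicon, and the 0th level is the union. The sample pairs are illustrative placeholders, not entries of the actual lexicons.

```python
from collections import defaultdict

def classify_lexicon(tables):
    """Build the 0th-4th level lexicons from the four outcome tables
    (one per parameter: AMI, MID, AT, TD), as in Section 2.9.

    `tables` is a list of four sets of (source word, target unit) pairs that
    survived Local Bests and N-best filtering under each parameter."""
    counts = defaultdict(int)
    for table in tables:
        for pair in table:
            counts[pair] += 1
    levels = {0: set(counts)}  # 0th level: union of all pairs
    for k in range(1, 5):
        levels[k] = {pair for pair, c in counts.items() if c == k}
    return levels

tables = [
    {("A", "B C D"), ("E", "F")},  # AMI table (illustrative)
    {("A", "B C D")},              # MID table
    {("A", "B C D")},              # AT table
    {("A", "B C D")},              # TD table
]
print(classify_lexicon(tables)[4])  # {('A', 'B C D')}
```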
{
"text": "The bilingual corpus we used was DECC1.0, which consists mostly of daily life dialogues, including 14,974 aligned bilingual sentence pairs and a total of 1,039,183 bytes. In this corpus,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Corpus",
"sec_num": "3.1"
},
{
"text": "Multi-Word Units from Parallel Corpora there are 7,491 English word types and 7,344 Chinese word types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "Taking English as the source language and Chinese as the target language, we provide an example of the 4th level lexicon and the 0th level lexicon in Figures 6 and Figure 7 . There is no uniform method for calculating the precision of translation lexicons, so we take the following approach: the corpus is the measure -if and only if the lexicon entry has an exact match in the corpus, it is taken as correct. For example, the meaning of \"fifty-fifty\" in the English-Chinese dictionary is \" , , ,\" and in the corpus the corresponding translation of \"fifty-fifty\" is \" ,\" so we consider that the translation \"fifty-fifty: \" in Figure 6 is correct, but in Figure 7 , \"Adam: \" is considered to be incorrect because in the corpus, the pair is \"Adam:",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 172,
"text": "Figures 6 and Figure 7",
"ref_id": null
},
{
"start": 626,
"end": 634,
"text": "Figure 6",
"ref_id": null
},
{
"start": 654,
"end": 662,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexicon Evaluation",
"sec_num": "3.2"
},
{
"text": ".\" The recall rate is the number of English words in each lexicon divided by the number of all the English words in the whole corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Evaluation",
"sec_num": "3.2"
},
{
"text": "The F-measure is an important parameter for balancing precision and recall [Langlais et al. 1998 ]. Table 3 shows the precision, recall and F-measure results of the English-Chinese, Chinese-English 0~4 level lexicons. For lexicons that had more than 200 entries, we randomly chose 200 entries from each of them; for those that had less than 200 entries, we used all the entries for calculation: \"E-C\" lexicons take the single-English-word as the source language and the multi-Chinese-word unit as the target language, and vice versa.",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "[Langlais et al. 1998",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 3",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Lexicon Evaluation",
"sec_num": "3.2"
},
{
"text": "precision recall precision recall F + \u00d7 = 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Evaluation",
"sec_num": "3.2"
},
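A minimal sketch of the evaluation just described: an entry counts as correct only if it exactly matches a translation attested in the corpus, recall divides the number of source words covered by the number of source word types in the corpus, and F combines the two as in the formula above. The 200-entry sampling for large lexicons is omitted here, and the toy data are illustrative placeholders.

```python
def evaluate_lexicon(lexicon, gold, total_source_words):
    """Precision, recall and F-measure for one translation lexicon.

    `lexicon` maps a source word to its extracted target unit(s); an entry is
    correct only if one of its targets exactly matches a translation attested
    in the corpus (`gold`, also a mapping to sets of strings)."""
    entries = list(lexicon.items())
    correct = sum(1 for src, targets in entries
                  if any(t in gold.get(src, set()) for t in targets))
    precision = correct / len(entries) if entries else 0.0
    recall = len(lexicon) / total_source_words
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# toy example: 2 entries, one correct, out of 4 source word types in the corpus
lex = {"fifty-fifty": {"X"}, "Adam": {"Y"}}
gold = {"fifty-fifty": {"X"}, "Adam": {"Z"}}
print(evaluate_lexicon(lex, gold, total_source_words=4))  # (0.5, 0.5, 0.5)
```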
{
"text": "By analyzing the precision and recall results, and the lemmas of all levels of lexicons, we reached the following conclusions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Result",
"sec_num": "3.3"
},
{
"text": "(1) There are many lemmas satisfying one qualification (viz. the 1st level lexicon).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Result",
"sec_num": "3.3"
},
{
"text": "Almost every English word and Chinese word and expression has at least one target word string satisfying the local best and other qualifications, but the precision of the 1st level lexicon is very low. This shows that (1) depending on a single qualification is not sufficient to construct a bilingual lexicon with high precision, and that (2) not every source word has a corresponding target phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Result",
"sec_num": "3.3"
},
{
"text": "(2) Compared with the 1st level lexicon, the precision of the 2nd level lexicon is greatly increased. According to the sketchy statistics, the two qualifications satisfied by most of the correct portion of the 2nd level lexicon are mutual information and t-score, which shows that for a certain parameter (mutual information or t-score), simultaneously using the difference and average value can improve the results greatly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Result",
"sec_num": "3.3"
},
{
"text": "(3) Compared with the 2nd level lexicon, the precision of the 3rd level lexicon is also greatly increased and recall is decreased, which shows that after one parameter has been satisfied, if a qualification of another parameter can be also satisfied, then the translation is very likely to be correct. In similar works, many other researchers needed to consider multiple parameters, and the selection of parameters was very important. From early works on word alignment and our current work on phrase extraction, we find that a combination of mutual information and t-score provides a reliable measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Result",
"sec_num": "3.3"
},
{
"text": "Multi-Word Units from Parallel Corpora (4) Only a little manual collation work is needed to make the 4th level lexicon practical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "The English-Chinese 4th level lexicon has only 98 lemmas, which, except for some common phrases with high appearance frequency, are mainly personal names, place names and specialized terms; and all of these terms have low appearance frequency, many occurring only once. This shows that for the extraction of low frequency phrases, our algorithm also is good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "(5) The higher the lexicon's level, the lower its recall rate. This shows that the cases of single source words corresponding to a target word string are comparatively few.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "On the other hand, it shows that our corpus is too small. If the corpus could be increased, the result would be better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "(6) There are cases of \"translation unit expansion\" in all levels of lexicons; for example, in the 4th level lexicon for \"Apollo: ,\" \"Apollo\" corresponds to \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": ",\" but there is only one sentence pair in which \"Apollo\" appears in the whole corpus ( Figure 8 ). In addition, \" \" exists as a sense unit, so according to the longer units preference method, our algorithm selected \"",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 8",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": ".\" It should be made clear that, although \"Apollo: \" is an incorrect lemma, it provides a basis for constructing a translation lexicon in which the source language and the target language are both multi-word phrases. Especially in the 0th level lexicon, we can see that the two translations of \"moon\" are \" \" and \" ,\" from which, using a certain algorithm, we can extract the correct phrase \"Apollo's trip to the moon: ,\" and this will be the focus of our future research. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preparatory Work on Automatic Extraction of Bilingual",
"sec_num": null
},
{
"text": "Because there are many cases of single source words corresponding to target multi-word units, for example, English personal names and place names, we have provided an algorithm for the automatic alignment of single source words and target multi-word units from a sentence-aligned parallel spoken language corpus, which makes a translation lexicon more practical. It will be of great help for machine translation, especially Chinese-English translation. On the other hand, the outputs can also be used to extract bilingual multi-word units. Compared with other similar researches, this algorithm differs in the following ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4.1"
},
{
"text": "(1) It utilizes the normalized association score difference as the criterion for extracting phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4.1"
},
{
"text": "(2) It simultaneously uses the Local Bests algorithm, stop-word filtration, and the longer units preference method to extract phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4.1"
},
{
"text": "(3) Classify lexicon. Different levels of lexicons can be applied to obtain practical translation lexicons or can be used as the basis for further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4.1"
},
{
"text": "Mutual information has been used in many other similar researches, but these processes are mainly based on algorithms of iterating the Bi-gram calculation, and the retrieval results mostly depend on the identification of suitable Bi-grams for the initiation of the iterative process. Errors can accumulate during the iteration process, thus greatly affecting the precision of multi-word phrase extraction [Dias et al. 2000] . Our algorithm solves this problem by calculating the normalized association score difference of the target words corresponding to the same source word. The use of t-score increases the precision of the phrase translation lexicon, and the classification of the lexicon reduces the number of the incorrect entries in the high level lexicon effectively, which makes the translation lexicon more practical.",
"cite_spans": [
{
"start": 405,
"end": 423,
"text": "[Dias et al. 2000]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4.1"
},
{
"text": "Currently, \"translation unit expansion\" is a common problem, and we shall utilize the outcome to extract bilingual multi-word units in our future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Research Plan",
"sec_num": "4.2"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Beijing: Beijing Language and Culture University Press, (It's published in Chinese)",
"authors": [
{
"first": "X",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "97--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, X.H. \"Automatic Analysis of Contemporary Chinese Using Visual C++,\" Beijing: Beijing Language and Culture University Press, (It's published in Chinese) 1999, pp.97-103.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word Association Norms, Mutual Information & Lexicography",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K.W. and P. Hanks. \"Word Association Norms, Mutual Information & Lexicography.\" Computational Linguistics, 16(1) 1990, pp.22-29.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Normalization of Association Measures for Multiword Lexical Unit Extraction",
"authors": [
{
"first": "G",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guillor\u00e9",
"suffix": ""
},
{
"first": "L",
"middle": [
"J"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "International Conference on Artificial and Computational Intelligence for Decision Control and Automation in Engineering and Industrial Applications (ACIDCA'2000)",
"volume": "",
"issue": "",
"pages": "207--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dias, G., Guillor\u00e9,S. and Pereira L.J.G. \"Normalization of Association Measures for Multiword Lexical Unit Extraction.\" International Conference on Artificial and Computational Intelligence for Decision Control and Automation in Engineering and Industrial Applications (ACIDCA'2000). Monastir, Tunisia, 2000, pp. 207-216.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Pattern Matching Method for Finding Noun and Proper Noun Translations from Noisy Parallel Corpora",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "236--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fung P. \"A Pattern Matching Method for Finding Noun and Proper Noun Translations from Noisy Parallel Corpora.\" Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Boston, USA. 1995, pp. 236-243.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Bilingual Collocations by Word-level Sorting",
"authors": [
{
"first": "M",
"middle": [],
"last": "Haruno",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "525--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haruno M., Ikehara S. and Yamazaki T. \"Learning Bilingual Collocations by Word-level Sorting. COLING96. 1996, pp. 525~530.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using Statistical Methods to Create a Bilingual Dictionary",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hiemstra",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiemstra, D. \"Using Statistical Methods to Create a Bilingual Dictionary.\" Master's Thesis, University of Twente. 1996.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Comparative Study of Automatic Extraction of Collocation from Corpora: Mutual Information vs. Cost Criteria",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kita",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Omoto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yano",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Natural Language Processing",
"volume": "1",
"issue": "1",
"pages": "21--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kita, K., Kato, Y., Omoto T. and Yano Y. \"A Comparative Study of Automatic Extraction of Collocation from Corpora: Mutual Information vs. Cost Criteria.\" Journal of Natural Language Processing, 1 (1), 1994, pp. 21-33.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Methods and Practical Issues in Evaluating Alignment Techniques",
"authors": [
{
"first": "P",
"middle": [],
"last": "Langlais",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "V\u00e9ronis",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "711--717",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langlais P., Simard M. and V\u00e9ronis J. \"Methods and Practical Issues in Evaluating Alignment Techniques.\" Proceedings of COLING-ACL, 1998, Montr\u00e9al, Canada, pp. 711-717.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic Construction of Clean Broad-Coverage Translation Lexicons",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "125--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed I. D. \"Automatic Construction of Clean Broad-Coverage Translation Lexicons.\" Conference of the Association for Machine Translation in Americas, Montreal, Canada. 1996, pp. 125-134.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Discovery of Non-Compositional Compounds in Parallel Data",
"authors": [
{
"first": "I",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed I. D. \"Automatic Discovery of Non-Compositional Compounds in Parallel Data.\" Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing. Providence, RI. USA. 1997, pp. 97-108.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A New Method of N-gram Statistics for Large Number of n and Automatic Extraction of Words and Phrases from Large Text Data of Japanese",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "611--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nagao, M. and Mori, S. \"A New Method of N-gram Statistics for Large Number of n and Automatic Extraction of Words and Phrases from Large Text Data of Japanese.\" Proceedings of the 15th International Conference on Computational Linguistics. 1994. pp.611-615.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Retrieving Collocations by Co-occurrences and Word Order Constraints",
"authors": [
{
"first": "Sayori",
"middle": [],
"last": "Shimohata",
"suffix": ""
},
{
"first": "Toshiyuki",
"middle": [],
"last": "Sugio",
"suffix": ""
},
{
"first": "Junji",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 1997,
"venue": "35th Conference of the Association for Computational Linguistics (ACL'97)",
"volume": "",
"issue": "",
"pages": "476--481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sayori Shimohata, Toshiyuki Sugio and Junji Nagata \"Retrieving Collocations by Co-occurrences and Word Order Constraints.\" 35th Conference of the Association for Computational Linguistics (ACL'97), Madrid, 1997, pp. 476-481.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using Localmaxs Algorithm for Extraction of Contiguous and Non-contiguous Multiword Lexical Units",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Silva",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guillor",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Lopes",
"suffix": ""
}
],
"year": 1999,
"venue": "9th Portuguese Conference in Artificial Intelligence, Lecture Notes",
"volume": "",
"issue": "",
"pages": "113--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silva J.F., Dias G., Guillor S. and Lopes J.G.P. \"Using Localmaxs Algorithm for Extraction of Contiguous and Non-contiguous Multiword Lexical Units.\" 9th Portuguese Conference in Artificial Intelligence, Lecture Notes, Springer-Verlag, Universidade de Evora, Evora, Portugal, 1999, pp. 113-132.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Retrieving Collocations from Text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. \"Retrieving Collocations from Text: Xtract.\" Computational Linguistics, 1993. Vol.19, No.1. pp. 143-177.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translation Collocations for Bilingual Lexicons: a Statistical Approach",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "pp. 1~38. Boxing Chen and Limin Du",
"volume": "22",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja F., McKeown K.R. and Hatzivassiloglou V. \"Translation Collocations for Bilingual Lexicons: a Statistical Approach.\" Computational Linguistics 1996, 22(1), pp. 1~38. Boxing Chen and Limin Du",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards Building a Corpus-based Dictionary for Non-word-boundary Language",
"authors": [
{
"first": "Tanapong",
"middle": [],
"last": "Potipiti",
"suffix": ""
},
{
"first": "Virach",
"middle": [],
"last": "Sornlertlamvanich",
"suffix": ""
},
{
"first": "Thatsanee",
"middle": [],
"last": "Charoenporn",
"suffix": ""
}
],
"year": 2000,
"venue": "Workshop on Terminology Resources and Computation, Workshop Proceedings of the Second International Conference on Language Resources and Evaluation (LREC2000)",
"volume": "",
"issue": "",
"pages": "82--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanapong Potipiti, Virach Sornlertlamvanich and Thatsanee Charoenporn. \"Towards Building a Corpus-based Dictionary for Non-word-boundary Language.\" Workshop on Terminology Resources and Computation, Workshop Proceedings of the Second International Conference on Language Resources and Evaluation (LREC2000), Athens, Greece. 2000, pp. 82-86.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using Parallel Corpora for Translation-Oriented Term Extraction",
"authors": [
{
"first": "Spela",
"middle": [],
"last": "Vintar",
"suffix": ""
}
],
"year": 2001,
"venue": "Babel Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vintar, Spela. \"Using Parallel Corpora for Translation-Oriented Term Extraction.\" Babel Journal, John Benjamins Publishing. 2001.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Large-Scale Automatic Extraction of an English-Chinese Translation Lexicon",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "285--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, D. and Xia, X. \"Large-Scale Automatic Extraction of an English-Chinese Translation Lexicon.\" Machine Translation (4). 1995, pp. 285-313.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus",
"authors": [
{
"first": "M",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 6th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "28--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamamoto, M. and Church, K.W. \"Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus.\" Proceedings of the 6th Workshop on Very Large Corpora, Montreal, Canada, 1998, pp.28-37.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Grammatical Knowledge-base of Contemporary Chinese",
"authors": [
{
"first": "S",
"middle": [
"W"
],
"last": "Yu",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, S.W. The Grammatical Knowledge-base of Contemporary Chinese, Beijing: Tsinghua University Press, (It's published in Chinese) 1998.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic Suggestion of Significant Terms for a Predefined Topic",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Dapkus",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Third Workshop on Very Large Corpora. Cambridge1995",
"volume": "",
"issue": "",
"pages": "131--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, J. and Dapkus, P. \"Automatic Suggestion of Significant Terms for a Predefined Topic.\" Proceedings of the Third Workshop on Very Large Corpora. Cambridge1995, pp.131-147.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Center for Speech Interaction Technology Research, Institute of Acoustics, Chinese Academy of Sciences Boxing Chen and Limin Du important aspect of the automatic alignment of bilingual corpus technology. Since the 1980's, the technique of automatic alignment of a bilingual corpus has undergone great improvement;"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "88.74%. The unlisted words in DECC1.0 (Daily English-Chinese Corpus) were mainly the Chinese translations of foreign personal names and place names. The main focus of our research here was the aggregation of single Chinese characters that are produced throughPreparatory Work on Automatic Extraction of BilingualMulti-Word Units from Parallel Corpora segmentation. The results of word segmentation are shown inFigure 2: Word Segmentation Results."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Chinese sentences is 6.87 words and expressions. Secondly, the corresponding sense units of English-Chinese sentence pairs in spoken language are not always aligned in terms of position, as shown inFigure 3."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example of Word Alignment."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Mutual Information ScoreFigure 5. T-Score."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "of AMI (Average Mutual Information), MID (Mutual Information Difference), AT (Average T-score) and TD (T-score Difference) are as follows:"
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "the set of all the (n-1)-grams contained in the N-gram word string C (Chunk), and let 1 + \u2126 n be the set of all the (n+1)-grams containing this N-gram word string C. Suppose the bigger the association score S(.), the better the result. The Local Bests algorithm can be described as follows: C) = 2 and S(C) > S(y)) or (length(C) > 2 and S(x) \u2264 S(C) and S(C) > S(y)) then word string C is a multiword unit.Here, S(.) is the internal association score of the Multi-Word Units, and length (C) is the number of words included in C."
},
"FIGREF7": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 6. 4 th level lexicon. Figure 7. 0 th level lexicon."
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Sentence pair in a corpus with \"Apollo.\"(7) Another fact that affects the precision is that the corpus we used contains 171 bilingual proverbs, and such sentence pairs can rarely be translated word for word, as demonstrated by the example shown inFigure 9."
},
"FIGREF9": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Bilingual proverb."
},
"TABREF0": {
"content": "<table><tr><td>i th level lexicons</td><td>precision %</td><td>Recall %</td><td>F-measure</td></tr><tr><td>0th E-C</td><td>41.394</td><td>98.63</td><td>0.583</td></tr><tr><td>1st E-C</td><td>23.535</td><td>84.22</td><td>0.368</td></tr><tr><td>2nd E-C</td><td>52.388</td><td>31.56</td><td>0.394</td></tr><tr><td>3rd E-C</td><td>78.323</td><td>5.18</td><td>0.097</td></tr><tr><td>4th E-C</td><td>94.900</td><td>1.36</td><td>0.027</td></tr><tr><td>0th C-E</td><td>38.266</td><td>96.94</td><td>0.549</td></tr><tr><td>1st C-E</td><td>18.943</td><td>82.58</td><td>0.308</td></tr><tr><td>2nd C-E</td><td>47.564</td><td>29.92</td><td>0.367</td></tr><tr><td>3rd C-E</td><td>75.092</td><td>7.54</td><td>0.137</td></tr><tr><td>4th C-E</td><td>88.293</td><td>2.83</td><td>0.055</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
}
}
}
}