{
"paper_id": "O03-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:01:56.271572Z"
},
"title": "Bilingual Collocation Extraction Based on Syntactic and Statistical Analyses",
"authors": [
{
"first": "Chien-Cheng",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": "jschang@cs.nthu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. The preferred syntactic patterns are obtained from idioms and collocations in a machine-readable dictionary. Phrases matching the patterns are extract from aligned sentences in a parallel corpus. Those phrases are subsequently matched up via cross-linguistic statistical association. Statistical association between the whole collocations as well as words in collocations is used jointly to link a collocation and its counterpart collocation in the other language. We experimented with an implementation of the proposed method on a very large Chinese-English parallel corpus with satisfactory results.",
"pdf_parse": {
"paper_id": "O03-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. The preferred syntactic patterns are obtained from idioms and collocations in a machine-readable dictionary. Phrases matching the patterns are extract from aligned sentences in a parallel corpus. Those phrases are subsequently matched up via cross-linguistic statistical association. Statistical association between the whole collocations as well as words in collocations is used jointly to link a collocation and its counterpart collocation in the other language. We experimented with an implementation of the proposed method on a very large Chinese-English parallel corpus with satisfactory results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Collocations like terminology tend to be lexicalized and have a somewhat more restricted meaning than the surface form suggested (Justeson and Katz 1995) .",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "(Justeson and Katz 1995)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Collocations are recurrent combinations of words that co-occur more often than chance. The words in a collocation may appear next to each other (rigid collocations) or otherwise (flexible/elastic collocations). On the other hand, collocations can be classified into lexical and grammatical collocations (Benson, Benson, Ilson, 1986) .",
"cite_spans": [
{
"start": 303,
"end": 332,
"text": "(Benson, Benson, Ilson, 1986)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Lexical collocations are formed between content words, while the grammatical collocation has to do with a content word and function words or a syntactic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Collocations in one language are usually difficult to translate directly into another language word by word, therefore present a challenge for machine translation systems and second language learners alike.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Automatic extraction of monolingual and bilingual collocations are important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval. Hank and Church (1990) pointed out the usefulness of mutual information for identifying monolingual collocations in lexicography. Justeson and Katz (1995) proposed to identify technical terminology based on preferred linguistic patterns and discourse property of repetition. Among many general methods presented by Manning and Schutze (1999) , best results can be achieved by filtering based on both linguistic and statistical constraints. Smadja (1993) presented a method called EXTRACT, based on means variance of the distance between two collocates capable of computing elastic collocations. Kupiec (1993) proposed to extract bilingual noun phrases using statistical analysis of co-occurrence of phrases. Smadja, McKeown, and Hatzivassiloglou (1996) extended the EXTRACT approach to handling of bilingual collocation based mainly on the statistical measures of Dice coefficient. Dunning (1993) pointed out the weakness of mutual information and showed that log likelihood ratios are more effective in identifying monolingual collocations especially when the occurrence count is very low. Both Smadja and Kupiec used the statistical association between the whole of collocations in two languages without looking into the constituent words. For a collocation and its paraphrasing translation counterpart, that is reasonable. For instance, with the bilingual collocation (\"\u64e0\u7834\u982d\", \"stop at nothing\") in Example 1, it is not going to help looking into the statistical association between \"stopping\" and \"\u64e0\" [ji] (sqeeze) (or \"\u7834\" [bo, broken] and \"\u982d\" [tou, head] for that matter). However, with the bilingual collocation (\"\u6e1b\u85aa\", \"pay cut\") in Example 2, considering the statistical association between \"pay\" and \"\u85aa\" [xin] (wage) as well as \"cut\" and \"\u6e1b\" [jian, reduce] certainly makes sense. Moreover, we have more data to make statistical inference between words than phrases. Therefore, measuring the statistical association of collocations based on constituent words will help to cope with the data sparseness problem. We will be able to extract bilingual collocations with high reliability even when they appear together in aligned sentences only once or twice.",
"cite_spans": [
{
"start": 252,
"end": 265,
"text": "Church (1990)",
"ref_id": "BIBREF3"
},
{
"start": 373,
"end": 397,
"text": "Justeson and Katz (1995)",
"ref_id": "BIBREF9"
},
{
"start": 558,
"end": 584,
"text": "Manning and Schutze (1999)",
"ref_id": "BIBREF12"
},
{
"start": 683,
"end": 696,
"text": "Smadja (1993)",
"ref_id": "BIBREF14"
},
{
"start": 838,
"end": 851,
"text": "Kupiec (1993)",
"ref_id": "BIBREF10"
},
{
"start": 951,
"end": 995,
"text": "Smadja, McKeown, and Hatzivassiloglou (1996)",
"ref_id": "BIBREF15"
},
{
"start": 1125,
"end": 1139,
"text": "Dunning (1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "They are stopping at nothing to get their kids into \"star schools\" ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 1",
"sec_num": null
},
{
"text": "Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 2",
"sec_num": null
},
{
"text": "Since the collocations could be rigid or flexible in both languages, we can generally classify the match type of bilingual collocation into three types. In Example 1, (\"\u64e0\u7834\u982d\",\"stop at nothing\") is a pair of rigid collocations, and (\"\u628a\u2026\u9001 \u9032\", \"get \u2026 into\") is a pair of elastic collocations. In Example 3 ,(\"\u8d70\u2026\u7684\u8def\u7dda', \"take the path of\" ) gives the example for a pair of elastic and rigid collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u4e0d\u4f46\u4e0d\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\u5e74\u7d42\u734e\u91d1\u3001\u8003\u7e3e\u734e\u91d1\u9084\u90fd\u7167\u767c\u4e0d\u8aa4 Source: 1991/01 Filling the Iron Rice Bowl",
"sec_num": null
},
{
"text": "Lin Ku-fang, a worker in ethnomusicology, worries too, but his way is not to take the path of revolutionizing Chinese music or making it more \"symphonic\"; rather, he goes directly into the tradition, looking into it for \"good music\" that has lasted undiminished for a hundred generations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "Source: 1997/05 A Contemporary Connoisseur of the Classical Age--Lin Ku-fang's",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u6c11\u65cf\u97f3\u6a02\u5de5\u4f5c\u8005\u6797\u8c37\u82b3\u4e5f\u975e\u4e0d\u611f\u5230\u6182\u5fc3\uff0c\u4f46\u4ed6\u7684\u65b9\u6cd5\u662f\uff1a\u4e0d\u8d70\u570b\u6a02\u6539\u9769\u6216 \u300c\u4ea4\u97ff\u5316\u300d \u7684\u8def\uff0c\u800c\u662f\u76f4\u63a5\u9762\u5c0d\u50b3\u7d71\u3001\u5f9e\u4e2d\u5c0b\u627e\u6b77\u767e\u4ee3\u4e0d\u8870\u7684 \u300c\u597d\u807d\u97f3\u6a02\u300d \u3002",
"sec_num": null
},
{
"text": "In this paper, we describe an algorithm that employs syntactic and statistical analyses to extract rigid lexical bilingual collocations from a parallel corpus. Here, we focus on the bilingual collocations, which have some lexical correlation between them and are rigid in both languages. To cope with the data sparseness problem, we use the statistical association between two collocations as well as that between their constituent words. In Section 2, we describe how we obtain the preferred syntactic patterns from collocation and idioms in a machine-readable dictionary. Examples will be given to show how collocations matching the patterns are extracted and aligned for a given aligned sentence pairs in a parallel corpus. We experimented with an implementation of the proposed method for the Chinese-English parallel corpus of Sinorama Magazine with satisfactory results. We describe the experiments and evaluation in Section 3. The limitations and related issues will be taken up in Section 4. We conclude and give future directions in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Canon of Chinese Classical Music",
"sec_num": null
},
{
"text": "In this chapter, we will describe how we obtain the bilingual collocation by using the preferred syntactic patterns and associative information. Consider a pair of aligned sentences in a parallel corpus such as Example 4 given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Bilingual Collocations",
"sec_num": "2."
},
{
"text": "The civil service rice bowl, about which people always said \"you can't get filled up, but you won't starve to death either,\" is getting a new look with the economic downturn. Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual, drawing people to compete for their own \"iron rice bowl.\" In Section 2.1, we will first show how that process is carried out for Example 4 under the proposed approach. The formal description will be given in Section 2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
{
"text": "To extract bilingual collocations, we first run part of speech tagger on both sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Example of Extracting Bilingual Collocations",
"sec_num": "2.1"
},
{
"text": "For instance, for Example 4, we get the results of tagging in Example 4A and 4B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Example of Extracting Bilingual Collocations",
"sec_num": "2.1"
},
{
"text": "In the tagged English sentence, we identify phrases that follow a syntactic pattern from a set of training data of collocations. For instance, \"jj nn\" is one of the preferred syntactic structures. So, \"civil service,\" \"economic downturn,\" and \"own iron,\"\u2026etc are matched. See Table 1 for more details. For Example 4, the phrases in Example 4C and 4D are considered as potential candidates for collocations because they match at least two distinct collocations listed in LDOCE:",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "An Example of Extracting Bilingual Collocations",
"sec_num": "2.1"
},
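The pattern-matching step described above lends itself to a small illustration. The following is a minimal sketch, not the authors' implementation; the pattern set, token format, and function names are assumptions made only for the example:

```python
# A minimal sketch of matching preferred POS patterns such as "jj nn"
# against a tagged sentence to propose collocation candidates.
PREFERRED_PATTERNS = {"jj nn", "nn nn", "jj nn nn", "nn nn nn", "vb nn"}  # illustrative subset

def extract_candidates(tagged_tokens, patterns=PREFERRED_PATTERNS, max_len=3):
    """tagged_tokens: list of (word, pos) pairs for one sentence."""
    candidates = []
    for i in range(len(tagged_tokens)):
        for n in range(2, max_len + 1):
            gram = tagged_tokens[i:i + n]
            if len(gram) < n:
                break
            pos_seq = " ".join(pos for _, pos in gram)
            if pos_seq in patterns:
                candidates.append(" ".join(word for word, _ in gram))
    return candidates

tokens = [("civil", "jj"), ("service", "nn"), ("rice", "nn"), ("bowl", "nn")]
print(extract_candidates(tokens))
# ['civil service', 'civil service rice', 'service rice', 'service rice bowl', 'rice bowl']
```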
{
"text": "Example 4A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Example of Extracting Bilingual Collocations",
"sec_num": "2.1"
},
{
"text": "The/at civil/jj service/nn rice/nn bowl/nn ,/, about/in which/wdt people/nns always/rb said/vbd \"/`` you/ppss can/md 't/* get/vb filled/vbn up/rp ,/, but/cc you/ppss will/md 't/* starve/vb to/in death/nn either/cc ,/rb \"/'' is/bez getting/vbg a/at new/jj look/nn with/in the/at economic/jj downturn/nn ./. Not/nn only/rb have/hv 't/* there/rb been/ben layoffs/nns or/cc pay/vb cuts/nns ,/, the/at year/nn -/in end/nn bonus/nn and/cc the/at performance/nn review/nn bonuses/nn will/md go/vb out/rp as/ql usual/jj ,/, drawing/vbg people/nns to/to compete/vb for/in their/pp$ own/jj \"/`` iron/nn rice/nn bowl/nn ./. \"/''",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Example of Extracting Bilingual Collocations",
"sec_num": "2.1"
},
{
"text": "\u4ee5\u5f80/Nd \u4e00\u5411/Dd \u88ab/P02 \u8a8d\u70ba/VE2 \u300c/PU \u5403/VC \u4e0d/Dc \u98fd/VH \u3001/PU \u9913\u4e0d\u6b7b/VR \u300d/PU \u7684/D5 \u516c\u5bb6/Nc \u98ef/Na \uff0c/PU \u503c\u6b64/Ne \u7d93\u6fdf/Na \u666f\u6c23/Na \u4f4e\u8ff7/VH \u4e4b\u969b/NG \uff0c/PU \u4e0d\u4f46/Cb \u4e0d\u865e/VK \u88c1\u54e1/VC \u3001/PU \u6e1b\u85aa/VB \uff0c/PU \u5e74\u7d42\u734e\u91d1/Na \u3001/PU \u8003\u7e3e/Na \u734e\u91d1/Na \u9084\u90fd/Db \u7167/VC \u767c/VD \u4e0d\u8aa4/VH \uff0c/PU \u56e0\u800c/Cb \u4fc3\u4f7f/VL \u4e0d\u5c11/Ne \u4eba/Na \u56de\u982d/VA \u7af6\u9010/VC \u9019/Ne \u96bb/Nf \u300c/PU \u9435 \u98ef\u7897/Na \u300d/PU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4B",
"sec_num": null
},
{
"text": "\"civil service,\" \"rice bowl,\" \"iron rice bow,\" \"fill up,\" \"economic downturn,\" \"end bonus,\" \"year -end bonus,\" \"go ut,\" \"performance review,\" \"performance review bonus,\" \"pay cut,\" \"starve to death,\" \"civil service rice,\" \"service rice,\" \"service rice bowl,\" \"people always,\" \"get fill,\" \"people to compete,\" \"layoff or pay,\" \"new look,\" \"draw people\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4C",
"sec_num": null
},
{
"text": "\"\u5403\u4e0d\u98fd,\" \"\u9913\u4e0d\u6b7b,\" \"\u516c\u5bb6\u98ef,\" \"\u7d93\u6fdf\u666f\u6c23,\" \"\u666f\u6c23\u4f4e\u8ff7,\" \"\u7d93 \u6fdf\u666f\u6c23\u4f4e\u8ff7,\" \"\u88c1\u54e1,\" \"\u6e1b\u85aa,\" \"\u5e74\u7d42\u734e\u91d1,\" \"\u8003\u7e3e\u734e\u91d1,\" \"\u7af6 \u9010,\" \"\u9435\u98ef\u7897.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
{
"text": "Although \"new look\" and \"draw people\" are legitimate phrases, they more like \"free combinations\" than collocations. That reflects from their low log likelihood ratio values. For that, we proceed to see how tightly the two words in overlapping bigrams within a collocation associated with each other; we calculate the minimum of the log likelihood ratio values for all bigrams. With that, we filter out the candidates that its POS pattern appear only once or has minimal log likelihood ratio of less than 7.88.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
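The bigram-based filter just described can be sketched as follows; the llr function is assumed to be supplied elsewhere (e.g., computed from bigram and unigram counts), and the names are illustrative only:

```python
# Minimal sketch of the filter: keep a candidate only if its POS pattern
# occurs more than once and the minimum log-likelihood ratio over its
# overlapping word bigrams reaches the 7.88 threshold.
def min_bigram_llr(words, llr):
    """llr: a function (w1, w2) -> log-likelihood ratio of that bigram."""
    return min(llr(w1, w2) for w1, w2 in zip(words, words[1:]))

def keep_candidate(words, pattern_count, llr, threshold=7.88):
    if pattern_count <= 1:
        return False
    return min_bigram_llr(words, llr) >= threshold
```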
{
"text": "See Tables 1 and 2 for more details.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 18,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
{
"text": "In the tagged Chinese sentence, we basically proceed the same way to identify the candidates of collocations and based on the preferred linguistic patterns of the Chinese translation of collocations in an English-Chinese MRD. However, since there is no space delimiter between words, it is at time difficult to say whether the translation is a multi-word collocation or it is a single word and should not be considered as a collocation. For that reason, we take multiword and singleton phrases (with two or more characters) into consideration. For instance, in the tagged Example 2C, we will extract and consider the following candidates as the counterparts of English collocations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
{
"text": "Notes that at this point, we are not pinned down on the collocations and allow overlapping and conflicting candidates such as \"\u7d93\u6fdf\u666f\u6c23,\" \"\u666f\u6c23\u4f4e\u8ff7,\" \"\u7d93\u6fdf\u666f \u6c23\u4f4e\u8ff7.\" See Tables 3 and 4 for more details. To align collocations in both languages, we follow the idea of Competitive",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 171,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
{
"text": "Linking Algorithm proposed by Melamed (1996) 1. The first, third, and fourth pairs, (\"iron rice bowl,\" \"\u9435\u98ef\u7897\"), (\"year-end bonus,\" \"\u5e74\u7d42\u734e\u91d1\"), and (\"economic downturn,\" \"\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\"), are selected first. And that would exclude conflicting pairs from being considered including the second, fifth pairs and so on. 2. The second, fifth entries (\"rice bowl,\" \" \u9435 \u98ef \u7897 \") and (\"economic downturn,\" \"\u503c\u6b64\u7d93\u6fdf\u666f\u6c23\") and so on, conflict with the second and third entries that are already selected. Therefore, CLASS skips over those. 3. The entries (\"performance review bonus,\" \"\u8003\u7e3e\u734e\u91d1\"), (\"civil service rice,\" \"\u516c\u5bb6\u98ef\"), (\"pay cuts,\" \"\u6e1b\u85aa\"), and (\"starve to death,\" \"\u9913\u4e0d\u6b7b\") are selected next. 4 . CLASS proceeds through the rest of the list and the other list without finding any entries that do not conflict with the seven entries selected previously.",
"cite_spans": [
{
"start": 30,
"end": 44,
"text": "Melamed (1996)",
"ref_id": null
},
{
"start": 664,
"end": 665,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4D",
"sec_num": null
},
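A minimal sketch of this greedy, one-to-one selection follows; representing each collocation by the set of word positions it covers is an assumption made here so that conflicts can be detected by overlap:

```python
# Minimal sketch: sort candidate pairs by (LLR, translation probability) and
# accept a pair only if neither side overlaps an already selected collocation.
def greedy_link(pairs):
    """pairs: list of (llr, trans_prob, eng_positions, chi_positions),
    where the position arguments are sets of token indices."""
    selected, used_e, used_c = [], set(), set()
    for llr, prob, e_pos, c_pos in sorted(pairs, key=lambda p: (p[0], p[1]), reverse=True):
        if used_e & e_pos or used_c & c_pos:
            continue  # conflicts with a previously selected collocation
        selected.append((e_pos, c_pos))
        used_e |= e_pos
        used_c |= c_pos
    return selected
```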
{
"text": "The program terminates and output a list of seven collocations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5.",
"sec_num": null
},
{
"text": "In this section, we describe formally how CLASS works. We assume availability of a parallel corpus and a list of collocations in a bilingual MRD. The sentences and words have been aligned in the parallel corpus. We will describe how CLASS extracts bilingual collocations in the parallel corpus. CLASS carries out a number of preprocessing steps to calculate the following information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Method",
"sec_num": "2.2"
},
{
"text": "1. Lists of preferred POS patterns of collocation in both languages. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Method",
"sec_num": "2.2"
},
{
"text": "Extract bilingual collocations in aligned sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collocation Linking Alignment based on Syntax and Statistics",
"sec_num": null
},
{
"text": "( 1. C is segmented and tagged with part of speech information T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "2. E is tagged with part of speech sequences S. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input:",
"sec_num": null
},
{
"text": "2 2 2 1 1 1 2 2 2 1 1 1 ) 1 ( ) 1 ( ) 1 ( ) 1 ( log 2 ) ; ( 2 2 1 1 2 k n k k n k k n k k n k p p p p p p p p y x LLR \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-likelihood ratio: LLR(x;y)",
"sec_num": null
},
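As a sketch, the ratio above can be computed directly from the counts k1, n1, k2, n2 defined in the algorithm description; this is an illustrative implementation of Dunning's log-likelihood ratio, not the authors' code:

```python
import math

def log_binom(k, n, p):
    # log of p^k * (1-p)^(n-k), guarding the degenerate cases p = 0 and p = 1
    if p in (0.0, 1.0):
        return 0.0 if k == n * p else float("-inf")
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def llr(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return -2.0 * (log_binom(k1, n1, p) + log_binom(k2, n2, p)
                   - log_binom(k1, n1, p1) - log_binom(k2, n2, p2))
```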
{
"text": "We have experimented with an implementation of CLASS based on Longman dictionary of Contemporary English, English-Chinese Edition and the parallel corpus of Sinorama magazine. The articles from Sinorama cover a wide range of topics, reflecting the personalities, places, and events in Taiwan for the past three-decade.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "We experiment on articles mainly dated from 1995 to 2002. Sentence and word alignment were carried out first for Sinorama parallel Corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "Sentence alignment is a very important aspect of the CLASS. It is the basis of a good collocation alignment. We using a new alignment method based on punctuation statistics (Yeh & Chang, 2002) . The punctuation-based approach outperforms the length-based approach with precision rates approaching 98%. With the sentence alignment approach, we obtain approximately 50,000 reliably aligned sentences containing 1,756,000 Chinese words (about 2,534,000 Chinese characters) and 2,420,000 English words in total.",
"cite_spans": [
{
"start": 173,
"end": 192,
"text": "(Yeh & Chang, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "The content words were aligned based on Competitive Linking Algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "Alignment of content words resulted in a probabilistic dictionary with 229,000 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "We evaluated 100 random sentence samples with 926 linking types, and the precision is 93.3%. Most of the errors occurred with English words having no counterpart in the corresponding Chinese sentence. The translators do not always translate the word for word. For instance, with the word \"water\" in Example 4, it seems that these is no corresponding pattern in the Chinese sentence. Another major cause of errors is collocations that are not translated compositionally. For instance, the word \"State\" in the Example 6 is a part of the collocation \"United States\", and \"\u7f8e\u570b\" is more highly associated with \"United\" than \"States\", therefore due to one-to-one constraint \"States\" will not be aligned with \"\u7f8e\u570b\". Most often, it will be aligned incorrectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "About 49% error links belongs to this kind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "3."
},
{
"text": "The boat is indeed a vessel from the mainland that illegally entered Taiwan waters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 5",
"sec_num": null
},
{
"text": "The words were a \"mark\" added by the Taiwan We obtained word-to-word translation probability from the result of word alignment. The translation probability P(c|e) is given below: Let's take \"pay\" as an example. Table 6 shows the various alignment translations for \"pay\" and the translation probability. We selected 100 sentences to evaluate the performance. We focused on rigid lexical collocations. The average English sentence had 45.3 words, while the average Chinese sentence had 21.4 words. The two human judges both master student majoring in Foreign Languages identified the bilingual collocations in these sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 43,
"text": "Taiwan",
"ref_id": null
},
{
"start": 211,
"end": 218,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Example 5",
"sec_num": null
},
{
"text": "P(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 5",
"sec_num": null
},
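A minimal sketch of estimating P(c|e) from the word-alignment links; the link representation and the toy counts below are assumptions made for illustration and are not the figures reported in Table 6:

```python
from collections import Counter

def translation_prob(links):
    """links: iterable of (english_word, chinese_word) alignment links."""
    links = list(links)
    pair_count = Counter(links)
    eng_count = Counter(e for e, _ in links)
    return {(e, c): n / eng_count[e] for (e, c), n in pair_count.items()}

# Toy data only: "pay" aligned 34 times to \u4ee3\u50f9 and 31 times to \u9322.
links = [("pay", "\u4ee3\u50f9")] * 34 + [("pay", "\u9322")] * 31
probs = translation_prob(links)
print(round(probs[("pay", "\u4ee3\u50f9")], 3))  # 0.523 under this toy data
```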
{
"text": "We then compared the bilingual collocations produced by CLASS against the answer keys. The evaluation indicates an average recall rate = 60.9 % and precision = 85.2 % (See Table 7 ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Example 5",
"sec_num": null
},
{
"text": "This paper describes a new approach to automatic acquisition of bilingual collocations from a parallel corpus. Our method is an extension of Melamed's Competitive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4."
},
{
"text": "Linking Algorithm for word alignment, combining both linguistic and statistical information for recognition of monolingual and bilingual collocations in a much simpler way than Smadja's work. We differ from previous work in the following ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4."
},
{
"text": "1. We use a data-driven approach to extract monolingual collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4."
},
{
"text": "2. Unlike Smadja and Kupiec, we do not commit to two sets of monolingual collocations. Instead, we consider many overlapping and conflicting candidate and rely on the cross linguistic statistics to revolve the issue. That limitation can be partially alleviated by matching nonconsecutive word sequence against existing lists of collocations for the two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4."
},
{
"text": "Another limitation has to do with bilingual collocations, which are not literal translations. For instance, \"difficult and intractable\" is not yet handled in the program, because it is not a word for word translation of \"\u6840\u50b2\u4e0d\u99b4\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4."
},
{
"text": "\u610f\u601d\u662f\u8aaa\u4e00\u500b\u518d\u600e\u9ebc\u6840\u50b2\u4e0d\u99b4\u7684\u4eba\uff0c\u90fd\u6703\u6709\u4eba\u6709\u8fa6\u6cd5\u5236\u670d\u4ed6\u3002 This saying means that no matter how difficult and intractable a person may seem, there will always be someone else who can cut him down to size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 8",
"sec_num": null
},
{
"text": "In the experiment process, we found that the limitation may be partially solved by spliting the candidate list of bilingual collocations into two lists: one (NZ) with non-zero phrase translation probabilistic values and the other (ZE) with zero value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider",
"sec_num": null
},
{
"text": "The two lists are then sorted by the LLR values. After extracting bilingual collocations from NZ list, we could continue to go downing the ZE list and select bilingual collocations if not conflicting with previously selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider",
"sec_num": null
},
{
"text": "In the proposed method, we did no t take advantage of the correspondence of POS patterns from one language to the other. Some linking mistakes seem to be avoidable with the POS information. For example, the aligned collocation for \"issue/vb visas/nns\" is \"\u7c3d\u8b49/Na\", instead of \"\u767c/VD \u7c3d\u8b49/Na.\" However, the POS pattern \"vb nn\" appears to be more compatible with \"VD Na\" than \"Na.\" Therefore, handling these name entities in a pre-process should be helpful to avoid segment mistakes, and alignment difficulties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider",
"sec_num": null
},
{
"text": "In this paper, we describe an algorithm that employs syntactic and statistical analyses to extract rigid bilingual collocations from a parallel corpus. Phrases matching the preferred patterns are extract from aligned sentences in a parallel corpus. Those phrases are subsequently matched up via cross-linguistic statistical association.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "Statistical association between the whole collocations as well as words in collocations is used jointly to link a collocation and its counterpart. We experimented with an implementation of the proposed method on a very large Chinese-English parallel corpus with satisfactory results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "A number of interesting future directions suggest themselves. First, it would be interesting to see how effectively we can extend the method to longer and elastic collocations and to grammatical collocations. Second, bilingual collocations that are proper names and transliterations may need additional considerations. Third, it will be interesting to see if the performance can re improved cross language correspondence between POS patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The BBI Combinatory Dictionary of English: A Guide to Word Combinations",
"authors": [
{
"first": "Morton",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Evelyn",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ilson",
"suffix": ""
}
],
"year": 1986,
"venue": "John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benson, Morton., Evelyn Benson, and Robert Ilson. The BBI Combinatory Dictionary of English: A Guide to Word Combinations. John Benjamins, Amsterdam, Netherlands, 1986.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Looking for needles in a haystack",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
}
],
"year": 1988,
"venue": "Actes RIAO, Conference on User-Oriented Context Based Text and Image Handling",
"volume": "",
"issue": "",
"pages": "609--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y. (1988) : \"Looking for needles in a haystack\", Actes RIAO, Conference on User-Oriented Context Based Text and Image Handling, Cambridge, p. 609-623.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Neuwitz",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of the Association for Literary and Linguistic Computing",
"volume": "4",
"issue": "1",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y.; Klein, and Neuwitz, E.. Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus. Journal of the Association for Literary and Linguistic Computing, 4(1):34-8, (1983)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. W. and Hanks, P. Word association norms, mutual information, and lexicography. Computational Linguistics, 1990, 16(1), pp. 22-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Termight: Identifying and translation technical terminology",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of the 4th Conference on Applied Natural Language Processing (ANLP)",
"volume": "",
"issue": "",
"pages": "34--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I. and K. Church. Termight: Identifying and translation technical terminology. In Proc. of the 4th Conference on Applied Natural Language Processing (ANLP), pages 34-40, Stuttgart, Germany, 1994.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T (1993) Accurate methods for the statistics of surprise and coincidence, Computational Linguistics 19:1, 61-75.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning bilingual collocations by word-level sorting",
"authors": [
{
"first": "M",
"middle": [],
"last": "Haruno",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16th International Conference on Computational Linguistics (COLING '96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haruno, M., S. Ikehara, and T. Yamazaki. Learning bilingual collocations by word-level sorting. In Proc. of the 16th International Conference on Computational Linguistics (COLING '96), Copenhagen, Denmark, 1996.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Character-based Collocation for Mandarin Chinese",
"authors": [
{
"first": "C.-R",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y.-Y.",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": null,
"venue": "ACL 2000",
"volume": "",
"issue": "",
"pages": "540--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, C.-R., K.-J. Chen, Y.-Y. Yang, Character-based Collocation for Mandarin Chinese, In ACL 2000, 540-543.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Acquiring collocations for lexical choice between near-synonyms",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zaiu",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": null,
"venue": "SIGLEX Workshop on Unsupervised Lexical Acquisition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inkpen, Diana Zaiu and Hirst, Graeme. ``Acquiring collocations for lexical choice between near-synonyms.'' SIGLEX Workshop on Unsupervised Lexical Acquisition, 40th meeting of the Association for Computational Lin",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Technical Terminology: some linguistic properties and an algorithm for identification in text",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Justeson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Slava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "1",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justeson, J.S. and Slava M. Katz (1995). Technical Terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1):9-27.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An algorithm for finding noun phrase correspondences in bilingual corpora",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, Julian. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, 1993.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using collocation statistics in information extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. Using collocation statistics in information extraction. In Proc. of the Seventh Message Understanding Conference (MUC-7), 1998.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "H",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning and H. Schutze. Foundations of Statistical Natural Language Processing (SNLP), C., MIT Press, 1999.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Word-to-Word Model of Translational Equivalence",
"authors": [
{
"first": "I",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dan",
"suffix": ""
}
],
"year": 1997,
"venue": "Procs. of the ACL97",
"volume": "",
"issue": "",
"pages": "490--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan. \"A Word-to-Word Model of Translational Equivalence\". In Procs. of the ACL97. pp 490-497. Madrid Spain, 1997.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Retrieving collocations from text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Translating collocations for bilingual lexicons: A statistical approach",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F., K.R. McKeown, and V. Hatzivassiloglou. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38, 1996.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using Punctuations for Bilingual Sentence Alignment-Preparing Parallel Corpus",
"authors": [
{
"first": "Kevin",
"middle": [
"C"
],
"last": "Yeh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Chuang",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin C. Yeh, Thomas C. Chuang, Jason S. Chang (2003), Using Punctuations for Bilingual Sentence Alignment-Preparing Parallel Corpus",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "\u4ed6\u5011\u64e0\u7834\u982d\u4e5f\u8981\u628a\u5b69\u5b50\u9001\u9032\u660e\u661f\u5c0f\u5b78 Source: 1995/02 No Longer Just an Academic Question: Educational Alternatives Come to Taiwan",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "for word alignment. Basically, the proposed algorithm CLASS, Collocation Linking Algorithm based on Syntax andStatistics, is a greedy method that selects collocation pairs. The pair with higher association value takes precedence over those with a lower value. CLASS also imposes a one-to-one constraint on the collocation pairs selected. Therefore, the algorithm at each step considers only pairs with words not selected before. However, CLASS differs with CLA in that it considers the association between the two candidate collocations in two aspects:Logarithmic Likelihood Ratio between the two collocations in question as a whole. Translation probability of collocation based on constituent words For Example 4, the CLASS Algorithm first calculates the counts of collocation candidates in the English and Chinese part of the corpus. The collocations are matched up randomly across from English to Chinese. Subsequently, the co-occurrence counts of these candidates across from English to Chinese are also tallied. From the monolingual collocation candidate counts and cross language concurrence counts, we produce the LLR values and the collocation translation probability derived from word alignment analysis.. Those collocation pairs with zero translation probability are ignored. The lists are sorted in descending order of LLR values, and the pairs with low LLR value are discarded. Again, for Example 4, the greedy selection process of collocation starts with the first entry in the sorted list and proceeds as follows:",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Collocation candidates matching the preferred POS patterns. 3. N-gram statistics for both languages, N = 1, 2. Log likelihood Ratio statistics for two consecutive words in both languages. Log likelihood Ratio statistics for a pair of candidates of bilingual collocation across from one language to the other. Content word alignment based on Competitive Linking Algorithm (Melamed 1997).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "illustrates how the method works for each aligned sentence pair (C, E) in the corpus. Initially, part of speech taggers process C and E. After that, collocation candidates are extracted based on preferred POS patterns and statistical association between consecutive words in a collocation. The collocation candidates are subsequently matched up across from one language to the other. Those pairs are sorting according to log likelihood ratio and collocation translation probability. A greedy selection process goes through the sorted list and selects bilingual collocations subject to one to one constraint. The detailed algorithm is given below: The major components in the proposed CLASS algorithm Preprocessing: Extracting preferred POS patterns P and Q in both languages Input: A list of bilingual collocations from a machine-readable dictionary Output: Perform part of speech tagging for both languages 2. Calculate the number of instances for all POS patterns in both languages 3. Eliminate the POS patterns with instance count 1.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "1) A pair of aligned sentences (C, E), C = (C 1 C 2 \u2026 C n ) and E = (E 1 E 2 \u2026 E m ) (2) Preferred POS patterns P and Q in both languages Output: Aligned bilingual collocations in (C, E)",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "# of pairs that contain x and y simultaneously. k 2 : # of pairs that contain x but do not contain y. n 1 : # of pairs that contain y n 2 : # of pairs that does not contain yp 1 = k 1 /n 1, p 2 = k 2 /n 2 , p = (k 1 +k 2 )/(n 1 +n 2 )3. Match T against P and S against Q to extract collocation candidates X 1 , X 2 ,....X k in English and Y 1 , Y 2 , ...,Y e in Chinese. Consider bilingual each collocation candidates (X i , Y j ) in turn and calculate the minimal log likelihood ratio LLR between X i and Y j Eliminate candidates with LLR smaller than a threshold (7.88).6. Match up all possible linking fromEnglish collocation candidates toChinese ones: (D 1 , F 1 ), (D 1 , F 2 ), \u2026 (D i , F j ), \u2026 ( D m , F n ).7. Calculate LLR for (D i , F j ), and discard pairs with LLR value lower than 7of words in the English collocation F j 8. The candidate list of bilingual collocations is considered only the one with non-zero collocation translation probability P(D i , F j ) values. The list is then sorted by the LLR values and collocation translation probability. 9. Go down the list and select a bilingual collocation if it is not conflicting with previous selection. 10. Output the bilingual collocation selected in Steps 10.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF6": {
"text": ",c) : number of alignment linking between a Chinese word c and an English word e count(e): number of instances of e in alignment likings.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF7": {
"text": "We combine both information related to the whole collocation as well as those of constituent words for more reliable probabilistic estimation of aligned collocations.The approach is limited by its reliance on the training data of mostly rigid collocation patterns and is not applicable to elastic collocations such as \"jump on \u2026 bandwagon.\" For instance, the program cannot handle the elastic collocation in following example: the good fortune to jump on this high-profit bandwagon and has been able to snatch a substantial lead over countries like Malaysia and mainland China, which have just started in this industry. (Source: Sinorama, 1996, Dec Issue Page 22, Stormy Waters for Taiwan's ICs)",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF8": {
"text": "of China broke relations with Australia in 1972, after the country recognized the Chinese Communists, and because of the lack of formal diplomatic relations, Australia felt it could not issue visas on Taiwan. Instead, they were handled through its consulate in Hong Kong and then sent back to Taiwan, the entire process requiring five days to a week to complete. Source: 1990/04 Visas for Australia to Be Processed in Just 24 Hours A number of mistakes are caused with the erroneous word segments process of the Chinese tagger. For instance, \"\u5927\u5b78\u53ca\u7814\u7a76\u751f\u51fa\u570b\u671f\u9593\" should be segmented as \" \u5927\u5b78 / \u53ca / \u7814\u7a76\u751f / \u51fa\u570b / \u671f\u9593\" but instead segment was \"\u5927\u5b78 / \u53ca / \u7814 \u7a76 / \u751f\u51fa / \u570b / \u671f\u9593 / \u7684 / \u5b78\u696d.\" Another major source of segmentation mistakes has to do with proper names and their transliterations. These name entities that are not included in the database are usually segmented into single Chinese character. For instance, \"...\u4e00\u66f8\u4f5c\u8005\u5289\u5b78\u929a\u6307\u51fa...\" is segmented as \" ... / \u4e00 / \u66f8 / \u4f5c\u8005 / \u5289 / \u5b78 / \u929a / \u6307\u51fa / ...,\" while \"...\u5728\u5308\u7259\u5229\u5730\u5340\u5efa\u570b\u7684\u99ac\u672d\u723e\u4eba...\" is segmented as \"...\u5728 / \u5308\u7259\u5229 / \u5730\u5340 / \u5efa\u570b / \u7684 / \u99ac / \u672d / \u723e / \u4eba / ....\"",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"html": null,
"num": null,
"text": "The initial candidates extracted based on preferred patterns trained on collocations",
"type_str": "table",
"content": "<table><tr><td>listed in LDOCE.</td><td/><td/><td/></tr><tr><td>E-collocation Candidate</td><td colspan=\"2\">Part of Speech Pattern Count</td><td>Min LLR</td></tr><tr><td>civil service</td><td>jj nn</td><td>1562</td><td>496.156856</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"text": "The candidates of English collocation based on both preferred linguistic patterns",
"type_str": "table",
"content": "<table><tr><td>and log likelihood ratio</td><td/><td/><td/></tr><tr><td>E-collocation Candidate</td><td colspan=\"2\">Part of Speech Pattern Count</td><td>Min LLR</td></tr><tr><td>civil service</td><td>jj nn</td><td>1562</td><td>496.156856</td></tr><tr><td>rice bowl</td><td>nn nn</td><td>1860</td><td>99.2231161</td></tr><tr><td>iron rice bowl</td><td>nn nn nn</td><td>8</td><td>66.3654678</td></tr><tr><td>filled up</td><td>vbn rp</td><td>84</td><td>55.2837871</td></tr><tr><td>economic downturn</td><td>jj nn</td><td>1562</td><td>51.8600979</td></tr><tr><td>*end bonus</td><td>nn nn</td><td>1860</td><td>15.9977283</td></tr><tr><td>year -end bonus</td><td>nn -nn nn</td><td>12</td><td>15.9977283</td></tr><tr><td>go out</td><td>vb rp</td><td>1790</td><td>14.6464925</td></tr><tr><td>performance review</td><td>nn nn</td><td>1860</td><td>13.5716459</td></tr><tr><td>performance review bonus</td><td>nn nn nn</td><td>8</td><td>13.5716459</td></tr><tr><td>pay cut</td><td>vb nn</td><td>313</td><td>8.53341082</td></tr><tr><td>starve to death</td><td>vb to nn</td><td>26</td><td>7.93262494</td></tr><tr><td>civil service rice</td><td>jj nn nn</td><td>19</td><td>7.88517791</td></tr><tr><td>*service rice</td><td>nn nn</td><td>1860</td><td>7.88517791</td></tr><tr><td>*service rice bowl</td><td>nn nn nn</td><td>8</td><td>7.88517791</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "The initial candidates extracted by the Chinese collocation recognizer.",
"type_str": "table",
"content": "<table><tr><td>C-collocation Candidate</td><td colspan=\"2\">POS Patter Count</td><td>Min LLR</td></tr><tr><td>\u4e0d\u5c11 \u4eba</td><td>Ed Na</td><td>2</td><td>550.904793</td></tr><tr><td>*\u88ab \u8a8d\u70ba</td><td>PP VE</td><td>6</td><td>246.823964</td></tr><tr><td>\u666f\u6c23 \u4f4e\u8ff7</td><td>Na VH</td><td>97</td><td>79.8159904</td></tr><tr><td>\u7d93\u6fdf \u666f\u6c23 \u4f4e\u8ff7</td><td>Na Na VH</td><td>3</td><td>47.2912274</td></tr><tr><td>\u7d93\u6fdf \u666f\u6c23</td><td>Na Na</td><td>429</td><td>47.2912274</td></tr><tr><td>\u516c\u5bb6 \u98ef</td><td>Nc Na</td><td>63</td><td>42.6614685</td></tr><tr><td>*\u4e0d \u98fd</td><td>Dc VH</td><td>24</td><td>37.3489687</td></tr><tr><td>\u8003\u7e3e \u734e\u91d1</td><td>Na Na</td><td>429</td><td>36.8090448</td></tr><tr><td>\u4e0d\u865e \u88c1\u54e1</td><td>VJ VA</td><td>3</td><td>17.568518</td></tr><tr><td>\u56de\u982d \u7af6\u9010</td><td>VA VC</td><td>26</td><td>14.7120606</td></tr><tr><td>*\u9084\u90fd \u7167</td><td>Db VC</td><td>18</td><td>14.1291893</td></tr><tr><td>*\u767c \u4e0d\u8aa4</td><td>VD VH</td><td>2</td><td>13.8418648</td></tr><tr><td>*\u4f4e\u8ff7 \u4e4b\u969b</td><td>VH NG</td><td>10</td><td>11.9225789</td></tr><tr><td>*\u503c\u6b64 \u7d93\u6fdf \u666f\u6c23</td><td>VA Na Na</td><td>2</td><td>9.01342071</td></tr><tr><td>*\u503c\u6b64 \u7d93\u6fdf</td><td>VA Na</td><td>94</td><td>9.01342071</td></tr><tr><td>*\u7167 \u767c</td><td>VC VD</td><td>2</td><td>6.12848087</td></tr><tr><td>*\u4eba \u56de\u982d</td><td>Na VA</td><td>27</td><td>1.89617179</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "The result of Chinese collocation candidates extracted which are picked out. (the ones which have no Min LLR are singleton phrases)",
"type_str": "table",
"content": "<table><tr><td>C-collocation Candidate</td><td>POS</td><td>Patter Count</td><td>Min LLR</td></tr><tr><td>\u4e0d\u5c11 \u4eba</td><td>Ed Na</td><td>2</td><td>550.904793</td></tr><tr><td>*\u88ab \u8a8d\u70ba</td><td>PP VE</td><td>6</td><td>246.823964</td></tr><tr><td>\u666f\u6c23 \u4f4e\u8ff7</td><td>Na VH</td><td>97</td><td>79.8159904</td></tr><tr><td>\u7d93\u6fdf \u666f\u6c23 \u4f4e\u8ff7</td><td>Na Na VH</td><td>3</td><td>47.2912274</td></tr><tr><td>\u7d93\u6fdf \u666f\u6c23</td><td>Na Na</td><td>429</td><td>47.2912274</td></tr><tr><td>\u516c\u5bb6 \u98ef</td><td>Nc Na</td><td>63</td><td>42.6614685</td></tr><tr><td>*\u4e0d \u98fd</td><td>Dc VH</td><td>24</td><td>37.3489687</td></tr><tr><td>\u8003\u7e3e \u734e\u91d1</td><td>Na Na</td><td>429</td><td>36.8090448</td></tr><tr><td>\u4e0d\u865e \u88c1\u54e1</td><td>VJ VA</td><td>3</td><td>17.568518</td></tr><tr><td>\u56de\u982d \u7af6\u9010</td><td>VA VC</td><td>26</td><td>14.7120606</td></tr><tr><td>*\u9084\u90fd \u7167</td><td>Db VC</td><td>18</td><td>14.1291893</td></tr><tr><td>*\u767c \u4e0d\u8aa4</td><td>VD VH</td><td>2</td><td>13.8418648</td></tr><tr><td>*\u4f4e\u8ff7 \u4e4b\u969b</td><td>VH NG</td><td>10</td><td>11.9225789</td></tr><tr><td>*\u503c\u6b64 \u7d93\u6fdf \u666f\u6c23</td><td>VA Na Na</td><td>2</td><td>9.01342071</td></tr><tr><td>*\u503c\u6b64 \u7d93\u6fdf</td><td>VA Na</td><td>94</td><td>9.01342071</td></tr><tr><td>\u4e4b\u969b</td><td>NG</td><td>5</td><td/></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"text": "The result of Chinese collocation candidates extracted which are picked out. The shaded collocation pairs are selected by the CLASS (Greedy Alignment Linking E).",
"type_str": "table",
"content": "<table><tr><td>English collocations</td><td>Chinese collocations</td><td>LLR</td><td>Collocation Translation Prob.</td></tr><tr><td>iron rice bowl</td><td>\u9435\u98ef\u7897</td><td>103.3</td><td>0.0202</td></tr><tr><td>rice bowl</td><td>\u9435\u98ef\u7897</td><td>77.74</td><td>0.0384</td></tr><tr><td>year-end bonus</td><td>\u5e74\u7d42\u734e\u91d1</td><td>59.21</td><td>0.0700</td></tr><tr><td>economic downturn</td><td>\u7d93\u6fdf \u666f\u6c23 \u4f4e\u8ff7</td><td>32.4</td><td>0.9359</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Figures issued by the American Immigration Bureau show that most Chinese immigrants had set off from Kwangtung and Hong Kong, which is why the majority of overseas Chinese in the United States to this day are of Cantonese origin.",
"type_str": "table",
"content": "<table><tr><td>Garrison Command before sending</td></tr><tr><td>it back.</td></tr><tr><td>\u7de8\u6309\uff1a\u6b64\u8239\u7684\u78ba\u662f\u5927\u9678\u5077\u6e21\u4f86\u53f0\u8239\u96bb\uff0c\u90a3\u516b\u500b\u5b57\u53ea\u4e0d\u904e\u662f\u8b66\u7e3d\u5728\u9063\u8fd4\u524d\u7d66</td></tr><tr><td>\u5b83\u52a0\u7684\u300c\u8a18\u865f\u300d\uff01</td></tr><tr><td>Source: 1990/10 Letters to the Editor</td></tr><tr><td>Example 6</td></tr><tr><td>\u7531\u7f8e\u570b\u79fb\u6c11\u5c40\u767c\u8868\u7684\u6578\u5b57\u4f86\u770b\uff0c\u4e2d\u570b\u79fb\u6c11\u4ee5\u5f9e\u5ee3\u6771\u3001\u9999\u6e2f\u51fa\u6d77\u8005\u6700\u591a\uff0c\u6545</td></tr><tr><td>\u5230\u73fe\u5728\u70ba\u6b62\uff0c\u7f8e\u570b\u83ef\u50d1\u4ecd\u4ee5\u539f\u7c4d\u5ee3\u6771\u8005\u4f54\u5927\u591a\u6578\u3002</td></tr><tr><td>Source: 1990/09 All Across the World: The Chinese Global Village</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"text": "The aligned translations for the English word \"pay\" and their translation probability Translation Count Translation Prob. Translation Count Translation Prob.",
"type_str": "table",
"content": "<table><tr><td>\u4ee3\u50f9</td><td>34</td><td>0.1214</td><td>\u82b1\u9322</td><td>7</td><td>0.025</td></tr><tr><td>\u9322</td><td>31</td><td>0.1107</td><td>\u51fa\u9322</td><td>6</td><td>0.0214</td></tr><tr><td>\u8cbb\u7528</td><td>21</td><td>0.075</td><td>\u79df</td><td>6</td><td>0.0214</td></tr><tr><td>\u4ed8\u8cbb</td><td>16</td><td>0.0571</td><td>\u767c\u7d66</td><td>6</td><td>0.0214</td></tr><tr><td>\u9818</td><td>16</td><td>0.0571</td><td>\u4ed8\u51fa</td><td>5</td><td>0.0179</td></tr><tr><td>\u7e73</td><td>16</td><td>0.0571</td><td>\u85aa\u8cc7</td><td>5</td><td>0.0179</td></tr><tr><td>\u652f\u4ed8</td><td>13</td><td>0.0464</td><td>\u4ed8\u9322</td><td>4</td><td>0.0143</td></tr><tr><td>\u7d66</td><td>13</td><td>0.0464</td><td>\u52a0\u85aa</td><td>4</td><td>0.0143</td></tr><tr><td>\u85aa\u6c34</td><td>11</td><td>0.0393</td><td>...</td><td>...</td><td>...</td></tr><tr><td>\u8ca0\u64d4</td><td>9</td><td>0.0321</td><td>\u7a4d\u6b20</td><td>2</td><td>0.0071</td></tr><tr><td>\u8cbb</td><td>9</td><td>0.0321</td><td>\u7e73\u6b3e</td><td>2</td><td>0.0071</td></tr><tr><td>\u7d66\u4ed8</td><td>8</td><td>0.0286</td><td/><td/><td/></tr><tr><td colspan=\"6\">Before running CLASS, we obtained 10,290 English idioms, collocations, and</td></tr><tr><td colspan=\"6\">phrases together with 14,945 Chinese translations in LDOCE. After part of speech</td></tr><tr><td colspan=\"6\">tagging, we had 1,851 distinct English patterns, and 4326 Chinese patterns. To</td></tr><tr><td colspan=\"6\">calculate the statistical association of within words in a monolingual collocation and</td></tr><tr><td colspan=\"6\">across the bilingual collocations, we built N-grams for the SPC. There were 790,000</td></tr><tr><td colspan=\"6\">Chinese word bigram and 669,000 distinct English bigram. CLASS identified around</td></tr><tr><td colspan=\"6\">595,000 Chinese collocation candidates (184,000 distinct types), and 230,000 English</td></tr><tr><td colspan=\"5\">collocation candidates (135,000 distinct types) in the process.</td><td/></tr></table>"
},
"TABREF8": {
"html": null,
"num": null,
"text": "Experiment result of bilingual collocation extracted from Sinorama parallel Corpus",
"type_str": "table",
"content": "<table><tr><td># keys</td><td>#answers</td><td>#hits</td><td>#errors</td><td>Recall</td><td>Precision</td></tr><tr><td>382</td><td>273</td><td>233</td><td>40</td><td>60.9%</td><td>85.2%</td></tr></table>"
}
}
}
}