{
"paper_id": "O03-3004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:02:00.878313Z"
},
"title": "Using Punctuations and Lengths for Bilingual Sub-sentential Alignment",
"authors": [
{
"first": "Wen-Chi",
"middle": [],
"last": "Hsien",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"postCode": "300",
"settlement": "Hsinchu",
"country": "Taiwan, ROC"
}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Yeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"postCode": "300",
"settlement": "Hsinchu",
"country": "Taiwan, ROC"
}
},
"email": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"postCode": "300",
"settlement": "Hsinchu",
"country": "Taiwan, ROC"
}
},
"email": "jschang@cs.nthu.edu.tw"
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Chuang",
"suffix": "",
"affiliation": {},
"email": "tomchuang@cc.vit.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.",
"pdf_parse": {
"paper_id": "O03-3004",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new approach to aligning bilingual English and Chinese text at sub-sentential level by interleaving alphabetic texts and punctuations matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, there are renewed interests in using bilingual corpus for building systems for statistical machine translation (Brown et al. 1988 (Brown et al. , 1991 , including data-driven machine translation (2002) , computerassisted revision of translation (Jutras 2000) and cross-language information retrieval (Kwok 2001) . It is therefore useful for the bilingual corpus to be aligned at the sentence level and even sub-sentence level with very high precision (Moore 2002; Chuang, You and Chang 2002, Kueng and Su 2002) . Especially, for further analyses such as phrase alignment, word alignment (Ker and Chang 1997; Melamed 2000) and translation memory, high precision and quality alignment at sentence or sub-sentential levels would be very useful. Furthermore, alignment at sub-sentential level has the potential of improving the effectiveness of alignment at word, chunk and phrase levels and providing finer grained and more reusable translation memory.",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Brown et al. 1988",
"ref_id": null
},
{
"start": 140,
"end": 160,
"text": "(Brown et al. , 1991",
"ref_id": "BIBREF0"
},
{
"start": 205,
"end": 211,
"text": "(2002)",
"ref_id": null
},
{
"start": 255,
"end": 268,
"text": "(Jutras 2000)",
"ref_id": "BIBREF3"
},
{
"start": 310,
"end": 321,
"text": "(Kwok 2001)",
"ref_id": "BIBREF6"
},
{
"start": 461,
"end": 473,
"text": "(Moore 2002;",
"ref_id": null
},
{
"start": 474,
"end": 489,
"text": "Chuang, You and",
"ref_id": null
},
{
"start": 490,
"end": 511,
"text": "Chang 2002, Kueng and",
"ref_id": null
},
{
"start": 512,
"end": 520,
"text": "Su 2002)",
"ref_id": "BIBREF5"
},
{
"start": 597,
"end": 617,
"text": "(Ker and Chang 1997;",
"ref_id": "BIBREF4"
},
{
"start": 618,
"end": 631,
"text": "Melamed 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Much work has been reported in the literature of computational linguistics studying how to align sentences. One of the most effective approaches is length-based approach proposed by Brown et al. and by Gale and Church. Length-based approach for aligning parallel corpora has commonly been used and produces surprisingly good results for the language pair of French and English at success rates well over 96%. However, it does not perform as well for alignment of two distant languages such as Chinese-English. Furthermore, for sub-sentential alignment, length-based approach gets less effectiveness than running it in sentence level since sub-sentence has less information in length.",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "Brown et al. and by",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Punctuations based approach (Yeh, Chuang and Chang 2003 ) for sentence alignment produces high accuracy rates as same as length based approach and was independent of languages. Although the ways different languages around the world use punctuations vary, symbols such as commas and full stops are used in most languages to demarcate writing, while question and exclamation marks are used to show emphasis. However, for sub-sentential alignment, punctuation-based approach has the same problem as length-based approach -no enough information in sub-sentence since sub-sentence might be very short and just include one or two punctuations within it. Yeh, Chuang and Chang (2003) examined the results of punctuation-based sentence alignment and observed:",
"cite_spans": [
{
"start": 28,
"end": 57,
"text": "(Yeh, Chuang and Chang 2003 )",
"ref_id": "BIBREF9"
},
{
"start": 648,
"end": 676,
"text": "Yeh, Chuang and Chang (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\"Although word alignment links do cross one and other a lot, they general seem not to cross the links between punctuations. It appears that we can obtain sub-sentential alignment at clause and phrase levels from the alignment of punctuation.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Building on their work, we develop a new approach to sub-sentential alignment by interleaving the alignment of text and punctuations. In the following, we first give an example for bilingual sub-sentential alignment in Section 2. Then we introduce our probability model in Section 3. Next, we describe experimental setup and results in Section 4. We conclude in Section 5 with discussion and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Consider a pair of aligned sentences in a parallel corpus as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2."
},
{
"text": "\"My goal is simply this -to safeguard Hong Kong's way of life. This way of life not only produces impressive material and cultural benefits; it also incorporates values that we all cherish. Our prosperity and stability underpin our way of life. But, equally, Hong Kong's way of life is the foundation on which we must build our future stability and prosperity.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2."
},
{
"text": "We can observe that although word alignment links might cross one and other a lot, there exist some text-blocks as follow that general seem not to cross the links between punctuations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u6211\u7684\u76ee\u6a19\u5f88\u7c21\u55ae\uff0c\u5c31\u662f\u8981\u4fdd\u969c\u9999\u6e2f\u7684\u751f\u6d3b\u65b9\u5f0f\u3002\u9019\u500b\u751f\u6d3b\u65b9\u5f0f\uff0c\uf967\u55ae\u5728\u7269\u8cea\u548c\u6587\u5316\u65b9\u9762\u70ba\u6211\u5011\u5e36\uf92d\uf9ba \u91cd\u5927\u7684\uf9dd\u76ca\uff0c\u800c\u4e14\uf901\u878d\u5408\uf9ba\u5927\u5bb6\u90fd\u73cd\u60dc\u7684\u50f9\u503c\u89c0\u3002\u9999\u6e2f\u7684\u5b89\u5b9a\u7e41\u69ae\u662f\u6211\u5011\u751f\u6d3b\u65b9\u5f0f\u7684\u652f\u67f1\u3002\u540c\u6a23\u5730\uff0c \u6211\u5011\u672a\uf92d\u7684\u5b89\u5b9a\u7e41\u69ae\uff0c\u4ea6\u5fc5\u9808\u4ee5\u9999\u6e2f\u7684\u751f\u6d3b\u65b9\u5f0f\u70ba\u57fa\u790e\u3002",
"sec_num": null
},
{
"text": "\"My goal is simply this -\" \"\u6211\u7684\u76ee\u6a19\u5f88\u7c21\u55ae\uff0c\" \"to safeguard Hong Kong's way of life.\" \"\u5c31\u662f\u8981\u4fdd\u969c\u9999\u6e2f\u7684\u751f\u6d3b\u65b9\u5f0f\u3002\" \"This way of life not only produces impressive material and cultural benefits;\" \"\u9019\u500b\u751f\u6d3b\u65b9\u5f0f\uff0c\uf967\u55ae\u5728\u7269\u8cea\u548c\u6587\u5316\u65b9\u9762\u70ba\u6211\u5011\u5e36\uf92d\uf9ba\u91cd\u5927\u7684\uf9dd\u76ca\uff0c\" \"it also incorporates values that we all cherish.\" \"\u800c\u4e14\uf901\u878d\u5408\uf9ba\u5927\u5bb6\u90fd\u73cd\u60dc\u7684\u50f9\u503c\u89c0\u3002\" \u2026 That's what we call sub-sentences here. From the examples above, we can define that a sub-sentence is a text-block that include at least one or more punctuations. That's an unclear definition since a sentence and a paragraph also fit the definition too. However, what we want is to find out the shortest parallel textblock pairs that fit the definition. That's why in the third pair of above examples, \"\u9019\u500b\u751f\u6d3b\u65b9\u5f0f\uff0c\" is a Chinese text-block but we have to combine it with \"\uf967\u55ae\u5728\u7269\u8cea\u548c\u6587\u5316\u65b9\u9762\u70ba\u6211\u5011\u5e36\uf92d\uf9ba\u91cd\u5927\u7684\uf9dd \u76ca\uff0c\", because we can't find any English text-block correspond to \"\u9019\u500b\u751f\u6d3b\u65b9\u5f0f\uff0c\", we have to combine the two Chinese above first, than we can find the corresponding one : \"This way of life not only produces impressive material and cultural benefits;\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u6211\u7684\u76ee\u6a19\u5f88\u7c21\u55ae\uff0c\u5c31\u662f\u8981\u4fdd\u969c\u9999\u6e2f\u7684\u751f\u6d3b\u65b9\u5f0f\u3002\u9019\u500b\u751f\u6d3b\u65b9\u5f0f\uff0c\uf967\u55ae\u5728\u7269\u8cea\u548c\u6587\u5316\u65b9\u9762\u70ba\u6211\u5011\u5e36\uf92d\uf9ba \u91cd\u5927\u7684\uf9dd\u76ca\uff0c\u800c\u4e14\uf901\u878d\u5408\uf9ba\u5927\u5bb6\u90fd\u73cd\u60dc\u7684\u50f9\u503c\u89c0\u3002\u9999\u6e2f\u7684\u5b89\u5b9a\u7e41\u69ae\u662f\u6211\u5011\u751f\u6d3b\u65b9\u5f0f\u7684\u652f\u67f1\u3002\u540c\u6a23\u5730\uff0c \u6211\u5011\u672a\uf92d\u7684\u5b89\u5b9a\u7e41\u69ae\uff0c\u4ea6\u5fc5\u9808\u4ee5\u9999\u6e2f\u7684\u751f\u6d3b\u65b9\u5f0f\u70ba\u57fa\u790e\u3002",
"sec_num": null
},
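To make the notion of a text-block concrete, here is a minimal Python sketch (an editorial illustration, not the authors' code) that cuts a paragraph into candidate text-blocks, each being the shortest span that ends in a punctuation mark; the punctuation inventories and the split_blocks helper are assumptions made for this example.

```python
import re

# Sub-sentence segmentation sketch.  The punctuation inventories below are
# illustrative assumptions, not the exact sets used in the paper.
EN_PUNCT = r"[,.;:!?\-]"
ZH_PUNCT = "[\uff0c\u3002\uff1b\uff1a\uff01\uff1f]"   # fullwidth , . ; : ! ?

def split_blocks(text, punct_pattern):
    """Split text into text-blocks: each is the shortest span ending in
    (at least) one punctuation mark, keeping that punctuation."""
    blocks = re.findall(".*?" + punct_pattern + r"+\s*", text)
    return [b.strip() for b in blocks if b.strip()]

english = ("My goal is simply this - to safeguard Hong Kong's way of life. "
           "This way of life not only produces impressive material and "
           "cultural benefits; it also incorporates values that we all cherish.")

print(split_blocks(english, EN_PUNCT))
# ['My goal is simply this -', "to safeguard Hong Kong's way of life.", ...]
```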
{
"text": "In this section we describe our probability model. To do so, we will first introduce some necessary notation. Let E be an English paragraph e 1 , e 2 ,\u2026,e m and C be a Chinese paragraph c 1 , c 2 ,\u2026,c n , which e i and c j is a text-blocks as described in Section 2. We define a link l(e i , c j ) to exist if e i and c j are translation ( or part of a translation ) of one another. We define null link l(e i , c 0 ) to exist if e i does not correspond to a translation of any c j . The null link l(e 0 , c j ) is defined similarly. An alignment A for two paragraphs E and C is a set of links such that every text-block in E and C participates in at least one link, and a text-block linked to e 0 or c 0 participates in no other links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
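As a concrete, hypothetical rendering of this notation, a link can be stored as a pair of text-block indices, with index 0 reserved for the null block; the Link and Alignment names below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Link:
    """A link l(e_i, c_j).  Index 0 stands for the null block e_0 or c_0."""
    i: int   # index of the English text-block (1..m, or 0 for a null link)
    j: int   # index of the Chinese text-block (1..n, or 0 for a null link)

# An alignment A is a set of links in which every text-block of E and C
# participates in at least one link, and a block linked to the null block
# participates in no other link.
Alignment = List[Link]
```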
{
"text": "We define the alignment problem as finding the alignment A that maximizes P(A|E, C). An alignment A consists of t links {l 1 , l 2 ,\u2026, l t }, where each l k = l(e ik , c jk ) for some i k and j k .We will refer to consecutive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
{
"text": "subsets of A as } ,..., , { 1 j i i j i l l l l + =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
{
"text": ", Given this notation, P(A|E, C) can be decomposed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
{
"text": "\u220f = \u2212 = = t k k k t l C E l P F E l P F E A P 1 1 1 1 ) , , | ( ) , | ( ) , | (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
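This factorization suggests a left-to-right search for the best alignment. The following is a minimal dynamic-programming sketch that assumes, as in Gale and Church, that links are monotonic and that each link probability depends only on the text-blocks it covers; the link_prob callback is a hypothetical stand-in for P(l_k | E, C, l_1^{k-1}) and the match-type inventory is illustrative.

```python
import math

def best_alignment(E, C, link_prob,
                   match_types=((1, 1), (1, 0), (0, 1), (1, 2), (2, 1), (2, 2))):
    """Return the monotonic alignment maximizing the product of link
    probabilities, as a list of ((i0, i1), (j0, j1)) block-range pairs."""
    m, n = len(E), len(C)
    NEG = float("-inf")
    best = [[NEG] * (n + 1) for _ in range(m + 1)]   # best[i][j]: log-prob of
    back = [[None] * (n + 1) for _ in range(m + 1)]  # aligning E[:i] with C[:j]
    best[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if best[i][j] == NEG:
                continue
            for di, dj in match_types:
                ni, nj = i + di, j + dj
                if ni > m or nj > n:
                    continue
                p = link_prob(E[i:ni], C[j:nj])
                if p <= 0.0:
                    continue
                score = best[i][j] + math.log(p)
                if score > best[ni][nj]:
                    best[ni][nj] = score
                    back[ni][nj] = (i, j)
    links, i, j = [], m, n
    while (i, j) != (0, 0) and back[i][j] is not None:
        pi, pj = back[i][j]
        links.append(((pi, i), (pj, j)))
        i, j = pi, pj
    return list(reversed(links))
```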
{
"text": "For each condition probability, given any pair e i and c j , the link probabilities can be determined directly from combining the probability of length-based model with punctuation-based model. From the paper of Gale and Church in 1993 for length-based model, we know the match probability is Prob( \u03b4 | match ) Prob(match) and Prob( \u03b4 | match ) can be estimated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
{
"text": "Prob( \u03b4 | match ) = 2 ( 1 -Prob( |\u03b4| ) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
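As a small numeric sketch of this estimate (assuming the Gale-and-Church form of \u03b4, computed from the two block lengths and the length-model parameters c and s^2; math.erf supplies the standard normal CDF):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_delta_given_match(d):
    """Prob(delta | match) = 2 * (1 - Prob(|delta|))."""
    return 2.0 * (1.0 - normal_cdf(abs(d)))

def delta(l1, l2, c, s2):
    """Gale-and-Church style delta: how surprising length l2 is given l1,
    where c is the expected number of L2 characters per L1 character and
    s2 is its variance."""
    return (l2 - c * l1) / math.sqrt(l1 * s2)
```

For instance, with c = 3.23 and s2 = 0.93 (the values used in Section 4), prob_delta_given_match(delta(30, 97, 3.23, 0.93)) is close to 1, since 97 is almost exactly 3.23 times 30.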
{
"text": "Where Prob( |\u03b4| ) is the probability that random variable, z, with a standardized ( mean zero, variance one) normal distribution, has magnitude at least as large as |\u03b4|. That is, Then, from Yeh, Chuang and Chang (2003) , for punctuation-based model, we know: where e i and c i is \u03bb, one, or two punctuations, e i , c j = English and Chinese text-block pe 1 pe 2 \u2026pe m = pE, the English punctuations, pc 1 pc 2 \u2026pc n = pC, the Chinese punctuations, |pe i | and |pc i | are the number 0f punctuations in pe i and pc i respectively, P(pc i , pe i ) = probability of pc i translates into pe i , Thus, for each link l k given E, C and l, we can computing the probability as following: ",
"cite_spans": [
{
"start": 190,
"end": 218,
"text": "Yeh, Chuang and Chang (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Model",
"sec_num": "3."
},
{
"text": "P(e i , c j ) = \u220f \u220f \u2212 = \u2212 = \u00d7 \u00d7 1 1 1 1 |) | , 0 ( ) , ( ) 0 |, (| ) , ( |) | |, (| ) , ( m i j n j j i i j i j i pc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where",
"sec_num": null
},
{
"text": "P( l k |E, C, l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Where",
"sec_num": null
},
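Putting the length and punctuation pieces together, a link probability of the shape P(\u03b4 | match) P(match) \u00d7 P(e_i, c_j) might be sketched as below, building on the delta and prob_delta_given_match helpers above. The positional pairing of punctuation marks is a deliberate simplification of the punctuation model, punct_trans_prob stands in for the translation probabilities of Table 1, and all names, the L1/L2 direction of the length model and the smoothing floor are assumptions.

```python
EN_MARKS = set(",.;:!?-")
ZH_MARKS = set("\uff0c\u3002\uff1b\uff1a\uff01\uff1f")

def punctuation_score(e_blocks, c_blocks, punct_trans_prob, floor=1e-6):
    """Simplified stand-in for P(e_i, c_j): multiply the translation
    probabilities of punctuation marks paired up in order, and penalize
    marks left unmatched on either side."""
    pe = [ch for b in e_blocks for ch in b if ch in EN_MARKS]
    pc = [ch for b in c_blocks for ch in b if ch in ZH_MARKS]
    score = 1.0
    for en_p, zh_p in zip(pe, pc):
        score *= punct_trans_prob.get((en_p, zh_p), floor)
    return score * floor ** abs(len(pe) - len(pc))

def link_prob(e_blocks, c_blocks, punct_trans_prob, prob_match, c, s2,
              floor=1e-6):
    """P(l_k | ...) ~ P(delta | match) * P(match) * P(e_i, c_j)."""
    match_type = (len(e_blocks), len(c_blocks))      # e.g. (1, 1), (2, 1), ...
    len_e = sum(len(b) for b in e_blocks)
    len_c = sum(len(b) for b in c_blocks)
    if len_e == 0 or len_c == 0:
        length_part = 1.0    # null links: no length evidence, rely on P(match)
    else:
        # Assumed direction: with c = 3.23, the Chinese block plays the role
        # of L1 and the English block the role of L2.
        length_part = prob_delta_given_match(delta(len_c, len_e, c, s2))
    return (length_part
            * prob_match.get(match_type, floor)
            * punctuation_score(e_blocks, c_blocks, punct_trans_prob, floor))
```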
{
"text": "In order to assess the performance of our sub-sentential alignment model, we selected top ten bilingual articles from official record of proceedings of Hong Kong Legislative Council at Oct. 7, 1992 as our experimental data. For probability of punctuation, We use all the data such as punctuation translation probability (Table 1 )and category frequency Prob(match) ( Table 2) from Yeh, Chuang and Chang (2003) directly. For probability of length, we set c = 3.23 , standard variance = 0.93 and match probability as ",
"cite_spans": [
{
"start": 381,
"end": 409,
"text": "Yeh, Chuang and Chang (2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "(Table 1",
"ref_id": null
},
{
"start": 367,
"end": 375,
"text": "Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental result",
"sec_num": "4."
},
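For concreteness, these settings can be plugged into the sketches from Section 3 as follows. The match-type probabilities are the values listed in Table 3 (written here as (English blocks, Chinese blocks), an ordering assumed for illustration); the empty punctuation table and the sample call are hypothetical placeholders, not the actual experimental setup.

```python
# Match-type probabilities from Table 3 (sentence-level values, reused for
# sub-sentences as discussed in Section 5).
prob_match = {
    (1, 0): 0.000197, (0, 1): 0.000197, (1, 1): 0.6513, (2, 2): 0.0066,
    (1, 2): 0.0526,   (2, 1): 0.1776,   (1, 3): 0.0066, (3, 1): 0.0658,
    (1, 4): 0.00132,  (4, 1): 0.0132,
}

# Placeholder for the punctuation translation probabilities of Table 1; the
# real values come from Yeh, Chuang and Chang (2003).
punct_trans_prob = {}

# Length parameters used in the experiment: c = 3.23, variance s^2 = 0.93.
p = link_prob(["My goal is simply this -"],
              ["\u6211\u7684\u76ee\u6a19\u5f88\u7c21\u55ae\uff0c"],
              punct_trans_prob, prob_match, c=3.23, s2=0.93)
```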
{
"text": "We propose a model combining length-based approach with punctuation-based approach to do subsentential alignment and we got about 93% precision rates here. It was not bad but still had a lot of space to improve. We should change the sub-sentence match type probability first of all. We use the probability of sentence match type instead of sub-sentence match type in this experiment since we don't do subsentence training first. It causes a problem, because a sub-sentence has higher probability to include two or three text-blocks within it than a sentence do. An inverted sentence causes the second problem here, no matter length-based or punctuation-based approach you used; they cannot solve this kind of problem. We might add lexical information in it to solve this kind of problem in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "5."
}
],
"back_matter": [
{
"text": " Table A . all incorrect alignments of this experiment. Shaded parts indicate imprecision in alignment results. We calculated the precision rates by dividing the number of unshaded sentences (counting both English and Chinese sentences) by total number of sentences proposed. Since we did not exclude aligned pair using a threshold, the recall rate should be the same as the precision rate. ",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table A",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Aligning sentences in parallel corpora",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1991,
"venue": "29th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P. F., J. C. Lai and R. L. Mercer (1991), 'Aligning sentences in parallel corpora', in 29th Annual Meeting of the Associa- tion for Computational Linguistics, Berkeley, CA, USA. pp. 169-176.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aligning Sentences in Bilingual Corpora Using Lexical Information",
"authors": [
{
"first": "Stanley",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1993,
"venue": "Lecture Notes in Artificial Intelligence",
"volume": "2499",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Stanley F. (1993), Aligning Sentences in Bilingual Corpora Using Lexical Information. In Proceedings Chuang, T., G.N. You, J.S. Chang (2002) Adaptive Bilingual Sentence Alignment, Lecture Notes in Artificial Intelligence 2499, 21-30.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A program for aligning sentences in bilingual corpus",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "75--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, William A. & Kenneth W. Church (1993), A program for aligning sentences in bilingual corpus. In Computational Linguistics, vol. 19, pp. 75-102.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Automatic Reviser: The TransCheck System",
"authors": [
{
"first": "J-M",
"middle": [],
"last": "Jutras",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jutras, J-M 2000. An Automatic Reviser: The TransCheck System, In Proc. of Applied Natural Language Processing, 127-134.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A class-based approach to word alignment",
"authors": [
{
"first": "Sue",
"middle": [
"J"
],
"last": "Ker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jason",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "",
"pages": "313--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ker, Sue J. & Jason S. Chang (1997), A class-based approach to word alignment. In Computational Linguistics, 23:2, pp. 313-344.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Robust Cross-Domain Bilingual Sentence Alignment Model",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Kueng",
"suffix": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kueng, T.L. and Keh-Yih Su, 2002. A Robust Cross-Domain Bilingual Sentence Alignment Model, In Proceedings of the 19th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "NTCIR-2 Chinese, Cross-Language Retrieval Experiments Using PIRCS",
"authors": [
{
"first": "K",
"middle": [
"L"
],
"last": "Kwok",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second NTCIR Workshop Meeting",
"volume": "",
"issue": "",
"pages": "14--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, KL. 2001. NTCIR-2 Chinese, Cross-Language Retrieval Experiments Using PIRCS. In Proceedings of the Second NTCIR Workshop Meeting, pp. (5) 14-20, National Institute of Informatics, Japan.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A portable algorithm for mapping bitext correspondence",
"authors": [
{
"first": "I",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dan",
"suffix": ""
}
],
"year": 1997,
"venue": "The 35th Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan (1997), A portable algorithm for mapping bitext correspondence. In The 35th Conference of the Association for Computational Linguistics (ACL 1997), Madrid, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentence and word alignment between Chinese and English",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Piao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Songlin",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piao, Scott Songlin 2000 Sentence and word alignment between Chinese and English. Ph.D. thesis, Lancaster University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using Punctuations for Bilingual Sentence Alignment -Preparing Parallel Corpus for Distribution by the ACLCLP",
"authors": [
{
"first": "Kevin",
"middle": [
"C"
],
"last": "Yeh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Chuang",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin C. Yeh, Thomas C. Chuang, Jason S. Chang (2003), Using Punctuations for Bilingual Sentence Alignment -Preparing Parallel Corpus for Distribution by the ACLCLP",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "\u03b4 directly from the length of two portions of text, l 1 and l 2 , and the two parameters, c and s 2 . (Where c is the expected number of characters in L 2 per character in L 1 , and s 2 is the variance of the number of characters in L 2 per character in L 1 .) That is, Prob( |\u03b4| ) is computed by integrating a standard normal distribution ( with mean zero and variance 1).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "k-1 ) = P( \u03b4 | match )P(match) * P(e i , c i ) , So",
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Punctuation Translation probability",
"content": "<table><tr><td>English</td><td>Chinese</td><td>Match</td></tr><tr><td>Pun.</td><td>Pun.</td><td>Type Counts Probability</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "After aligning by our model, we got 94 parallel records from the ten articles, and had precision rate at 92.55%. To calculate precision rate, we count English and Chinese sub-sentences isolated, so were the error records. For detail, refer to Appendix. Following table show the result:",
"content": "<table><tr><td/><td colspan=\"6\">: P(match) Category Frequency Prob(match)</td><td/></tr><tr><td>Match type</td><td>1-1</td><td>1-0, 0-1</td><td>1-2</td><td>2-1</td><td>1-3</td><td>1-4</td><td>1-5</td></tr><tr><td>Probability</td><td colspan=\"4\">0.65 0.000197 0.0526 0.178</td><td colspan=\"3\">0.066 0.0013 0.00013</td></tr><tr><td/><td colspan=\"5\">Table 3. Match probability of sentences</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Match Type</td><td>Probability</td><td/><td/><td/></tr><tr><td/><td/><td>1-0</td><td/><td>0.000197</td><td/><td/><td/></tr><tr><td/><td/><td>0-1</td><td/><td>0.000197</td><td/><td/><td/></tr><tr><td/><td/><td>1-1</td><td/><td>0.6513</td><td/><td/><td/></tr><tr><td/><td/><td>2-2</td><td/><td>0.0066</td><td/><td/><td/></tr><tr><td/><td/><td>1-2</td><td/><td>0.0526</td><td/><td/><td/></tr><tr><td/><td/><td>2-1</td><td/><td>0.1776</td><td/><td/><td/></tr><tr><td/><td/><td>1-3</td><td/><td>0.0066</td><td/><td/><td/></tr><tr><td/><td/><td>3-1</td><td/><td>0.0658</td><td/><td/><td/></tr><tr><td/><td/><td>1-4</td><td/><td>0.00132</td><td/><td/><td/></tr><tr><td/><td/><td>4-1</td><td/><td>0.0132</td><td/><td/><td/></tr><tr><td>Article</td><td/><td colspan=\"2\"># of sub-sentence</td><td colspan=\"2\">errors</td><td/><td>Prec(%)</td></tr><tr><td colspan=\"2\">Official record of proceed-</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">ings of Hong Kong Legisla-</td><td>188</td><td/><td>14</td><td/><td/><td>92.55</td></tr><tr><td>tive Council</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null
}
}
}
}