{
"paper_id": "O04-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:00:53.383770Z"
},
"title": "Collocational Translation Memory Extraction Based on Statistical and Linguistic Information",
"authors": [
{
"first": "Jia-Yan",
"middle": [],
"last": "Jian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Yu-Chia",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": "jschang@cs.nthu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a new method for bilingual collocation extraction from a parallel corpus to provide phrasal translation memory. The method integrates statistical and linguistic information for effective extraction of collocations. The linguistic information includes parts of speech, chunks, and clauses. With an implementation of the method, we obtain first an extended list of collocations from monolingual corpora such as British National Corpus (BNC). Subsequently, we exploit the list to identify English collocations in Sinorama Parallel Corpus (SPC). Finally, we use word alignment techniques to retrieve the translation equivalent of English collocations from the bilingual corpus, so as to provide phrasal translation memory for machine translation system. Based on the strength of chunk and clause analyses, we are able to extract a large number of collocations and translations with much less time and effort than those required by N-gram analysis or full parsing. Furthermore, we also consider longer collocation pattern such as a preposition involved in VN collocation. In the future, we plan to extend the method to other types of collocation.",
"pdf_parse": {
"paper_id": "O04-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a new method for bilingual collocation extraction from a parallel corpus to provide phrasal translation memory. The method integrates statistical and linguistic information for effective extraction of collocations. The linguistic information includes parts of speech, chunks, and clauses. With an implementation of the method, we obtain first an extended list of collocations from monolingual corpora such as British National Corpus (BNC). Subsequently, we exploit the list to identify English collocations in Sinorama Parallel Corpus (SPC). Finally, we use word alignment techniques to retrieve the translation equivalent of English collocations from the bilingual corpus, so as to provide phrasal translation memory for machine translation system. Based on the strength of chunk and clause analyses, we are able to extract a large number of collocations and translations with much less time and effort than those required by N-gram analysis or full parsing. Furthermore, we also consider longer collocation pattern such as a preposition involved in VN collocation. In the future, we plan to extend the method to other types of collocation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Example-based machine translation (EBMT), a corpus-based MT method, has been recently suggested as an efficient step toward automatic translation (Nagao, 1981; Kitano, 1993 , Carl, 1999 , Andrimanankasian et al., 1999 Brown, 2000) . Under the approach, systems exploited examples similar to input and adjusted the translations to obtain the result. Translations are preprocessed and stored in a translation memory which serves as an archive of existing translation for MT system to reuse. Nowadays, there have been a number of transducers applied to convert sentences in bilingual corpus into translation patterns, which can be further exploited as a translation memory, such as Transit 1 , Deja-Vu 2 , TransSearch 3 , TOTALrecall 4 , and so on. A problem that most MT system may encounter is the collocational translation if the system intends not to literally translate the input text. This smaller syntax unit not only facilitates a more native-like translation, but also enhances the performance of recent EBMT system. Elastic collocation structure provides more flexibility in handle translation pattern as in \"\u2026not yet to take what he wants into consideration\u2026\"",
"cite_spans": [
{
"start": 146,
"end": 159,
"text": "(Nagao, 1981;",
"ref_id": "BIBREF6"
},
{
"start": 160,
"end": 172,
"text": "Kitano, 1993",
"ref_id": "BIBREF5"
},
{
"start": 173,
"end": 185,
"text": ", Carl, 1999",
"ref_id": "BIBREF2"
},
{
"start": 186,
"end": 217,
"text": ", Andrimanankasian et al., 1999",
"ref_id": null
},
{
"start": 218,
"end": 230,
"text": "Brown, 2000)",
"ref_id": "BIBREF1"
},
{
"start": 731,
"end": 732,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using valuable linguistic information-chunk and clause analyses, we can retrieve Verb-Noun collocations from a large corpus (i.e. BNC) with good quality and quantity. We further use this collocation type list to identify the concise collocational instances in a bilingual corpus (i.e. SPC). We also use word-alignment technique to extract the matching translation of verb and noun respectively, so as to obtain phrasal translation memory. The detailed approach is described in this section:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Collocational Translation Memory",
"sec_num": null
},
{
"text": "CoNLL-2000 5 shared task considered text chunking as a process that divides a text into syntactically correlated parts of words. With the benefits of chunk information, we can chunk the sentence into smaller syntactic structure which facilitates precise collocation extraction. It becomes easier to identify the argument-predicate relationship between each chunk, and save more time to extract as opposed to full parsing. Take a passage in CoNLL-2000 benchmark for example:",
"cite_spans": [
{
"start": 11,
"end": 12,
"text": "5",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk and Clause Information Integrated",
"sec_num": "2.1"
},
{
"text": "Confidence/B-NP in/B-PP the/B-NP pound/I-NP is/B-VP widely/I-VP expected/I-VP to/I-VP take/I-VP another/B-NP sharp/I-NP dive/I-NP if/B-SBAR trade/B-NP figures/I-NP for/B-PP September/B-NP Note: I-NP for noun phrase words and I-VP for verb phrase words. Most chunk types have two different chunk tags: B-CHUNK for the first word of the chunk and I-CHUNK for the other words in the same chunk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk and Clause Information Integrated",
"sec_num": "2.1"
},
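The B-/I- grouping and adjacent-chunk extraction described above can be sketched in a few lines of Python. This is an illustrative reconstruction under the stated tag convention, not the authors' chunker; `group_chunks` and `extract_vn` are hypothetical helper names.

```python
# Sketch: group CoNLL-2000 style chunk tags into chunks, then extract a
# Verb-Noun pair from the last words of adjacent VP + NP chunks.

def group_chunks(tagged_tokens):
    """Group (word, tag) pairs into (chunk_type, [words]) using B-/I- tags."""
    chunks = []
    for word, tag in tagged_tokens:
        kind = tag.split("-", 1)[1] if "-" in tag else tag
        if tag.startswith("B-") or not chunks or chunks[-1][0] != kind:
            chunks.append((kind, [word]))   # B- tag or new chunk type
        else:
            chunks[-1][1].append(word)      # I- tag continues the chunk
    return chunks

def extract_vn(chunks):
    """Take the last word of each adjacent VP + NP chunk pair."""
    pairs = []
    for (t1, w1), (t2, w2) in zip(chunks, chunks[1:]):
        if t1 == "VP" and t2 == "NP":
            pairs.append((w1[-1], w2[-1]))
    return pairs

sent = [("Confidence", "B-NP"), ("in", "B-PP"), ("the", "B-NP"),
        ("pound", "I-NP"), ("is", "B-VP"), ("widely", "I-VP"),
        ("expected", "I-VP"), ("to", "I-VP"), ("take", "I-VP"),
        ("another", "B-NP"), ("sharp", "I-NP"), ("dive", "I-NP")]

print(extract_vn(group_chunks(sent)))  # [('take', 'dive')]
```

The sketch covers only the adjacent VP+NP case; the paper also considers the VP+PP+NP and VP+NP+PP patterns, which would extend `extract_vn` with a three-chunk window.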
{
"text": "The words in the same chunk can be further grouped together (as in Table 1 ). With chunk information, we can extract the target VN collocation, \"take \u2026 dive\" from the text by considering the last word of each adjacent VP and NP chunks. We built a robust and efficient chunker from the training data of the CoNLL shared task, with over 93% precision and recall 6 . In some cases, only considering the chunk information is not enough. For example, the sentence \"\u2026the attitude he had towards the country is positive\u2026\" may cause problem. With the chunk information, the system extracts out the type have towards the country as VP + PP + NP, yet this one is erroneous because it cuts across two clauses. To avoid this case, we further take the clause information into account.",
"cite_spans": [
{
"start": 360,
"end": 361,
"text": "6",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Chunk and Clause Information Integrated",
"sec_num": "2.1"
},
{
"text": "With the training data from CoNLL-2001, we built an efficient clause model based on HMM to identify the clause relation between words. The language model provides sufficient information to avoid extracting wrong VN collocation instances. Examples show as follows (additional clause tags will be attached):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk and Clause Information Integrated",
"sec_num": "2.1"
},
{
"text": "(1) \u2026.the attitude (S* he has *S) toward the country (2) (S* I think (S* that the people are most concerned with the question of (S* when conditions may become ripe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk and Clause Information Integrated",
"sec_num": "2.1"
},
{
"text": "As a result, we can avoid the verb from being combined with the irrelevant noun as its collocate (as in (1)) or extracting the adjacent noun serving as the subject of another clause (as in (2)). When the sentences in the corpus are preprocessed with the chunk and clause identification, we can consequently assure high accuracy of collocation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "*S)S)S)",
"sec_num": null
},
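The clause filter amounts to rejecting a candidate verb-noun pair whenever a clause boundary falls between the two words. A minimal sketch follows; for simplicity it treats the clause tags as separate tokens, whereas in the corpus they attach to words, and `same_clause` is a hypothetical helper, not the authors' HMM-based model.

```python
# Sketch: a VN candidate is kept only when no clause boundary tag
# ("(S*" opening or "*S)" closing) lies strictly between the two words.

def same_clause(tokens, i, j):
    """True if no clause boundary lies strictly between positions i and j."""
    lo, hi = sorted((i, j))
    return not any(t in ("(S*", "*S)") for t in tokens[lo + 1:hi])

tokens = ["the", "attitude", "(S*", "he", "has", "*S)",
          "toward", "the", "country"]

print(same_clause(tokens, 4, 8))  # False: "has" ... "country" cross a clause
print(same_clause(tokens, 3, 4))  # True: "he" and "has" share a clause
```

This reproduces the rejection in example (1): the pair ("has", "country") is discarded because the clause closing "*S)" separates them.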
{
"text": "2 2 2 1 1 1 2 2 1 1 1 ) 1 ( ) 1 ( ) 1 ( ) 1 ( log 2 ) ; ( 2 1 1 2 k n k k n k k n k n k p p p p p p p y x LLR \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = k 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-likelihood ratio : LLR(x;y)",
"sec_num": null
},
{
"text": "of pairs that contain x and y simultaneously. k 2 : of pairs that contain x but do not contain y. n 1 : of pairs that contain y n 2 : of pairs that does not contain y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-likelihood ratio : LLR(x;y)",
"sec_num": null
},
{
"text": "p 1 = k 1 / n 1 p 2 = k 2 / n 2 p = (k 1 +k 2 ) / (n 1 +n 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-likelihood ratio : LLR(x;y)",
"sec_num": null
},
{
"text": "A huge set of collocation candidates can be obtained from BNC, via the process of integrating chunk and clause information. We here consider three prevalent Verb-Noun collocation structures in corpus: VP+NP, VP+PP+NP, and VP+NP+PP. Exploiting Logarithmic Likelihood Ratio (LLR) statistics, we can calculate the strength of association between each two collocates. The collocational type with threshold higher than 7.88 (confidence level 95%) will serve as one entry in our collocation type list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Collocation Types",
"sec_num": "2.2"
},
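The LLR score defined above can be computed directly from the four counts. The following is a hedged sketch of the standard Dunning-style log-likelihood ratio using the paper's variable names, not the authors' exact code:

```python
# Sketch: log-likelihood ratio LLR(x;y) from co-occurrence counts.
from math import log

def llr(k1, n1, k2, n2):
    """k1 = pairs with both x and y, out of n1 pairs containing y;
    k2 = pairs with x but not y, out of n2 pairs not containing y."""
    p1 = k1 / n1
    p2 = k2 / n2
    p = (k1 + k2) / (n1 + n2)

    def ll(k, n, q):
        # Log-likelihood of k successes in n trials under rate q,
        # guarding the 0 * log(0) cases.
        s = 0.0
        if k > 0:
            s += k * log(q)
        if n - k > 0:
            s += (n - k) * log(1 - q)
        return s

    return 2 * (ll(k1, n1, p1) + ll(k2, n2, p2)
                - ll(k1, n1, p) - ll(k2, n2, p))

# Strongly associated pair: x occurs far more often with y than without.
score = llr(50, 1000, 20, 9000)
print(score > 7.88)  # True: exceeds the paper's threshold
```

When the two rates are identical (p1 == p2 == p), the score is exactly zero, which matches the intuition that LLR measures how much the "x depends on y" hypothesis beats the independence hypothesis.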
{
"text": "We subsequently identify collocation instances in the Sinorama Parallel Corpus (SPC) matching the collocation types extracted from BNC. Making use of the sequence of chunk types, we again single out the adjacent structures: VP+NP, VP+PP+NP, or VP+NP+PP. With the help of chunk and clause information, we thus find the valid instances where the expected collocation types are located, so as to build a collocational concordance. Moreover, the quantity and quality of BNC also facilitate the collocation identification in another smaller bilingual corpus with better statistic measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Collocation Instances",
"sec_num": "2.3"
},
{
"text": "When accurate instances are obtained from bilingual corpus, we continue to integrate the statistical word-alignment techniques (Melamed, 1997) and dictionaries to find the translation candidates for each of the two collocates. We first locate the translation of the noun. Subsequently, we locate the verb nearest to the noun translation to find the translation for the verb. We can think of collocation with corresponding translations as a kind of translation memory (shows in Table 2 ). Chinese sentence If in this time no one shows concern for them, and directs them to correct thinking, and teaches them how to express and release emotions, this could very easily leave them with a terrible personality complex they can never resolve. Occasionally some kungfu movies may appeal to foreign audiences, but these too are exceptions to the rule.",
"cite_spans": [
{
"start": 127,
"end": 142,
"text": "(Melamed, 1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 477,
"end": 484,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Extracting Collocational Translation Equivalents in Bilingual Corpus",
"sec_num": "2.4"
},
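The two-step heuristic above (anchor the noun's translation first, then pick the verb candidate nearest to it) can be sketched as follows. This is an illustrative reconstruction, not the authors' system; the toy dictionary, the tokenized Chinese sentence, and `align_collocation` are all hypothetical.

```python
# Sketch: locate the noun translation in the target sentence, then choose
# the verb translation candidate closest to that position.

def align_collocation(verb, noun, target_tokens, dictionary):
    """Return (verb_translation, noun_translation), or None if not found."""
    noun_pos = noun_trans = None
    for i, tok in enumerate(target_tokens):
        if tok in dictionary.get(noun, ()):
            noun_pos, noun_trans = i, tok   # anchor on the noun first
            break
    if noun_pos is None:
        return None
    best = None
    for i, tok in enumerate(target_tokens):
        if tok in dictionary.get(verb, ()):
            # keep the verb candidate nearest to the noun's position
            if best is None or abs(i - noun_pos) < abs(best[0] - noun_pos):
                best = (i, tok)
    return (best[1], noun_trans) if best else None

dictionary = {"take": ["chi", "na"], "medicine": ["yao"]}
tokens = ["ta", "chi", "le", "yao"]   # toy romanized Chinese sentence
print(align_collocation("take", "medicine", tokens, dictionary))
# ('chi', 'yao')
```

Anchoring on the noun reflects the discussion in Section 4: the noun's translation is usually less ambiguous, and proximity then disambiguates the verb ("chi" rather than "na" for take medicine).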
{
"text": "We extracted VN collocations from the BNC which contains about 4 million sentences, and obtained 631,638 VN, 15,394 VPN, and 14,008 VNP collocation types with an implementation of the proposed method. We continued to identify 26,315VN, 3,457 VPN, and 4,406 VNP collocation instances in SPC and generated eligible translation memory via word-alignment techniques. The implementation result of BNC and SPC shows in the Table 3 , 4, and 5. That means they would already be exerting their influence by the time the microwave background was born.",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 424,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Implementation and evaluation",
"sec_num": "3"
},
{
"text": "The Davies brothers, Adrian (who scored 14 points) and Graham (four), exercised an important creative influence on Cambridge fortunes while their flankers Holmes and Pool-Jones were full of fire and tenacity in the loose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exercise influence",
"sec_num": null
},
{
"text": "Fortunately, George V had worked well with his father and knew the nature of the current political trends, but he did not wield the same influence internationally as his esteemed father.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wield influence",
"sec_num": null
},
{
"text": "The cab extended its influence into the non-government sector, funding research by the Cathedral Advisory Commission and the Royal Society for the Protection of Birds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extend influence",
"sec_num": null
},
{
"text": "The general standard of farming was good, reflecting the influence of the sons who had attained either a degree or a diploma in agriculture before returning home.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflect influence",
"sec_num": null
},
{
"text": "To break up the Union now would diminish our influence for good in the world, just at the time when it is most needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diminish influence",
"sec_num": null
},
{
"text": "In general, women have not benefited much in the job market from capitalist industrialization nor have they gained much influence in society outside the family through political channels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gain influence",
"sec_num": null
},
{
"text": "To try and counteract the influence of the extremists, the moderate wing of the party launched a Labour Solidarity Campaign in 1981.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counteract influence",
"sec_num": null
},
{
"text": "Whether the curbs on police investigation will reduce police influence on the outcome of the criminal process is not easy to determine. Young and Bion , 1980b ) even when it is the age at which words are first read rather than heard that is under investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduce influence",
"sec_num": null
},
{
"text": "As for each collocation type, we randomly selected 100 test sentences for manual evaluation. A human judge, who majored in Foreign Languages, assessed the result of the matching translation. The evaluation was done by judging whether the corresponding collocational translation is valid or not. The three levels of quality were set: satisfactory translation, approximant translation (partial matching), and unacceptable translation. The examples of each level are shown in Table 6 . Thus when Chinpao Shan put out its advertisement last year, looking for new people to develop its related enterprises, the notice frankly stated \"Southern Taiwanese preferred. approximant translation Ah-ying relates that \"Teacher Chang\" friendly and easy-going, is always there to answer her questions. She even goes to him for answers when her friends have legal questions. unacceptable translation Said one observer, \"If I can speak bluntly, the mainlanders are robbing graves of their treasures and smuggling them away, and the situation is bad. In reality, though, it is Taiwan that is behind it all committing the crime.",
"cite_spans": [],
"ref_spans": [
{
"start": 473,
"end": 480,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Reduce influence",
"sec_num": null
},
{
"text": "The evaluation result indicates an average precision rate of 89 % with regard to both satisfactory and approximant translation memory (shows in Table 7 ). The average precision of translation memory: 72.3%",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Reduce influence",
"sec_num": null
},
{
"text": "The average precision of translation memory (*): 89.3%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduce influence",
"sec_num": null
},
{
"text": "(*) stands for the numbers of translation memory which includes approximant translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduce influence",
"sec_num": null
},
{
"text": "Collocation, a hallmark of near native speaker, is an important area in translation yet has long been neglected. Traditional machine translation tends to translate input texts word by word, which easily leads to literal translation. Therefore, even with abundant vocabulary from dictionary and grammar rule-based model systems still fail to generate fluent translation into a target language. For example, with the lack of collocational knowledge, machine translation system may recognize take as \"na\" (i.e. take away) and medicine as \"yao\" (i.e. medicine) in Chinese respectively. Thus, systems are inclined to literally translate take medicine into \"na yao\" (i.e. take away the medicine), and probably result in odd translation or mistranslation. We suggest that machine translation system take collocational translation memory into consideration for improved translation quality. The notion of collocation is also consistent with Example-Based Machine Translation (EBMT). Due to the limitation of word-alignment technique, our method may incorrectly recognize some matching translation. We need better word-alignment to align translations more correctly. Moreover, the expansion of bilingual corpora can also increase the precision of retrieving collocational translation memory. It enables us to obtain enough counts for each collocate (i.e. verb and noun in VN collocation) in the target language so as to increase the reliability with the LLR statistics, which in turn eradicates the anomalous collocational translation memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and limitation",
"sec_num": "4."
},
{
"text": "With the collocation types and instances extracted from the corpus, we built an on-line collocational concordance called TANGO for looking up collocation instances and translations. A user can type in any English words as query and select the expected part of speech of the accompanying words. For example in Figure 1 , after query \"influence\" is submitted, the result of possible collocates will be displayed on the return page. The user can even select different adjacent collocates for further investigation. Moreover, using the technique of bilingual collocation alignment and sentence alignment, the system will display the target collocation with highlight to show translation equivalents in context. Translators or learners, through this web-based interface, can easily acquire the usage of each collocation with relevant instances. This bilingual collocational concordance is a very useful tool for self-inductive learning tailored to intermediate or advanced English learners. ",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 317,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Application: Collocational Concordance-TANGO",
"sec_num": "5."
},
{
"text": "In the field of the machine translation, the Example-Based Machine Translation (EBMT) exploits existing translations in the hope of producing better quality in translation. However, the importance of collocational translation has always been neglected and hard to be dealt with. We propose the collocational translation memoryto provide a better translation method, intending to solve some problem encountered by literal translation. With satisfactory precision rates of collocation and translation extraction, we hope collocational translation memory will path ways to more applications in translation and computer assisted language learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Transit (http://www.star-group.net/eng/software/sprachtech/transit.html) 2 Deja-Vu (http://www.atril.com/) 3 TransSearch (http://www.tsrali.com/) 4 TOTALrecall(http://candle.cs.nthu.edu.tw/Counter/Counter.asp?funcID=1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CoNLL is the yearly meeting of the SIGNLL, the Special Interest Group on Natural Language Learning of the Association for Computational Linguistics. The shared task of text chunking in CoNLL-2000 is available at http://cnts.uia.ac.be/conll2000/.6 We built the chunker from shared CoNLL-2000 training data and evaluate the result with the test data provided by CoNLL-2000. The precision and the recall are both 93.7%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is carried out under the project \"CANDLE\" funded by National Science Council in Taiwan (NSC92-2524-S007-002). Further information about CANDLE is available at http://candle.cs.nthu.edu.tw/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Example-Based Machine Translation of Part-Of-Speech Tagged Sentences by Recursive Division",
"authors": [
{
"first": "T",
"middle": [],
"last": "Andriamanankasina",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tochinai",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of MT SUMMIT VII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriamanankasina, T., Araki, K. and Tochinai, T. 1999. Example-Based Machine Translation of Part-Of-Speech Tagged Sentences by Recursive Division. Proceedings of MT SUMMIT VII. Singapore.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automated Generalization of Translation Examples",
"authors": [
{
"first": "R",
"middle": [
"D"
],
"last": "Brown",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Eighteenth International Conference on Computational Linguistics (COLING-2000)",
"volume": "",
"issue": "",
"pages": "125--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, R. D. 2000. Automated Generalization of Translation Examples. In Proceedings of the Eighteenth International Conference on Computational Linguistics (COLING-2000), pp. 125-131. Saarbr\u00fccken, Germany, August 2000.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inducing Translation Templates for Example-Based Machine Translation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Carl",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of MT Summit VII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl, M. 1999. Inducing Translation Templates for Example-Based Machine Translation, Proc. of MT Summit VII.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Word-to-Word Model of Translational Equivalence",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the ACL97",
"volume": "",
"issue": "",
"pages": "490--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. D. 1997. A Word-to-Word Model of Translational Equivalence. Proc. of the ACL97. pp 490-497.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Comprehensive and PracticalModel of Memory-Based Machine Translation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kitano",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc.of IJCAI-93",
"volume": "",
"issue": "",
"pages": "1276--1282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kitano, H. 1993. A Comprehensive and PracticalModel of Memory-Based Machine Translation. Proc.of IJCAI-93. pp. 1276-1282.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Framework of a MechanicalTranslation between Japanese and English byAnalogy Principle",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nagao, M. 1981. A Framework of a MechanicalTranslation between Japanese and English byAnalogy Principle, in Artificial and Human Intelligence, A. Elithorn and R. Banerji (eds.) North-Holland, pp. 173-180, 1984.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Web-based Collocational Concordance"
},
"TABREF0": {
"content": "
: Chunked Sentence | |
Sentence chunking | Features |
Confidence | NP |
in | PP |
the pound | NP |
is widely expected to take | VP |
another sharp dive | NP |
if | SBAR |
trade figures | NP |
for | PP |
September | NP |
",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF1": {
"content": "",
"num": null,
"type_str": "table",
"html": null,
"text": "Examples of collocational translation memoryEnglish sentence"
},
"TABREF2": {
"content": "Type | Collocation types in | Collocation instances in |
| British Nation Corpus (BNC) | Sinorama Parallel Corpus |
| | (SPC) |
VN | 631,638 | 26,315 |
VPN | 15,394 | 3,457 |
VNP | 14,008 | 4,406 |
",
"num": null,
"type_str": "table",
"html": null,
"text": "The result of collocation types extracted from BNC and collocation instances identified in SPC"
},
"TABREF3": {
"content": "Noun | VN types | VN instances |
Language | 320 | 945 |
Influence | 319 | 880 |
Threat | 222 | 633 |
Doubt | 199 | 545 |
Crime | 183 | 498 |
Phone | 137 | 460 |
Cigarette | 121 | 379 |
Throat | 86 | 246 |
Living | 79 | 220 |
Suicide | 47 | 134 |
",
"num": null,
"type_str": "table",
"html": null,
"text": "Examples of collocation types including a given noun in BNC"
},
"TABREF4": {
"content": "VN type | Example |
Exert influence | |
",
"num": null,
"type_str": "table",
"html": null,
"text": "Examples of collocation instances extracted from SPC"
},
"TABREF6": {
"content": "Level of quality | English sentences | Chinese sentences |
satisfactory | | |
translation | | |
",
"num": null,
"type_str": "table",
"html": null,
"text": "Three levels of quality of the extracted translation memory"
},
"TABREF7": {
"content": "Type | The number | Translation | Translation | Precision of | Precision of |
| of selected | Memory | Memory (*) | Translation | Translation |
| sentences | | | Memory | Memory (*) |
VN | 100 | 73 | 90 | 73 | 90 |
VPN | 100 | 66 | 89 | 66 | 89 |
VNP | 100 | 78 | 89 | 78 | 89 |
",
"num": null,
"type_str": "table",
"html": null,
"text": "Experiment result of collocational translation memory from Sinorama parallel Corpus"
}
}
}
}