{
"paper_id": "O03-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:01:58.314920Z"
},
"title": "Extracting Verb-Noun Collocations from Text",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "101, Kuangfu Road",
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": "g914339@oz.nthu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe a new method for extracting monolingual collocations. The method is based on statistical methods extracts. VN collocations from large textual corpora. Being able to extract a large number of collocations is very critical to machine translation and many other application. The method has an element of snowballing in it. Initially, one identifies a pattern that will produce a large portion of VN collocations. We experimented with an implementation of the proposed method on a large corpus with satisfactory results. The patterns are further refined to improve on the precision ration. 1 Introduction Collocations are recurrent combinations of words that co-occur more often than chance. Collocations like terminology tend to be lexicalized and have a somehow more restricted meaning than the surface form suggested (Justerson and Katz 1994). The words in a collocation may be appearing next to each other (rigid collocation) or otherwise (flexible/elastic collocations). On the other hand, collocations can be classified into lexical and grammatical collocations (Benson, Benson, Ilson, 1986). Lexical collocations are formed between content words, while the grammatical collocation has to do with a content word with a function word or a syntactic structure. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Automatic extraction of monolingual and bilingual collocations are important for many applications, including Computer Assisted Language Learning, natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval. Hank and Church (1990) pointed out the usefulness of pointwise mutual information for identifying collocations in lexicography. Justeson and Katz (1995) proposed to identify technical terminology based on preferred linguistic patterns and discourse property of repetition. Among many general methods presented in Manning and Schutze (1999), the best method is filtering based on both linguistic and statistical constraints. Smadja (1993) presented a program called XTRACT, based on mean and variance of the distance between two words that is capable of computing flexible collocations. Kupiec (1992) proposed to extract bilingual noun phrases using statitistical analysis of coocurrance of phrases. Smadja, McKeown, and Hatzivassiloglou (1996) extended the EXTRACT approach to handling of bilingual collocation based mainly on the statistical measures of Dice coefficient. Dunning (1993) pointed out the weakness of mutual information and showed that log likelihood ratios are more effective in identifying monolingual collocations especially when the occurrence count is very low. Smadja's XTRACT is the seminal work on extracting collocation types. XTRACT invloves three different statistical measures related to how likely a pair of words is part of a collocation type. It is complicated to set different thresholds for each of these statistical measures. We decided to research and develop a new and simpler method for extracting monolingual collocations. We describe the experiments and evaluation in Section 3. The limitations and related issues will be taken up in Section 4. We conclude and give future direction in Section 5.",
"pdf_parse": {
"paper_id": "O03-2003",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe a new method for extracting monolingual collocations. The method is based on statistical methods extracts. VN collocations from large textual corpora. Being able to extract a large number of collocations is very critical to machine translation and many other application. The method has an element of snowballing in it. Initially, one identifies a pattern that will produce a large portion of VN collocations. We experimented with an implementation of the proposed method on a large corpus with satisfactory results. The patterns are further refined to improve on the precision ration. 1 Introduction Collocations are recurrent combinations of words that co-occur more often than chance. Collocations like terminology tend to be lexicalized and have a somehow more restricted meaning than the surface form suggested (Justerson and Katz 1994). The words in a collocation may be appearing next to each other (rigid collocation) or otherwise (flexible/elastic collocations). On the other hand, collocations can be classified into lexical and grammatical collocations (Benson, Benson, Ilson, 1986). Lexical collocations are formed between content words, while the grammatical collocation has to do with a content word with a function word or a syntactic structure. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Automatic extraction of monolingual and bilingual collocations are important for many applications, including Computer Assisted Language Learning, natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval. Hank and Church (1990) pointed out the usefulness of pointwise mutual information for identifying collocations in lexicography. Justeson and Katz (1995) proposed to identify technical terminology based on preferred linguistic patterns and discourse property of repetition. Among many general methods presented in Manning and Schutze (1999), the best method is filtering based on both linguistic and statistical constraints. Smadja (1993) presented a program called XTRACT, based on mean and variance of the distance between two words that is capable of computing flexible collocations. Kupiec (1992) proposed to extract bilingual noun phrases using statitistical analysis of coocurrance of phrases. Smadja, McKeown, and Hatzivassiloglou (1996) extended the EXTRACT approach to handling of bilingual collocation based mainly on the statistical measures of Dice coefficient. Dunning (1993) pointed out the weakness of mutual information and showed that log likelihood ratios are more effective in identifying monolingual collocations especially when the occurrence count is very low. Smadja's XTRACT is the seminal work on extracting collocation types. XTRACT invloves three different statistical measures related to how likely a pair of words is part of a collocation type. It is complicated to set different thresholds for each of these statistical measures. We decided to research and develop a new and simpler method for extracting monolingual collocations. We describe the experiments and evaluation in Section 3. The limitations and related issues will be taken up in Section 4. We conclude and give future direction in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We used Sinorama Corpus to develop methods for extracting monolingual collocations. A number of necessary preprocessing steps were carried out. Those preprocessing steps include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "2"
},
{
"text": "1. Part of speech tagging for English and Chinese test 2. N-gram construction 3. Logarithmic likelihood ratio (LLR) computation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "2"
},
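{
"text": "A minimal Python sketch of the N-gram construction step (step 2), under the assumption that sentences arrive as lists of (word, tag) pairs from an external tagger; this is our reading of the pipeline, not the authors' implementation.

from collections import Counter

def build_ngrams(tagged_sentences, n_max=4):
    # Collect bigram through four-gram counts over (word, tag) sequences.
    counts = Counter()
    for sent in tagged_sentences:  # sent: list of (word, tag) pairs
        for n in range(2, n_max + 1):
            for i in range(len(sent) - n + 1):
                counts[tuple(sent[i:i + n])] += 1
    return counts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "2"
},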
{
"text": "In our research, we discovered some problems about XTRACT. The problems with XTRACT include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of English VN collocations",
"sec_num": "2.1"
},
{
"text": "1. XTRACT produce a list of collocation types rather than instances. 2. XTRACT is complicated because it requires thresholds for three statistical measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of English VN collocations",
"sec_num": "2.1"
},
{
"text": "3. There is no systematic way of setting thresholds for a certain level of confidence. 4. XTRACT is based on the author's intuition about collocation. 5. XTRACT does not provide explicitly types of collocation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of English VN collocations",
"sec_num": "2.1"
},
{
"text": "For the above reasons, we decided to research and explore new methods for extracting monolingual collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of English VN collocations",
"sec_num": "2.1"
},
{
"text": "The method has an element of snowballing in it. Initially, one identifies a pattern that will produce a large portion of VN collocation. We started with the following pattern(1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V \uff0b ART or POSS \uff0b \u2026 \uff0b N",
"eq_num": "(1)"
}
],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},
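{
"text": "A hedged sketch of how pattern (1) might be matched over a tagged sentence. The tag names (vb, at, pp$, nn) follow the Brown-style tags used in the example in Section 2.2, and the window of up to three intervening tokens is an assumption made for illustration.

def match_pattern_1(tokens, max_gap=3):
    # tokens: list of (word, tag) pairs.
    # Pattern (1): V + ART or POSS + ... + N.
    hits = []
    for i, (_, tag) in enumerate(tokens):
        if tag.startswith("vb") and i + 1 < len(tokens) \
                and tokens[i + 1][1] in ("at", "pp$"):
            for j in range(i + 2, min(i + 3 + max_gap, len(tokens))):
                if tokens[j][1].startswith("nn"):
                    hits.append((tokens[i][0], tokens[j][0]))
                    break  # take the nearest noun
    return hits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},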
{
"text": "By extracting such VN types with high counts, we got a list of highly likely collocation types. In addition, we also take the passive form(2) of VN into consideration:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},
{
"text": "ART or POSS \uff0b N \uff0b \u2026 \uff0b be \uff0b Ved (the passive VN)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},
{
"text": "The list is further filtered for higher precision: the pairs with LLR lower than 7.88 (confidence level 95%) are removed from consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step1: Computing such VN types with high counts",
"sec_num": "2.1.1"
},
{
"text": "After obtaining the list, we gather all the instances where the VN appears in the corpus. From the instances, we compute the following patterns(3) for extracting VN collocations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step2: Extracting VN patterns from corpus",
"sec_num": "2.1.2"
},
{
"text": "and we also consequently consider the passive form and its context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS preceding V POS sequence between V and O (3) POS following O",
"sec_num": null
},
{
"text": "Log-likelihood ratio : LLR(x;y) 2 2 2 1 1 1 2 2 1 1 1 ) 1 ( ) 1 ( ) 1 ( ) 1 ( log 2 ) ; ( 2 1 1 2 k n k k n k k n k n K p p p p p p p y x LLR \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS preceding V POS sequence between V and O (3) POS following O",
"sec_num": null
},
{
"text": "k 1 : # of pairs that contain x and y simultaneously. k 2 : # of pairs that contain x but do not contain y. n 1 : # of pairs that contain y n 2 : # of pairs that does not contain y p 1 =k 1 /n 1, p 2 = k 2 /n 2 , p = (k 1 +k 2 )/(n 1 +n 2 ) POS preceding O POS sequence between O and V (4) POS following V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS preceding V POS sequence between V and O (3) POS following O",
"sec_num": null
},
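{
"text": "The LLR statistic above can be computed directly from the four counts. A minimal Python sketch under the definitions just given; the guard for degenerate probabilities (p equal to 0 or 1) is our addition.

import math

def log_l(k, n, p):
    # log of the binomial likelihood p^k * (1-p)^(n-k); when p is 0 or 1
    # the corresponding factor is 1 for the counts that can arise here
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def llr(k1, n1, k2, n2):
    # Dunning-style log-likelihood ratio with p1, p2, p as defined above
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2.0 * (log_l(k1, n1, p1) + log_l(k2, n2, p2)
                  - log_l(k1, n1, p) - log_l(k2, n2, p))

Pairs scoring below the 7.88 cutoff of Section 2.1.1 would then be discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step2: Extracting VN patterns from corpus",
"sec_num": "2.1.2"
},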
{
"text": "We eliminated patterns that appear less than three times. These patterns are much more stringent than pattern we started out with. These patterns help us get rid of unlikely VN instances such as \"make film\" in \"make a leap into TV and film,\" since the POS sequence of \"a leap into TV and\" has a low count in the initial batch of \"likely\" collocations. On the other hand, \"make film\" in \"make my first film\" would be kept as a legistimate instance of VN, since the pos sequence of \"my first\" has rather high count in the initial batch of \"likely\" collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
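{
"text": "A minimal sketch of the filtering step just described, assuming each VN instance is represented as a (verb, noun, intervening-POS-sequence) triple; this representation and the helper name are ours, and the count threshold of three follows the text.

from collections import Counter

def filter_by_pattern_count(instances, min_count=3):
    # instances: list of (verb, noun, mid_pos) triples, where mid_pos is
    # the tuple of POS tags between V and N in the initial batch.
    pattern_counts = Counter(mid for _, _, mid in instances)
    return [(v, n, mid) for v, n, mid in instances
            if pattern_counts[mid] >= min_count]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},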
{
"text": "Actually, the POS sequences of intervening words has a skew distribution concentrating on a dozen of short phrases(see Table1) : These patterns can be coupled with other constraints for best results:",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
{
"text": "1. No punctuation marks should come between V and O 2. The noun closest to the verb takes precedence For now, we only consider verbs with two obligatory arguments of subject and object. Therefore, we exclude instance like (make, choice) in \"make entertainment at home a choice.\" We plan to extract VN in three-argument proposition separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
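{
"text": "The two constraints above can be checked locally on the tagged sentence. A sketch under our assumed (word, tag) representation; the punctuation tag set is illustrative, not taken from the authors' tagger.

def passes_constraints(tokens, v_idx, n_idx):
    # tokens: list of (word, tag); v_idx and n_idx index the verb and the
    # candidate object noun.
    between = tokens[v_idx + 1:n_idx]
    # 1. no punctuation marks between V and O
    if any(tag in {",", ".", ":", ";", "(", ")"} for _, tag in between):
        return False
    # 2. the noun closest to the verb takes precedence, so reject the
    #    pair if another noun intervenes
    if any(tag.startswith("nn") for _, tag in between):
        return False
    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},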
{
"text": "The other issue has to do with data sparseness. For collocation types with low count, the estimation of LLR is not as reliable. In the future, we will also experiment with using search engine such as Google to estimate word counts and VN instance count for more reliable estimation of LLR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
{
"text": "XTRACT does not touch on the issue of identify VN collocation instances in (6) and exclude that in (5). In our research, we explored the identification of collocation instances and attempt to avoid cases that maybe a correct collocation type but not a correct collocation instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
{
"text": "\u2026 make a leap into TV and film\u2026 (5) \u2026 made great efforts to promote documentary film\u2026 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step3: Manipulating the correct structure statistics of VN patterns",
"sec_num": "2.1.3"
},
{
"text": "To extract VN collocations, we first run part of speech tagging on sentences. For instance, we get the results of tagging below :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},
{
"text": "He/pps defines/vbz success/nn for/in a/at paper/nn as/cs not/* needing/vbg to/to exert/vb political/jj influence/nn or/cc obtain/vb financial/jj subsidies/nns ,/, but/cc rather/rb being/beg able/jj to/to rely/vb wholly/rb on/in content/nn to/to attract/vb readers/nns that/cs in/in turn/nn attract/vb advertisers/nns ,/, and/cc thus/rb keep/vb afloat/rb by/in its/pp$ own/jj efforts/nns ./.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},
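{
"text": "The word/tag format above can be split back into (word, tag) pairs before N-gram construction. A minimal sketch; splitting on the last slash is our choice so that words containing a slash survive.

def read_tagged(line):
    # "He/pps defines/vbz success/nn ..." -> [("He", "pps"), ("defines", "vbz"), ...]
    return [tuple(tok.rsplit("/", 1)) for tok in line.split()]

tokens = read_tagged("He/pps defines/vbz success/nn for/in a/at paper/nn")
# tokens[1] == ("defines", "vbz"); tokens[2] == ("success", "nn")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},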
{
"text": "After tagging English sentences, we construct N-gram extracted likely VN types with high count from bigram, trigram and fourgram. We then obtained got a list of highly likely collocation types ( Table 2) . The pairs with LLR lower then 7.88 are eliminated from Table 2. If the pair appeared less than once. we also eliminated the pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},
{
"text": "After obtaining likely collocation types, we gathered all instances where the VN appears in the corpus. The distance between the verb and the object is at most five words. Both of the words before the verb and after the object are recorded. Table 3 shows those patterns of VN instances.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},
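{
"text": "A sketch of how one instance record of the kind shown in Table 3 might be assembled once the verb and noun positions are identified; the field names mirror the table's column headers, and the record layout is our assumption.

def make_record(tokens, v_idx, n_idx):
    # tokens: list of (word, tag) pairs; v_idx < n_idx, at most five words apart
    words = [w for w, _ in tokens]
    return {
        "V-1": words[v_idx - 1] if v_idx > 0 else "",
        "Verb": words[v_idx],
        "between": words[v_idx + 1:n_idx],  # the at-most-five words in between
        "Noun": words[n_idx],
        "N+1": words[n_idx + 1] if n_idx + 1 < len(words) else "",
    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "2.2"
},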
{
"text": "A list of highly likely collocation types Table 3 Extracting VN collocation from corpus We worked with around 50,000 aligned sentences from the Sinorama parallel Corpus in our experiments with an implementation of the proposed method. The average English sentence had 43.95 words. From the experimental data, we have extracted 17,298 VN collocation types. Then, we could obtain 45,080 VN instances for these VN types. See Table 3 for some examples for the verb \"influence.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 3",
"ref_id": null
},
{
"start": 422,
"end": 429,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "Rec V-1 Verb N-5 N-4 N-3 N-2 N-1 Noun N+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "We select 100 sentences from the parallel corpus of Sinorama magazine to evaluate the performance. A human judge majoring in English identified the VN collocations in these sentences. The manual VN collocations are compared with the instances extracted from the corpus and the result is showed in the Appendix. The evaluation indicates an average recall rate of 74.47% and precision of 66.67 %. It is very difficult to evaluation the experimental results. There were obvious and clear-cut collocations and non collocation, but there were a lot of cases such as \"improve environment\" and \"share housework\" that were difficult to judge and may be evaluated differently by different people. There is room for improvement as far as recall and precision ratios are concerned. Nevertheless, the extracted VNs are very diverse and useful for language learning purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
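{
"text": "Recall and precision here reduce to simple arithmetic over the appendix counts. A small sketch that reproduces the reported figures; the function is ours, for illustration only.

def evaluate(n_keys, n_output, n_correct):
    # recall = correct / answer keys; precision = correct / system output
    return n_correct / n_keys, n_correct / n_output

recall, precision = evaluate(94, 105, 70)
# recall = 0.7447..., precision = 0.6666..., i.e. the reported 74.47% and 66.67%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},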
{
"text": "The proposed approach offers a simple algorithm for automatic acquisition of the VN instances from a corpus. The method is particularly interested in following ways: i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We use a data-driven approach to extract monolingual collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "ii. The algorithm is applicable to elastic collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "iii. Systematic way of setting thresholds for a certain level of confidence iv. We could obtained instances of VN collocation through the simple statistical information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "While Xtract extracts VN types, we focus on the VN instances. It is understandable that we would get slightly lower recall and precision rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In this paper, we describe an algorithm that employs statistical analyses to extract instance of VN collocations from a corpus. The algorithm is applicable to elastic collocations. The main difference between our algorithm and Xtract lies in that we extract the instances from the sentence instead of extracting the VN types directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future work",
"sec_num": "5"
},
{
"text": "Moreover, in our research we observe other types related to VN such as VP (ie. verb + preposition) and VNP (ie. verb + noun + preposition). In the future, we will further take these two patterns into consideration to extract more types of verb-related collocations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The manual VN collocations are compared with the instances extracted from the corpus: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The BBI Combinatory Dictionary of English: A Guide to Word Combinations",
"authors": [
{
"first": "Morton",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Evelyn",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ilson",
"suffix": ""
}
],
"year": 1986,
"venue": "John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benson, Morton., Evelyn Benson, and Robert Ilson. The BBI Combinatory Dictionary of English: A Guide to Word Combinations. John Benjamins, Amsterdam, Netherlands, 1986.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Looking for needles in a haystack",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
}
],
"year": 1988,
"venue": "Actes RIAO, Conference on User-Oriented Context Based Text and Image Handling",
"volume": "",
"issue": "",
"pages": "609--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y. (1988) : \"Looking for needles in a haystack\", Actes RIAO, Conference on User-Oriented Context Based Text and Image Handling, Cambridge, p. 609-623.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Neuwitz",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of the Association for Literary and Linguistic Computing",
"volume": "4",
"issue": "1",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y.; Klein, and Neuwitz, E.. Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus. Journal of the Association for Literary and Linguistic Computing, 4(1):34-8, (1983)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. W. and Hanks, P. Word association norms, mutual information, and lexicography. Computational Linguistics, 1990, 16(1), pp. 22-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Termight: Identifying and translation technical terminology",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of the 4th Conference on Applied Natural Language Processing (ANLP)",
"volume": "",
"issue": "",
"pages": "34--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I. and K. Church. Termight: Identifying and translation technical terminology. In Proc. of the 4th Conference on Applied Natural Language Processing (ANLP), pages 34-40, Stuttgart, Germany, 1994.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T (1993) Accurate methods for the statistics of surprise and coincidence, Computational Linguistics 19:1, 61-75.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning bilingual collocations by word-level sorting",
"authors": [
{
"first": "M",
"middle": [],
"last": "Haruno",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16th International Conference on Computational Linguistics (COLING '96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haruno, M., S. Ikehara, and T. Yamazaki. Learning bilingual collocations by word-level sorting. In Proc. of the 16th International Conference on Computational Linguistics (COLING '96), Copenhagen, Denmark, 1996.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Character-based Collocation for Mandarin Chinese",
"authors": [
{
"first": "C.-R",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y.-Y.",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": null,
"venue": "ACL 2000",
"volume": "",
"issue": "",
"pages": "540--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, C.-R., K.-J. Chen, Y.-Y. Yang, Character-based Collocation for Mandarin Chinese, In ACL 2000, 540-543.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Acquiring collocations for lexical choice between near-synonyms",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zaiu",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": null,
"venue": "SIGLEX Workshop on Unsupervised Lexical Acquisition, 40th meeting of the Association for Computational Lin",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inkpen, Diana Zaiu and Hirst, Graeme. ``Acquiring collocations for lexical choice between near-synonyms.'' SIGLEX Workshop on Unsupervised Lexical Acquisition, 40th meeting of the Association for Computational Lin",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Technical Terminology: some linguistic properties and an algorithm for identification in text",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Justeson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Slava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "1",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justeson, J.S. and Slava M. Katz (1995). Technical Terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1):9-27.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An algorithm for finding noun phrase correspondences in bilingual corpora",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, Julian. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, 1993.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using collocation statistics in information extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. Using collocation statistics in information extraction. In Proc. of the Seventh Message Understanding Conference (MUC-7), 1998.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Word-to-Word Model of Translational Equivalence",
"authors": [
{
"first": "I",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dan",
"suffix": ""
}
],
"year": 1997,
"venue": "Procs. of the ACL97",
"volume": "",
"issue": "",
"pages": "490--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan. \"A Word-to-Word Model of Translational Equivalence\". In Procs. of the ACL97. pp 490-497. Madrid Spain, 1997.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Retrieving collocations from text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translating collocations for bilingual lexicons: A statistical approach",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F., K.R. McKeown, and V. Hatzivassiloglou. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38, 1996.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "
VN collocation | Translation | POS of VN |
",
"type_str": "table",
"text": "Samples of VN collocation from text",
"num": null
},
"TABREF2": {
"html": null,
"content": "#answer keys | #output | #Correct | Recall (%) | Precision (%) |
94 | 105 | 70 | 74.47 | 66.67 |
",
"type_str": "table",
"text": "Experiment result of VN collocation extracted from Sinorama parallel Corpus",
"num": null
},
"TABREF3": {
"html": null,
"content": "7932 | eliminate unfariness \\ seek equity | seek equity \\ eliminate unfairness |
8056 | | |
8510 | improve environment | improve environment |
8630 | | |
9326 | do research \\ have influcence | have influence |
9433 | | |
10600 | | |
10624 | | contemplate footstep |
11293 | understand meaning | understand meaning |
11603 | | |
12937 | receive attention \\ witness progress | receive attention \\ witness progress |
13033 | promote idea \\ invest effort \\ share housework \\ expend effort | expend effort \\ share housework \\ promote idea \\ invest effort |
13491 | | |
13576 | | test wisdom |
15349 | take paycut \\ exceed budget \\ unload property | show increase \\ house price \\ unload property |
16949 | | |
17106 | block view \\ make offering | make offering |
17608 | lose ability | lose ability \\ save forest |
17924 | take effort \\ take time | consider success |
18183 | | |
18717 | carry work | carry work |
18745 | | |
19735 | bear son | bear son |
20002 | make money \\ think way | make money \\ think way |
21450 | | buy portion |
21663 | live life | live space |
22610 | | |
23067 | adopt method | adopt method |
23074 | | |
24307 | move production | move production \\ develop computer |
25478 | | |
26030 | make thing | make thing |
28303 | increase chance \\ increase production | increase chance \\ increase production |
28336 | | |
28417 | write essay | write essay |
28806 | write seller | write seller |
28826 | | |
29003 | make money \\ take care \\ have time | take care \\ make money \\ have time |
29292 | | |
29736 | damage environment | damage environment \\ insure recovery \\ choose styrofoam \\ recover styrofoam |
30881 | donate kidney \\ implant kidney | donate kidney \\ implant kidney |
31096 | drive car \\ take transportation \\ have responsibility | drive car \\ consume pastry \\ have responsibility \\ wrap candy |
32975 | instruct student | instruct student |
",
"type_str": "table",
"text": "7878make money \\ make profit \\ rise price stop conglomerate \\ make money \\ rise price \\ make profit",
"num": null
}
}
}
}