{ "paper_id": "O03-4003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:03.117432Z" }, "title": "Measuring and Comparing the Productivity of Mandarin Chinese Suffixes", "authors": [ { "first": "Eiji", "middle": [], "last": "Nishimoto", "suffix": "", "affiliation": { "laboratory": "", "institution": "The City University of New York", "location": { "addrLine": "365 Fifth Avenue", "postCode": "10016", "settlement": "New York", "region": "NY", "country": "U.S.A" } }, "email": "enishimoto@gc.cuny.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The present study attempts to measure and compare the morphological productivity of five Mandarin Chinese suffixes: the verbal suffix-hua, the plural suffix-men, and the nominal suffixes-r,-zi, and-tou. These suffixes are predicted to differ in their degree of productivity:-hua and-men appear to be productive, being able to systematically form a word with a variety of base words, whereas-zi and-tou (and perhaps also-r) may be limited in productivity. Baayen [1989, 1992] proposes the use of corpus data in measuring productivity in word formation. Based on word-token frequencies in a large corpus of texts, his token-based measure of productivity expresses productivity as the probability that a new word form of an affix will be encountered in a corpus. We first use the token-based measure to examine the productivity of the Mandarin suffixes. The present study, then, proposes a type-based measure of productivity that employs the deleted estimation method [Jelinek & Mercer, 1985] in defining unseen words of a corpus and expresses productivity by the ratio of unseen word types to all word types. The proposed type-based measure yields the productivity ranking \"-men,-hua,-r,-zi,-tou,\" where-men is the most productive and-tou is the least productive. The effects of corpus-data variability on a productivity measure are also examined. The proposed measure is found to obtain a consistent productivity ranking despite variability in corpus data.", "pdf_parse": { "paper_id": "O03-4003", "_pdf_hash": "", "abstract": [ { "text": "The present study attempts to measure and compare the morphological productivity of five Mandarin Chinese suffixes: the verbal suffix-hua, the plural suffix-men, and the nominal suffixes-r,-zi, and-tou. These suffixes are predicted to differ in their degree of productivity:-hua and-men appear to be productive, being able to systematically form a word with a variety of base words, whereas-zi and-tou (and perhaps also-r) may be limited in productivity. Baayen [1989, 1992] proposes the use of corpus data in measuring productivity in word formation. Based on word-token frequencies in a large corpus of texts, his token-based measure of productivity expresses productivity as the probability that a new word form of an affix will be encountered in a corpus. We first use the token-based measure to examine the productivity of the Mandarin suffixes. The present study, then, proposes a type-based measure of productivity that employs the deleted estimation method [Jelinek & Mercer, 1985] in defining unseen words of a corpus and expresses productivity by the ratio of unseen word types to all word types. The proposed type-based measure yields the productivity ranking \"-men,-hua,-r,-zi,-tou,\" where-men is the most productive and-tou is the least productive. The effects of corpus-data variability on a productivity measure are also examined. 
The proposed measure is found to obtain a consistent productivity ranking despite variability in corpus data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The focus of a study of morphological productivity is on derivational affixation that involves a base word and an affix [Aronoff, 1976] , as seen in sharp + -ness \u2192 sharpness, electric + -ity \u2192 electricity, child + -ish \u2192 childish. 1 Native speakers of a language have intuitions about what are and are not acceptable words of their language, and if presented with non-existent, potential words [Aronoff, 1983] , they accept certain word formations more readily than others [Anshen & Aronoff, 1981; Aronoff & Schvaneveldt, 1978; Cutler, 1980] . Most intriguing in the issue of productivity is that the degree of productivity varies among affixes, and many studies in the literature have been devoted to accounting for this particular aspect of productivity [see Bauer, 2001, and Plag, 1999, for an overview] .", "cite_spans": [ { "start": 120, "end": 135, "text": "[Aronoff, 1976]", "ref_id": "BIBREF4" }, { "start": 395, "end": 410, "text": "[Aronoff, 1983]", "ref_id": "BIBREF6" }, { "start": 474, "end": 498, "text": "[Anshen & Aronoff, 1981;", "ref_id": "BIBREF2" }, { "start": 499, "end": 528, "text": "Aronoff & Schvaneveldt, 1978;", "ref_id": "BIBREF8" }, { "start": 529, "end": 542, "text": "Cutler, 1980]", "ref_id": "BIBREF21" }, { "start": 762, "end": 778, "text": "Bauer, 2001, and", "ref_id": "BIBREF15" }, { "start": 779, "end": 807, "text": "Plag, 1999, for an overview]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Morphological Productivity", "sec_num": "1.1" }, { "text": "How the degree of productivity varies among affixes is best illustrated by the English nominal suffixes -ness and -ity, which are often considered \"rivals\" as they sometimes share a base word (e.g., clear \u2192 clearness or clarity). In general, -ness is felt to be more productive than -ity. 2 The word formation of -ity is limited, for example, by the Latinate Restriction [Aronoff, 1976: 51 ] that requires the base word to be of Latinate origin; hence, purity is acceptable but *cleanity is not. In contrast, -ness freely attaches to a variety of base words of both Latinate and Germanic (native) origin; thus, both pureness and cleanness are acceptable.", "cite_spans": [ { "start": 289, "end": 290, "text": "2", "ref_id": null }, { "start": 371, "end": 389, "text": "[Aronoff, 1976: 51", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Morphological Productivity", "sec_num": "1.1" }, { "text": "There are also some affixes that could be regarded as unproductive; for example, Aronoff and Anshen [1998: 243] note that the English nominal suffix -th (as in long \u2192 length) has long been unsuccessful in forming a new word that survives, despite attempts at terms like coolth.", "cite_spans": [ { "start": 81, "end": 111, "text": "Aronoff and Anshen [1998: 243]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Morphological Productivity", "sec_num": "1.1" }, { "text": "Varying degrees of productivity are also observed in Mandarin Chinese word formation. 
As will be discussed shortly, some Mandarin suffixes appear to be more productive than others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Productivity", "sec_num": "1.1" }, { "text": "Early studies on productivity mainly focused on restrictions on word formation and viewed the degree of productivity to be determined by such restrictions [Booij, 1977; Schultink, 1961; van Marle, 1985] . Booij [1977: 120] , for example, considers the degree of productivity of a word formation rule to be inversely proportional to the amount of restrictions that the word formation rule is subject to. Although the view that productivity is affected by restrictions on word formation is certainly to the point, from a quantitative point of view, measuring productivity by the amount of restrictions on word formation is limited in that the restrictive weight of such restrictions is unknown [Baayen & Renouf, 1996: 87] . Baayen [1989 proposes a corpus-based approach to the quantitative study of productivity. His productivity measure uses word frequencies in a large corpus of texts to 1 Excluded from the study of productivity are seemingly irregular word formations, or \"oddities\" [Aronoff, 1976: 20] , such as blendings (e.g., smoke + fog \u2192 smog) and acronyms (e.g., NATO). 2 -ity can be more productive than -ness depending on the type of base word; for instance, -ity is more productive than -ness when the base word ends with -ile as in servile [Aronoff, 1976: 36] or with -ible as in reversible [Anshen & Aronoff, 1981] . Still, overall, -ness is intuitively felt to be more productive than -ity. express productivity as the probability that a new word form of an affix will be encountered in a corpus (see Section 3). Although Bauer [2001: 204] observes that a generally agreed measure of productivity is yet to be achieved in the literature, Baayen's corpus-based approach seems to be appealing and promising. Most importantly, since corpus data include productively formed words that are typically not found in a dictionary [Baayen & Renouf, 1996] , corpus-based descriptions of productivity reflect how words are actually used. 3 The corpus-based approach is also timely, as linguists have growing interests in corpus data. The present study pursues the corpus-based approach to measuring productivity using a corpus of Chinese texts.", "cite_spans": [ { "start": 155, "end": 168, "text": "[Booij, 1977;", "ref_id": "BIBREF17" }, { "start": 169, "end": 185, "text": "Schultink, 1961;", "ref_id": "BIBREF33" }, { "start": 186, "end": 202, "text": "van Marle, 1985]", "ref_id": "BIBREF40" }, { "start": 205, "end": 222, "text": "Booij [1977: 120]", "ref_id": null }, { "start": 692, "end": 719, "text": "[Baayen & Renouf, 1996: 87]", "ref_id": null }, { "start": 722, "end": 734, "text": "Baayen [1989", "ref_id": "BIBREF9" }, { "start": 888, "end": 889, "text": "1", "ref_id": null }, { "start": 985, "end": 1004, "text": "[Aronoff, 1976: 20]", "ref_id": null }, { "start": 1253, "end": 1272, "text": "[Aronoff, 1976: 36]", "ref_id": null }, { "start": 1304, "end": 1328, "text": "[Anshen & Aronoff, 1981]", "ref_id": "BIBREF2" }, { "start": 1537, "end": 1554, "text": "Bauer [2001: 204]", "ref_id": null }, { "start": 1836, "end": 1859, "text": "[Baayen & Renouf, 1996]", "ref_id": "BIBREF14" }, { "start": 1941, "end": 1942, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Measuring the Degree of Productivity", "sec_num": "1.2" }, { "text": "The outline of this paper is as follows. 
In Section 2, five Mandarin suffixes are introduced and are analyzed qualitatively based on observations in the literature. In Section 3, Baayen's token-based productivity measure is discussed, and the measure is applied to a corpus of Chinese texts to quantitatively analyze the productivity of the Mandarin suffixes. In Section 4, a type-based productivity measure is proposed, and its performance is evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring the Degree of Productivity", "sec_num": "1.2" }, { "text": "Also, some experiments are conducted to examine the effects of corpus-data variability on a productivity measure. Section 5 summarizes the findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring the Degree of Productivity", "sec_num": "1.2" }, { "text": "The present study examines the productivity of five Mandarin suffixes: the verbal suffix -hua, the plural suffix -men, and the nominal suffixes -r, -zi, and -tou.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The verbal suffix -hua \u5316 functions similarly to English -ize (and -ify):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(1) xi\u00e0nd\u00e0i \u73b0\u4ee3 'modern' \u2192 xi\u00e0nd\u00e0ihu\u00e0 \u73b0\u4ee3\u5316 'modernize'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Verbs formed with -hua can be used as nouns [Baxter & Sagart, 1998: 40] , so xi\u00e0nd\u00e0ihu\u00e0 \u73b0 \u4ee3\u5316 in (1) can also be interpreted as 'modernization'. Analogous to English -ize (and -ify), -hua systematically attaches to a variety of base words to form verbs, such as g\u014dngy\u00e8hu\u00e0 \u5de5\u4e1a\u5316 'industrialize', gu\u00f3j\u00echu\u00e0 \u56fd\u9645\u5316 'internationalize', and j\u00ecsu\u00e0nj\u012bhu\u00e0 \u8ba1 \u7b97\u673a\u5316 'computerize'.", "cite_spans": [ { "start": 44, "end": 71, "text": "[Baxter & Sagart, 1998: 40]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The suffix -men \u4eec pluralizes a noun, as in the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(2) xu\u00e9sheng \u5b66\u751f 'student' \u2192 xu\u00e9shengmen \u5b66\u751f\u4eec 'students'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "According to Packard's [2000] classification, -men is a grammatical affix, whereas the other four suffixes that we examine are word-forming affixes. If we use the standard terminology of the field, -men could be viewed as an inflectional affix, and the other four suffixes could be considered derivational affixes. There are three major characteristics of -men that differentiate -men from the English plural suffix -s [Lin, 2001: 59; Norman, 1988: 159; Ramsey, 1987: 64] . First, -men attaches only to human nouns 4 ; hence, *zhu\u014dzimen \u684c\u5b50\u4eec 'desks' and *di\u00e0nn\u01ceomen \u7535\u8111\u4eec 'computers' are not acceptable, unless they are considered animate as in a cartoon. 
Second, -men is obligatory with pronouns (e.g., w\u01d2 \u6211 'I' \u2192 w\u01d2men \u6211\u4eec 'we') but not with nouns; for example, h\u00e1izi \u5b69\u5b50 without -men can be interpreted as 'child' or 'children' depending on the context. Third, -men is not compatible with numeral classifiers; hence, *s\u0101ng\u00e8 xu\u00e9shengmen \u4e09\u4e2a\u5b66\u751f\u4eec 'three students' is ungrammatical. Due to these characteristics, -men may not be as frequently used or \"productive\" [Lin, 2001: 58] as the English plural suffix -s. However, -men has many base words to which it can attach, for there are a variety of nouns in Mandarin (as in any language) designating human beings (e.g., j\u00eczh\u011bmen \u8bb0\u8005\u4eec 'reporters', k\u00e8r\u00e9nmen \u5ba2\u4eba\u4eec 'guests', sh\u00eczh\u01cengmen \u5e02\u957f\u4eec 'mayors').", "cite_spans": [ { "start": 13, "end": 29, "text": "Packard's [2000]", "ref_id": "BIBREF30" }, { "start": 419, "end": 434, "text": "[Lin, 2001: 59;", "ref_id": null }, { "start": 435, "end": 453, "text": "Norman, 1988: 159;", "ref_id": null }, { "start": 454, "end": 471, "text": "Ramsey, 1987: 64]", "ref_id": null }, { "start": 1056, "end": 1071, "text": "[Lin, 2001: 58]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The suffix -r \u513f forms a noun from a verb or an adjective, or -r can create a diminutive form [Ramsey, 1987: 63; Lin, 2001 : 57-58]:", "cite_spans": [ { "start": 93, "end": 111, "text": "[Ramsey, 1987: 63;", "ref_id": null }, { "start": 112, "end": 121, "text": "Lin, 2001", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(3) hu\u00e0 \u753b 'to paint' \u2192 hu\u00e0r \u753b\u513f 'painting'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(4) ni\u01ceo \u9e1f 'bird' \u2192 ni\u01ceor \u9e1f\u513f 'small bird'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The use of -r is abundant in the colloquial speech of local Beijing residents, and three distinct usages of -r by local Beijing residents are identified [Chen, 1999: 39] . First, -r can create a semantic difference:", "cite_spans": [ { "start": 153, "end": 169, "text": "[Chen, 1999: 39]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(5) x\u00ecn \u4fe1 'letter' \u2192 x\u00ecnr \u4fe1\u513f 'message'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Second, a form with -r may be habitually preferred to a form without it:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(6) hu\u0101 \u82b1 'flower' \u2192 hu\u0101r \u82b1\u513f 'flower'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Third, -r may be attached to a word solely for a stylistic reason. The use of -r in the last category is the most frequent among local Beijing residents [Chen, 1999: 39] . 
In both Mainland China and Taiwan, the use of -r is not favored especially in broadcasting, and -r words are rarely incorporated into the standard [Chen, 1999: 39; Ramsey, 1987: 64] .", "cite_spans": [ { "start": 153, "end": 169, "text": "[Chen, 1999: 39]", "ref_id": null }, { "start": 319, "end": 335, "text": "[Chen, 1999: 39;", "ref_id": null }, { "start": 336, "end": 353, "text": "Ramsey, 1987: 64]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The suffixes -zi \u5b50 and -tou \u5934 typically appear in the following constructions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(7) *m\u00e0o \u5e3d \u2192 m\u00e0ozi \u5e3d\u5b50 'hat' (8) *m\u00f9 \u6728 \u2192 m\u00f9tou \u6728\u5934 'wood'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "In these examples, -zi and -tou combine with a bound morpheme that does not constitute a word by itself (i.e., neither *m\u00e0o \u5e3d nor *m\u00f9 \u6728 is a word).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Historically, the word formation of -zi and -tou appeared in the course of two changes in Chinese: a shift from monosyllabic to disyllabic words and a simplification of the phonological system [Packard, 2000: 265-266] . According to Packard [2000: 265] , the shift toward disyllabic words occurred as early as in the Zhou dynasty (1000-700 BC) and underwent a large scale development during and after the Han dynasty (206 BC-AD 220). The phonological simplification, which occurred around the same time [Packard, 2000: 266] , caused syllable-final consonants to be lost, and many single-syllable words that were once distinct became homophones [Li & Thompson, 1981: 44] . One possible account of how the two changes occurred is that the phonological simplification preceded as a natural linguistic process of phonetic attrition, and the shift toward disyllabic words occurred as a solution to the increase of homophonous syllables [Li & Thompson, 1981: 44; Packard, 2000: 266] . 
The increase of homophonous syllables was particularly significant in Mandarin [Li & Thompson, 1981: 44] , and -zi and -tou played a role in the disyllabification of Mandarin words.", "cite_spans": [ { "start": 193, "end": 217, "text": "[Packard, 2000: 265-266]", "ref_id": null }, { "start": 233, "end": 252, "text": "Packard [2000: 265]", "ref_id": null }, { "start": 503, "end": 523, "text": "[Packard, 2000: 266]", "ref_id": null }, { "start": 644, "end": 669, "text": "[Li & Thompson, 1981: 44]", "ref_id": null }, { "start": 931, "end": 956, "text": "[Li & Thompson, 1981: 44;", "ref_id": null }, { "start": 957, "end": 976, "text": "Packard, 2000: 266]", "ref_id": null }, { "start": 1058, "end": 1083, "text": "[Li & Thompson, 1981: 44]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The word formation of -zi and -tou is not limited to bound morphemes [Lin, 2001: 58-59; Packard, 2000: 84] :", "cite_spans": [ { "start": 69, "end": 87, "text": "[Lin, 2001: 58-59;", "ref_id": null }, { "start": 88, "end": 106, "text": "Packard, 2000: 84]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "(9) sh\u016b \u68b3 'to comb' \u2192 sh\u016bzi \u68b3\u5b50 'comb' (10) xi\u01ceng \u60f3 'to think' \u2192 xi\u01cengtou \u60f3\u5934 'thought'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "In these examples, -zi and -tou form a noun by attaching to a free morpheme (i.e., both sh\u016b \u68b3 and xi\u01ceng \u60f3 are independent words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "The term \"productive\" is sometimes used in the literature to describe the above-discussed suffixes. Ramsey [1987: 63] describes -tou to be much less productive than -zi, while Li and Thompson [1981: 42-43] observe that -zi and -tou are both no longer productive. Lin [2001: 57] views -r to be the most productive Mandarin suffix. Unfortunately, the basis for these observations is left unclear. Some observations may be based on the number of word forms of a suffix found in a dictionary; for example, present-day Mandarin has by far more -zi word forms than -tou word forms, and this may lead to the view that -zi is more productive than -tou.", "cite_spans": [ { "start": 100, "end": 117, "text": "Ramsey [1987: 63]", "ref_id": null }, { "start": 176, "end": 205, "text": "Li and Thompson [1981: 42-43]", "ref_id": null }, { "start": 263, "end": 273, "text": "Lin [2001:", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "However, as Aronoff [1980] argues, of interest to linguists is the synchronic aspect of productivity (i.e., how words of an affix can be formed at a given point in time), rather than the diachronic aspect of productivity (i.e., how many words of an affix have been formed between two points in time). Concentrating on the synchronic aspect, if we associate productivity with regularity in word formation [Spencer, 1991: 49] or availability of base words with which a new word can be readily formed, we may predict -hua and -men to be productive, and -zi and -tou to be limited in productivity. 
The productivity of -r would likely depend on the context-if we focus on broadcasting, the productivity of -r may also be limited.", "cite_spans": [ { "start": 12, "end": 26, "text": "Aronoff [1980]", "ref_id": "BIBREF5" }, { "start": 404, "end": 423, "text": "[Spencer, 1991: 49]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Admittedly, these predictions are speculative, and the difficulty in describing the productivity of an affix is where a quantitative productivity measure becomes important. In the following sections, the productivity of the Mandarin suffixes will be examined quantitatively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Qualitative Analysis of Five Mandarin Suffixes", "sec_num": "2.1" }, { "text": "Baayen [1989, 1992] proposes a corpus-based measure of productivity, formulated as:", "cite_spans": [ { "start": 7, "end": 13, "text": "[1989,", "ref_id": null }, { "start": 14, "end": 19, "text": "1992]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baayen's Corpus-Based Approach", "sec_num": "3.1" }, { "text": "(11) N n p 1 =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baayen's Corpus-Based Approach", "sec_num": "3.1" }, { "text": "where given all word forms of an affix found in a large corpus of texts, n 1 is the number of word types of the affix that occur only once in the corpus, the so-called hapax legomena (henceforth, hapaxes) , N is the sum of word tokens of the affix, and p is the productivity index of the affix in question. 5 The measure (11) employs Good's [1953] probability estimation method (commonly known as the Good-Turing estimation method) that provides a mathematically proven estimate [Church & Gale, 1991] of the probability of seeing a new word in a corpus, based on the probability of seeing hapaxes in that corpus. The productivity index p expresses the probability that a new word type of an affix will appear in a corpus after N tokens of the affix have been sampled. One important characteristic of the measure (11) is that it is token-based; that is, the measure relies on word-token frequencies in a corpus. The sum of word types of an affix in a corpus, represented by V, is not directly tied to the degree of productivity (see Section 4.1). In the remaining sections, the measure (11) will be referred to as the hapax-based productivity measure. 6 While the hapax-based measure has been primarily used in the studies of Western languages, such as Dutch [e.g., Baayen, 1989 and English [e.g., Baayen & Lieber, 1991; 5 A clear distinction has to be made between word tokens and word types in the context of a corpus study. To give the simplest example, if we have three occurrences of the in a small corpus, the token frequency of the is three, and the type frequency of the is one. In the case of affixation, we ignore the differences between singular and plural forms; for example, if we have a corpus that has {activity, activity, activities, possibility, possibilities}, the token frequency of -ity is five (the sum of all these occurrences of -ity) while the type frequency of -ity is two (after normalizing the plural forms, we have two distinct -ity words, activity and possibility). An exception to ignoring the plural suffix is when we are interested in the productivity of the plural suffix itself. 
In that case, if we have a corpus consisting of {book, books, books, student, students}, the token frequency of -s is three (i.e., books, books, and students), and the type frequency of -s is two (we have two distinct -s forms, books and students). 6 For the purposes of this paper, the term hapax-based measure is used to express, in a shorthand manner, the fact that the measure defines new words based on hapaxes and that the measure is token-frequency-based. It should not be confused with the hapax-conditioned measure, p*, discussed in Baayen [1993] . Baayen & Renouf, 1996] , the measure was also used by in a study of Mandarin word formation. The focus of Sproat and Shih's study was on productivity in Mandarin root compounding, as seen in the nominal root y\u01d0 \u8681 of m\u01cey\u01d0 \u8682\u8681 'ant' that forms many words of 'ant-kind', such as y\u01d0w\u00e1ng \u8681\u738b 'queen ant' and g\u014dngy\u01d0 \u5de5\u8681 'worker ant'. By analyzing the degree of productivity of a number of Mandarin nominal roots, Sproat and Shih showed that, contrary to a claim in the literature, root compounding is a productive word-formation process in Mandarin. For example, while sh\u00ed \u77f3 'rock-kind' and y\u01d0 \u8681 'ant-kind' had the productivity indices of 0.129 and 0.065, respectively, apparently unproductive b\u012bn \u69df and l\u00e1ng \u6994 of b\u012bnl\u00e1ng \u69df\u6994 'betel nut' were found to have zero productivity. Sproat and Shih's study shows that a corpus-based study of productivity in Chinese is fruitful.", "cite_spans": [ { "start": 183, "end": 204, "text": "(henceforth, hapaxes)", "ref_id": null }, { "start": 307, "end": 308, "text": "5", "ref_id": null }, { "start": 334, "end": 347, "text": "Good's [1953]", "ref_id": "BIBREF22" }, { "start": 479, "end": 500, "text": "[Church & Gale, 1991]", "ref_id": "BIBREF20" }, { "start": 1151, "end": 1152, "text": "6", "ref_id": null }, { "start": 1265, "end": 1277, "text": "Baayen, 1989", "ref_id": "BIBREF9" }, { "start": 1297, "end": 1319, "text": "Baayen & Lieber, 1991;", "ref_id": "BIBREF13" }, { "start": 1320, "end": 1321, "text": "5", "ref_id": null }, { "start": 2361, "end": 2362, "text": "6", "ref_id": null }, { "start": 2654, "end": 2667, "text": "Baayen [1993]", "ref_id": "BIBREF11" }, { "start": 2670, "end": 2692, "text": "Baayen & Renouf, 1996]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baayen's Corpus-Based Approach", "sec_num": "3.1" }, { "text": "A major difficulty in conducting a corpus-based study of productivity in Chinese is that Chinese texts lack word delimiters. Segmentation of Chinese text is, by itself, a contested subject [see Sproat, Shih, Gale, & Chang, 1996] , and consequently, a large-size corpus of segmented Chinese texts is not as readily available as a large-size corpus of English texts. 
used a large-size Chinese corpus (40-million Chinese characters) in their study by running an automatic segmenter to segment strings that contained the Chinese characters of interest and manually processing some problematic cases where the segmentation was not complete.", "cite_spans": [ { "start": 194, "end": 228, "text": "Sproat, Shih, Gale, & Chang, 1996]", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "A Corpus of Segmented Chinese Texts", "sec_num": "3.2" }, { "text": "The corpus of choice in the present study is a \"cleaned-up\" version of the Mandarin Chinese PH Corpus [Guo, 1993; hereafter, the PH Corpus] of segmented Chinese texts, made available in a study by Hockenmaier and Brew [1998] . 7 The corpus contains about 2.4-million (2,447,719) words-or 3.7-million (3,753,291) Chinese characters-from XinHua newspaper articles between January 1990 and March 1991. The texts of the PH Corpus are originally encoded in GB (simplified Chinese characters), and to facilitate the processing of the texts in computer programs, we convert the texts into UTF8 (Unicode) using an encoding conversion program developed by Basis Technology [Uniconv, 1999] . The size of the PH Corpus is relatively small by today's standards (cf. a corpus of 80-million English words used in Baayen & Renouf, 1996) , but the PH Corpus is one of few widely available corpora of segmented Chinese texts. Another widely available corpus of segmented Chinese texts is the Academia Sinica Balanced Corpus [1998; hereafter, the Sinica Corpus] that contains 5-million words from a variety of text sources. The sentences of the Sinica Corpus are syntactically parsed, so the part-of-speech of each segmented word is identified. Although the Sinica Corpus is not used in the present study, the use of the Sinica Corpus is certainly of interest. 8", "cite_spans": [ { "start": 102, "end": 113, "text": "[Guo, 1993;", "ref_id": "BIBREF23" }, { "start": 114, "end": 114, "text": "", "ref_id": null }, { "start": 198, "end": 225, "text": "Hockenmaier and Brew [1998]", "ref_id": "BIBREF24" }, { "start": 228, "end": 229, "text": "7", "ref_id": null }, { "start": 665, "end": 680, "text": "[Uniconv, 1999]", "ref_id": null }, { "start": 800, "end": 822, "text": "Baayen & Renouf, 1996)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "A Corpus of Segmented Chinese Texts", "sec_num": "3.2" }, { "text": "Certain words were filtered out as potentially relevant words of the Mandarin suffixes in question were collected from the PH Corpus. With -r and -zi, a criterion for distinguishing a suffix from a non-suffix is that -r and -zi as a suffix lose their tone [Liu, 2001, 57-58; Norman, 1988, 113-114] . This criterion helps identify and block many non-suffixal cases where -r and -zi denote 'son' or 'child', such as y\u012bng'\u00e9r \u5a74\u513f 'baby', f\u00f9z\u01d0 \u7236\u5b50 'father and son', and", "cite_spans": [ { "start": 256, "end": 274, "text": "[Liu, 2001, 57-58;", "ref_id": null }, { "start": 275, "end": 297, "text": "Norman, 1988, 113-114]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Corpus of Segmented Chinese Texts", "sec_num": "3.2" }, { "text": "xi\u00e0oz\u01d0 \u5b5d\u5b50 'filial son'. 9 We exclude w\u00e9nhu\u00e0 \u6587\u5316 'culture' because it is never a verb, and according to Norman [1988: 21] , the specific use of w\u00e9nhu\u00e0 \u6587\u5316 to mean 'culture' was adopted from Japanese. 
Also excluded are some -tou words, such as m\u00e1ot\u00f3u \u77db\u5934 'spearhead', in which -tou is a bound morpheme denoting 'head'. In addition, all pronouns in -men are excluded, as suggested in Sproat [2002] . As discussed earlier, -men behaves differently between pronouns and nouns (i.e., it is obligatory only with pronouns), and it is -men attaching to open-class nouns, rather than closed-class pronouns, that we are currently interested in.", "cite_spans": [ { "start": 24, "end": 25, "text": "9", "ref_id": null }, { "start": 102, "end": 119, "text": "Norman [1988: 21]", "ref_id": null }, { "start": 378, "end": 391, "text": "Sproat [2002]", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "A Corpus of Segmented Chinese Texts", "sec_num": "3.2" }, { "text": "The result of the hapax-based measure applied to the PH Corpus is shown in Table 1 . Figure 1 presents a bar graph illustrating the productivity ranking of the suffixes based on the p values. Among the five suffixes, -r is found to be the most productive. The high productivity of -r is somewhat unexpected given the fact that the PH Corpus consists of newspaper texts. If the use of -r is not favored in broadcasting, we may also expect a limited use of -r in a newspaper context. In addition, the use of -r is often a mere phonological phenomenon as seen in the speech of local Beijing residents, and it is unlikely for such a phonological phenomenon to be represented in newspaper texts. In Table 1 , the number of types (V) of -r does not reach the number of types of the least productive suffix -tou. However, the token frequency (N) of -r is lower than that of -tou, and -r has a larger number of hapaxes than -tou. Under the hapax-based measure, a high token frequency is associated with a high degree of lexicalization of words (i.e., the extent to which words are stored in the lexicon in their full form), and a high degree of lexicalization of words, in turn, is associated with a low degree of productivity [Baayen, 1989 . The rationale behind this mechanism is that if many words of an affix are lexicalized, the word formation rule of the affix needs to be invoked less often to form a ", "cite_spans": [ { "start": 1219, "end": 1232, "text": "[Baayen, 1989", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 85, "end": 93, "text": "Figure 1", "ref_id": null }, { "start": 694, "end": 701, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "A Quantitative Analysis of the Mandarin Suffixes", "sec_num": "3.3" }, { "text": "The productivity of -hua seems somewhat lower than what we may expect from the regularity in -hua word formation. Comparing -men and -hua in Table 1 , we see that -men and -hua are similar with respect to both V and n 1 , but the p value of -hua is lowered by the high token frequency (N) of -hua. The high token frequency of -hua could be attributed to the fact that the present analysis includes -hua words used as nouns. 
According to Baxter and Sagart [1998: 40] , -hua words are formed as verbs first, and these verbs can be used as nouns.", "cite_spans": [ { "start": 437, "end": 465, "text": "Baxter and Sagart [1998: 40]", "ref_id": null } ], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Figure 1 The productivity ranking of the Mandarin suffixes by the p values (the vertical axis lists the suffixes, and the horizontal axis shows the p values of the suffixes).", "sec_num": null }, { "text": "If this is the case, the word formation of -hua is also relevant in -hua nouns. However, the uniform treatment of -hua verbs and -hua nouns may not be appropriate for the hapax-based measure. It could be the case, for example, that some -hua words are typically used as nouns with high token frequencies while other -hua words are typically used as verbs with low token frequencies. It is, therefore, necessary to make a more detailed analysis of the word frequency distribution of -hua by separating -hua nouns from -hua verbs. Distinguishing nouns from verbs is unfortunately not available in the PH Corpus due to lack of syntactic information. A clearer description of the productivity of -hua could be achieved with a syntactically parsed corpus such as the Sinica Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1 The productivity ranking of the Mandarin suffixes by the p values (the vertical axis lists the suffixes, and the horizontal axis shows the p values of the suffixes).", "sec_num": null }, { "text": "The present study explores a type-based measure of productivity. It has been argued that the sum of types of an affix in a corpus, V, alone often leads to some unintuitive results in measuring productivity [Baayen, 1989 Baayen & Lieber, 1991] . 10 For example, Baayen and Lieber [1991: 804] point out that the type frequencies of -ness and -ity in their corpus (497 and 405, respectively) do not adequately represent the fact that -ness is intuitively felt to be much more productive than -ity. 
If the number of types in a corpus can be misleading with respect the degree of productivity, how can we make use of type frequencies in a productivity measure?", "cite_spans": [ { "start": 206, "end": 219, "text": "[Baayen, 1989", "ref_id": "BIBREF9" }, { "start": 220, "end": 242, "text": "Baayen & Lieber, 1991]", "ref_id": "BIBREF13" }, { "start": 261, "end": 290, "text": "Baayen and Lieber [1991: 804]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "An early attempt at a type-based measure of productivity was made by Aronoff [1976: 36] , in which he proposed that the degree of productivity of an affix could be measured by the ratio of the number of actual words of the affix to the number of possible words of the affix.", "cite_spans": [ { "start": 69, "end": 87, "text": "Aronoff [1976: 36]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "The measure is described by Baayen [1989: 28] as:", "cite_spans": [ { "start": 28, "end": 45, "text": "Baayen [1989: 28]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "(12) S V I =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "where V is the number of actual words with the relevant affix, S is the number of possible words with the affix, and I is the productivity index of the affix. Baayen [1989: 28] argues that the measure lacks specification on how to obtain V and S. Moreover, he argues that the measure can be interpreted to express, ironically, the degree of \"unproductivity\" of an affix because the number of possible words (S) would be, in theory, increasingly large (hence, the productivity index I would be increasingly small) for a very productive affix [Baayen, 1989: 30 ].", "cite_spans": [ { "start": 159, "end": 176, "text": "Baayen [1989: 28]", "ref_id": null }, { "start": 541, "end": 558, "text": "[Baayen, 1989: 30", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "Baayen [1989, 1992] defines V and S based on corpus data. V is (as before) the sum of types of the relevant affix found in a corpus, and S (expressed as \u015c) is statistically estimated for an infinitely large corpus; that is, \u015c is the number of possible word types of the relevant affix to be expected when the corpus size is increased infinitely. 11 The measure that Baayen [1989: 60] proposes:", "cite_spans": [ { "start": 7, "end": 13, "text": "[1989,", "ref_id": null }, { "start": 14, "end": 19, "text": "1992]", "ref_id": null }, { "start": 366, "end": 383, "text": "Baayen [1989: 60]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "(13) V S I=", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "is the inverse of (12) and expresses the potentiality of word formation rules, the extent to which the number of actual word types of an affix exhaust the number of possible word types of the affix [Baayen, 1992: 122] . 
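Restated in plain notation from the definitions just given (the inline formulas above are garbled in this copy), measure (12) is the ratio I = V / S, actual word types over possible word types, and measure (13) is the inverse ratio I = \u015c / V, with \u015c the corpus-based estimate of S.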
The measure (13), however, is not considered an alternative measure of the degree of productivity [Baayen, 1992: 122] .", "cite_spans": [ { "start": 198, "end": 217, "text": "[Baayen, 1992: 122]", "ref_id": null }, { "start": 318, "end": 337, "text": "[Baayen, 1992: 122]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Type-Based Measures", "sec_num": "4.1" }, { "text": "would mean under a type-based measure. One major appeal of the hapax-based measure is that it centers on the formation of new words, and we may wish to try focusing on the formation of new words under a type-based measure. However, a problem with taking a type-based approach is that we can no longer rely on the Good-Turing estimation method. In the next section, we will discuss another method of defining new words of a corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What does not appear to have been explored so far is the question of what new words", "sec_num": null }, { "text": "To define new words of a corpus in a type-based manner, we can employ the deleted estimation method [Jelinek & Mercer, 1985] used in language engineering. In a probabilistic language model, given a training corpus and a test corpus, we process words in the test corpus based on the probabilities of word occurrence in the training corpus. Since not all words of the test corpus appear in the training corpus, we need a method of assigning an appropriate probability mass to the unseen words in the test corpus. The main task involved here is to adjust the probabilities of word occurrence in the training corpus so that non-zero probability can be assigned to unseen words of the test corpus. A method used in this probability adjustment, if incorporated into a productivity measure, can tell us the probability of encountering unseen words in a corpus. The Good-Turing estimation method underlying the hapax-based measure is widely used in probabilistic language modeling, and its successful performances are reported in the literature [Chen & Goodman, 1998; Church & Gale, 1991] .", "cite_spans": [ { "start": 100, "end": 124, "text": "[Jelinek & Mercer, 1985]", "ref_id": "BIBREF25" }, { "start": 1037, "end": 1059, "text": "[Chen & Goodman, 1998;", "ref_id": "BIBREF19" }, { "start": 1060, "end": 1080, "text": "Church & Gale, 1991]", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "The Deleted Estimation Method", "sec_num": "4.2" }, { "text": "While the Good-Turing estimation method is a mathematical solution to the task of probability adjustment, where the needed probability adjustment is mathematically determined, the deleted estimation method is an empirical solution, where the needed adjustment is determined by comparing discrepancies in word frequency between corpora [Church & Gale, 1991; Manning & Sch\u00fctze, 1999] .", "cite_spans": [ { "start": 335, "end": 356, "text": "[Church & Gale, 1991;", "ref_id": "BIBREF20" }, { "start": 357, "end": 381, "text": "Manning & Sch\u00fctze, 1999]", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "The Deleted Estimation Method", "sec_num": "4.2" }, { "text": "The deleted estimation method, when incorporated into a type-based productivity measure, proceeds as follows. We begin by preparing two corpora of the same size and text type. The easiest way to have two such corpora is to split a large corpus in the middle into two sub-corpora, which we will call Corpus A and Corpus B. 
12 Comparing word types that appear in Corpus A against word types in Corpus B, unseen word types (or unseen types) in Corpus A are defined as those word types that do not appear in Corpus B. Likewise, unseen types in Corpus B are those that are absent in Corpus A. We obtain the average number of unseen types between Corpus A and Corpus B. Defining all word types (or all types) in a corpus as all the word types found in that corpus, 13 we also obtain the average number of all types between the two sub-corpora. The ratio of the average number of unseen types to the average number of all types expresses the extent to which word types of an affix are of an unseen type. With an assumption that unseen types are good candidates for new word types, the degree of productivity expressed in this manner comes close to Anshen and Aronoff's [1988: 643] definition of productivity as \"the likelihood that new forms will enter the language.\"", "cite_spans": [ { "start": 322, "end": 324, "text": "12", "ref_id": null }, { "start": 1141, "end": 1173, "text": "Anshen and Aronoff's [1988: 643]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Deleted Estimation Method", "sec_num": "4.2" }, { "text": "The type-based deleted estimation productivity measure is formulated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Deleted Estimation Method", "sec_num": "4.2" }, { "text": "Given Corpus A and Corpus B of the same size and text type, and all word types of an affix found in these corpora, where all types of a corpus are all the word types found in that corpus, unseen types in one corpus are those that are absent in the other corpus, and P tde is the degree of productivity of the affix in question (tde = type-based deleted estimation). In calculating P tde by the measure (14), we can first average the unseen types in the nominator and the all types in the denominator. This will conveniently give us the average number of unseen types and the average number of all types, which are both of interest by themselves, before examining the ratio of the two (as will be seen later in Table 2 ). In the remaining sections, the measure (14) will be referred to as the P tde measure. Using a Venn Diagram, Figure 2 illustrates elements involved in the P tde measure.", "cite_spans": [], "ref_spans": [ { "start": 710, "end": 717, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 829, "end": 837, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Deleted Estimation Method", "sec_num": "4.2" }, { "text": "As a byproduct, the P tde measure also identifies common types, word types that are shared by two sub-corpora, as shown in Figure 2 . One possible interpretation of these common types is that they represent attested words, where attested words are defined as those words that are familiar to the majority of speakers. Although an approximation, 14 common types may be good candidates for attested words because unseen types, which are less likely to be familiar to the majority of speakers, are maximally excluded. As the corpus size increases, the number of common types may begin to provide a good estimate of the range of word types that are 14 Strictly speaking, any word type with the token frequency of two or more in the original whole corpus has a chance to be shared by the two sub-corpora after the corpus is split. 
Thus, a word that appears only twice in a large corpus could be identified as a common type.", "cite_spans": [ { "start": 645, "end": 647, "text": "14", "ref_id": null } ], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2 An illustration of elements involved in the P tde measure (all types in a corpus are all the word types found in that corpus, unseen types in one corpus are those that are absent in the other corpus, and common types are the word types shared by the two corpora).", "sec_num": null }, { "text": "A shared by the majority of speakers. Such a range of word types differs from the range of word types in a dictionary. Common types will not be pursued in the present study, but they may be worth further investigation in future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "common types in Corpus A and Corpus B", "sec_num": null }, { "text": "The result of the P tde measure applied to the PH Corpus is shown in Table 2 . Figure 3 presents a bar graph that illustrates the productivity ranking of the suffixes based on the P tde values. Note. The PH Corpus is split in the middle into two sub-corpora. All types in a sub-corpus are all the word types that appear in that sub-corpus. The second column shows the average number of all types between the two sub-corpora. Unseen types are those that appear in one sub-corpus but are absent in the other sub-corpus. The third column shows the average number of unseen types between the two sub-corpora. The tenths place in the second and third columns is due to the averaging. P tde is the ratio of (average) unseen types to (average) all types. The suffixes are sorted in descending order by P tde .", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 76, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 79, "end": 87, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Performance of the P tde Measure", "sec_num": "4.3" }, { "text": "In Table 2 , we find that -r is not as highly productive as under the hapax-based measure,", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Performance of the P tde Measure", "sec_num": "4.3" }, { "text": "though it still appears to be grouped with the more productive suffixes. Here, we may wonder why we examine the ratio of unseen types to all types, instead of examining the number of unseen types only. If productivity is determined by the number of unseen types only, -r would be among the less productive suffixes. However, comparing the number of unseen types alone is not satisfactory because an affix with a low frequency of use would generally be found to be less productive. The P tde measure must be able to capture the possibility that an affix with a low frequency of use can nevertheless be productive when it is used to form a word. With respect to the present data, the ratio of unseen types to all types is relatively high for -r,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance of the P tde Measure", "sec_num": "4.3" }, { "text": "indicating that a large proportion of -r word types are of an unseen type, or a potentially new type. 
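To make the computation concrete, the P tde ratio reported in Table 2 can be sketched in a few lines of Python; the function name and the toy word lists below are purely illustrative (they are not the PH Corpus data or the code actually used in the study):

```python
def p_tde(types_a, types_b):
    # Type-based deleted estimation: ratio of the average number of
    # unseen types to the average number of all types over two halves.
    set_a, set_b = set(types_a), set(types_b)
    unseen_a = len(set_a - set_b)        # types in Corpus A absent from Corpus B
    unseen_b = len(set_b - set_a)        # types in Corpus B absent from Corpus A
    avg_unseen = (unseen_a + unseen_b) / 2
    avg_all = (len(set_a) + len(set_b)) / 2
    return avg_unseen / avg_all if avg_all else 0.0

# Invented toy data: word types of one suffix observed in each half.
half_a = ["xuéshengmen", "jìzhěmen", "kèrénmen"]
half_b = ["xuéshengmen", "shìzhǎngmen"]
print(p_tde(half_a, half_b))             # 0.6
```

On the real data, the two arguments would be the word types of a given suffix collected from Corpus A and Corpus B, the two halves into which the PH Corpus is split.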
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance of the P tde Measure", "sec_num": "4.3" }, { "text": "As was the case under the hapax-based measure, -men is found to be highly productive and -tou is found to be the least productive. The uniform treatment of -hua verbs and -hua nouns does not seem to pose a problem, though it is also of interest to investigate the effect of separating -hua nouns from -hua verbs under the P tde measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3 The productivity ranking of the Mandarin suffixes by the P tde values (the vertical axis lists the suffixes, and the horizontal axis shows the P tde values of the suffixes).", "sec_num": null }, { "text": "The P tde measure defines unseen types irrespective of word-token frequencies; that is, an unseen type in a corpus is \"unseen\" as long as it is absent in the other corpus, regardless of how many times the word is repeated in the same corpus. Figure 4 shows the word-token frequency distribution of unseen types in Corpus A and Corpus B. The labels used for the word-token frequency categories are: n 1 = words occurring once, n 2 = words occurring twice, ..., n 5+ = words occurring five times or more. ", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 250, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 3 The productivity ranking of the Mandarin suffixes by the P tde values (the vertical axis lists the suffixes, and the horizontal axis shows the P tde values of the suffixes).", "sec_num": null }, { "text": "We find in Figure 4 that the majority of unseen types are hapaxes. There are, nonetheless, unseen types that appear more than once in a corpus-some unseen types appear even five times or more (n 5+ ). We also notice gaps between the two sub-corpora in the word frequency of the unseen types (e.g., compare the number of -men hapaxes). Variability between two corpora will be the topic of discussion in the next section.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 19, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 4 The word-token frequency distribution of unseen types in the two sub-corpora of the PH Corpus, Corpus A and Corpus B (the horizontal axis shows the word-token frequency category, and the vertical axis shows the number of word types in each frequency category; the letter following each suffix in the legend indicates from which sub-corpus the data are drawn; the order of the suffixes in the legend (from top down) corresponds to the order of bars in each frequency category (from left to right)).", "sec_num": null }, { "text": "Under the P tde measure, a corpus is split in the middle to create two sub-corpora. So far, we have made the assumption that splitting a corpus in the middle would create two sub-corpora that are similar with respect to the text type. However, we must be cautious about this assumption. Baayen [2001] discusses how the texts and word frequency distribution of a corpus can be non-uniform. 15 One way to reduce variability between split halves of a corpus is to randomize words of the corpus before splitting the corpus into two. Randomization of words can be accomplished by shuffling words; that is, given a corpus of n words, we exchange each i-th word (i = 1, 2, ..., n) with a randomly chosen j-th word (1 \u2264 j \u2264 n). 
If we repeat the \"random split\" of a corpus (i.e., randomizing words of a corpus and splitting the corpus in the middle) for a large number of times, say 1,000 times, and compute the mean of the relevant data, we should be able to obtain a stable, representative result of a productivity measure. 16 Table 3 shows the result of the hapax-based measure applied to the two sub-corpora of the PH Corpus, with and without randomization of words. In Part (a) of Table 3 , the difference in V between Corpus A and Corpus B is almost significant, 17 which suggests variability in texts between the two sub-corpora, and a different productivity ranking is obtained in each sub-corpus. However, if we turn to Part (b) of Table 3 , the productivity ranking becomes consistent between the two sub-corpora. 18 Interestingly, the productivity ranking in Part (b) of Table 3 is the same as one obtained earlier in Table 1 in Section 3.3. The p values in Part (b) of Table 3 are overall higher than those in Table 1, but this is an expected outcome, for p is dependent on the size of a corpus [Baayen, 1989 Baayen & Lieber, 1991] . We find that the hapax-based measure can achieve stability by means of a large number of random splits of a corpus.", "cite_spans": [ { "start": 287, "end": 300, "text": "Baayen [2001]", "ref_id": null }, { "start": 389, "end": 391, "text": "15", "ref_id": null }, { "start": 1017, "end": 1019, "text": "16", "ref_id": null }, { "start": 1515, "end": 1517, "text": "18", "ref_id": null }, { "start": 1798, "end": 1811, "text": "[Baayen, 1989", "ref_id": "BIBREF9" }, { "start": 1812, "end": 1834, "text": "Baayen & Lieber, 1991]", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1020, "end": 1027, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1177, "end": 1184, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1432, "end": 1439, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1573, "end": 1580, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1620, "end": 1627, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1672, "end": 1679, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1713, "end": 1725, "text": "Table 1, but", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Variability in Corpus Data", "sec_num": "4.4" }, { "text": "What will be the effects of corpus-data variability on the P tde measure? To examine this, we need to temporarily simplify the P tde measure so that the value of P tde will be obtained for each individual sub-corpus (without averaging unseen types and all types between two sub-corpora). That is, under the simplified measure, P tde for Corpus A, P tde (A), will be the ratio of \"unseen types in A given B\" to \"all types in A\"; and similarly, P tde (B) will be the ratio of \"unseen types in B given A\" to \"all types in B.\" Table 4 shows the result of the simplified P tde measure applied to the two sub-corpora of the PH Corpus, with and without randomization of words.", "cite_spans": [], "ref_spans": [ { "start": 523, "end": 530, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Variability in Corpus Data", "sec_num": "4.4" }, { "text": "The simplified P tde measure is found to be quite vulnerable to corpus-data variability. In Part (a) of Table 4 , the difference between Corpus A and Corpus B is almost significant in all types and unseen types, and the P tde values differ significantly between the two sub-corpora. 
19 However, if we turn to Part (b) of Table 4 , the productivity ranking becomes consistent between the two sub-corpora. 20 Similarly to the hapax-based measure, the P tde measure can achieve stability through a large number of random splits of a corpus.", "cite_spans": [ { "start": 283, "end": 285, "text": "19", "ref_id": null }, { "start": 404, "end": 406, "text": "20", "ref_id": null } ], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 321, "end": 328, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Variability in Corpus Data", "sec_num": "4.4" }, { "text": "17 A paired t-test reveals that the difference in V approaches significance [t(4) = 2.595, p = .06], though the difference is not significant in the other elements: N [t(4) = .905, p > .10], n 1 [t(4) = 2.046, p > .10], and p [t(4) = .555, p > .10]. 18 The correlation coefficient between Corpus A and Corpus B improves in p after the random splits: p [r(5) = (.850 \u2192) 1.0, p < .01]. 19 A paired t-test shows that the difference approaches significance in all types [t(4) = 2.595, p = .06] and in unseen types [t(4) = 2.595, p = .06], and that the difference is significant in P tde [t(4) = 2.869, p < .05]. 20 The correlation coefficient between Corpus A and Corpus B improves in P tde after the random splits: P tde [r(5) = (.753 \u2192) .999, p < .01]. Note. Each value in Part (b) of Table 4 is the mean of 1,000 random splits; the suffixes in each section are sorted in descending order by P tde . Figure 5 shows the word-token frequency distribution of unseen types averaged over the 1,000 random splits. We see in Figure 5 that unseen types with higher token frequencies (e.g., n 4 and n 5+ ) are almost absent. This indicates that, as a result of randomizing the words of the corpus, it became unlikely for unseen types to include word types that are repeated many times in a corpus. Compared with what we saw earlier in Figure 4 , the great majority of unseen types are now hapaxes, and the variances between Corpus A and Corpus B are also reduced.", "cite_spans": [ { "start": 245, "end": 247, "text": "18", "ref_id": null }, { "start": 379, "end": 381, "text": "19", "ref_id": null }, { "start": 597, "end": 599, "text": "20", "ref_id": null } ], "ref_spans": [ { "start": 837, "end": 845, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 955, "end": 963, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 1267, "end": 1275, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Variability in Corpus Data", "sec_num": "4.4" }, { "text": "We now consider the P tde measure in its original state (as in Section 4.2, with the averaging of unseen types and all types between two sub-corpora). Comparing Table 2 and Part (b) of Table 4 , we find that the original P tde measure achieves a result that is highly correlated with the result obtained with the 1,000 random splits. 21 Note in particular that the productivity ranking is consistent between Table 2 and Part (b) of Table 4 . 21 Comparing the elements of Table 2 and the elements of Corpus A in Part (b) of Table 4 , the correlation coefficient is significant in all elements: all types [r(5) = 1.0, p < .01], unseen types [r(5) = 1.0, p < .01], and P tde [r(5) = 1.0, p < .01]. Likewise, the correlation coefficient is significant in all elements when we compare the elements of Table 2 and the elements of Corpus B in Part (b) of Table 4 : all types [r(5) = 1.0, p < .01], unseen types [r(5) = 1.0, p < .01], and P tde [r(5) = .999, p < .01].
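The original form of the measure referred to here can be sketched in the same style: a single split of the corpus, with unseen types and all types averaged over the two halves before the ratio is taken. The snippet below is only an illustration of that computation, not the author's code, and its names are assumptions.

```python
def p_tde(tokens_a, tokens_b):
    """Original P_tde: average unseen types / average all types over the two halves."""
    types_a, types_b = set(tokens_a), set(tokens_b)
    avg_unseen = (len(types_a - types_b) + len(types_b - types_a)) / 2
    avg_all = (len(types_a) + len(types_b)) / 2
    return avg_unseen / avg_all
```

With the Table 2 figures for -men (an average of 70 unseen types out of an average of 149 types), this ratio gives the value 0.470 reported there.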
The P tde measure seems to reduce the effects of corpus-data variability by averaging unseen types and all types between two sub-corpora. This is an advantage and makes the P tde measure handy, for a large number of random splits of a corpus can be computationally expensive, especially when the corpus size is large. ", "cite_spans": [ { "start": 334, "end": 336, "text": "21", "ref_id": null }, { "start": 365, "end": 367, "text": "21", "ref_id": null } ], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 185, "end": 192, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 394, "end": 401, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 446, "end": 453, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 719, "end": 730, "text": "Table 2 and", "ref_id": "TABREF2" }, { "start": 774, "end": 781, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 798, "end": 805, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Variability in Corpus Data", "sec_num": "4.4" }, { "text": "The present study has proposed a type-based measure of productivity, the P tde measure, that uses the deleted estimation method [Jelinek & Mercer, 1985] in defining unseen word types of a corpus. The measure expresses the degree of productivity of an affix by the ratio of unseen word types of the affix to all word types of the affix. If the ratio is high for an affix, a large proportion of the word types of the affix are of an unseen type, indicating that the affix has a great potential to form a new word.", "cite_spans": [ { "start": 128, "end": 152, "text": "[Jelinek & Mercer, 1985]", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "We have tested the performance of the P tde measure as well as the hapax-based measure of Baayen [1989 in a quantitative analysis of the productivity of five Mandarin suffixes: -hua, -men, -r, -zi, and -tou. The P tde measure describes -hua, -men, and -r to be highly productive, -zi to be less productive than these three suffixes, and -tou to be the least productive, yielding the productivity ranking \"-men, -hua, -r, -zi, -tou.\" The P tde measure and the hapax-based measure rank the suffixes differently with respect to -hua and -r. The relatively low productivity of -hua under the hapax-based measure could be attributed to the inclusion of -hua nouns in the present analysis. -r is assigned a larger productivity score under the hapax-based measure. The two measures agree on the high productivity of -men and the low productivity of -tou. The different results of the two measures are likely due to the type-based/token-based difference of the measures. The result of each measure requires an individual evaluation, for the knowledge that we can obtain from the result of each measure is different; for example, while the hapax-based measure takes into consideration the degree of lexicalization of words of an affix, the P tde measure does not consider such an issue.", "cite_spans": [ { "start": 90, "end": 102, "text": "Baayen [1989", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "We have also examined how corpus-data variability affects the results of a productivity measure. It was found that a large number of random splits of a corpus adds stability to both the P tde measure and the hapax-based measure. 
Moreover, it was found that even without randomization of words, the averaging of unseen types and all types under the P tde measure reduces the effects of corpus-data variability. This is an advantage of the P tde measure, considering the computational cost involved in randomizing words repeatedly, especially when the corpus is large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "With an assumption that unseen words of a corpus are good candidates for new words, a corpus-based productivity measurement can be regarded as a search for unseen words in a corpus. The apparent paradox is that the words that we seek are \"unseen.\" Baayen's hapax-based measure achieves a mathematical estimate of the probability of seeing unseen words in a corpus by the Good-Turing estimation method. The deleted estimation method provides another way of defining unseen words of a corpus by comparing discrepancies in word frequency between two corpora, and the method also enables defining unseen words in a type-based context. It is hoped that words identified as unseen by the P tde measure are also good candidates for new words, and this requires further investigation in future research. The implication of the successful result of the P tde measure presented in this paper is that, in addition to what has been proposed by Baayen [1989, 1992, and subsequent works] , there appear to be possibilities for capturing and exploiting elements in corpus data that are relevant to the quantitative description of productivity. The study of morphological productivity will be enriched by exploring such possibilities in the corpus-based approach to measuring productivity. 15 -\u5e02\u957f\u4eec sh\u00eczh\u01cengmen 14 -\u5c45\u6c11\u4eec j\u016bm\u00ednmen 14 -\u9996\u8111\u4eec sh\u01d2un\u01ceomen 14 -\u6751\u6c11\u4eec c\u016bnm\u00ednmen 13 -\u6f14\u5458\u4eec y\u01cenyu\u00e1nmen 13 -\u65c5\u5ba2\u4eec l\u01dak\u00e8men 12 -\u540c\u4e8b\u4eec t\u00f3ngsh\u00ecmen 12 -\u5c0f\u4f19\u5b50\u4eec xi\u01ceohu\u01d2zimen 11 -\u533b\u751f\u4eec y\u012bsh\u0113ngmen 10 -\u884c\u5bb6\u4eec x\u00edngji\u0101men 10 -\u8bae\u5458\u4eec y\u00ecyu\u00e1nmen 10 -\u5927\u5b66\u751f\u4eec d\u00e0xu\u00e9sh\u0113ngmen 10 -\u5b98 \u5175 \u4eec gu\u0101nb\u012bngmen 9 -\u8fd0 \u52a8\u5458 \u4eec y\u00f9nd\u00f2ngyu\u00e1nmen 9 -\u89c2\u5bdf\u5bb6\u4eec gu\u0101nch\u00e1ji\u0101men 9 -\u540c\u884c\u4eec t\u00f3ngx\u00edngmen 8 -\u7ecf\u7406\u4eec j\u012bngl\u01d0men 8 -\u5e08\u751f\u4eec sh\u012bsh\u0113ngmen 7 -\u5e38\u59d4\u4eec ch\u00e1ngw\u011bimen 7 -\u4f01\u4e1a\u5bb6\u4eec q\u01d0y\u00e8ji\u0101men 7 -\u5916\u957f\u4eec w\u00e0izh\u01cengmen 7 -\u6307\u6218\u5458\u4eec zh\u01d0zh\u00e0nyu\u00e1nmen 7 -\u8239\u5458\u4eec chu\u00e1nyu\u00e1nmen 6 -\u5217\u8f66 \u5458\u4eec li\u00e8ch\u0113yu\u00e1nmen 6 -\u90e8 \u957f\u4eec b\u00f9zh\u01cengmen 6 -\u4f5c\u5bb6\u4eec zu\u00f2ji\u0101men 6 -\u5efa\u8bbe\u8005\u4eec ji\u00e0nsh\u00e8zh\u011bmen 6 -\u5de5 \u53cb \u4eec g\u014dngy\u01d2umen 6 -\u9752\u5e74\u4eec q\u012bngni\u00e1nmen 6 -\u515a \u5458 \u4eec d\u01cengyu\u00e1nmen 5 -\u987e\u5ba2\u4eec g\u00f9k\u00e8men 5 -\u5e72\u8b66\u4eec g\u00e0nj\u01d0ngmen 5 -\u5b66\u8005\u4eec xu\u00e9zh\u011bmen 5 -\u5a18 \u4eec ni\u00e1ngmen 5 -\u52b3\u6a21\u4eec l\u00e1om\u00f3men 5 -\u6559\u5e08\u4eec ji\u00e0osh\u012bmen 5 -\u8425\u4e1a\u5458\u4eec y\u00edngy\u00e8yu\u00e1nmen 4 -\u56e2\u5458\u4eec tu\u00e1nyu\u00e1nmen 4 -\u6210\u5458\u4eec ch\u00e9ngyu\u00e1nmen 4 -\u5b50\u5973\u4eec 
z\u01d0n\u01damen 4 -\u961f\u53cb\u4eec du\u00ecy\u01d2umen 4 -\u5987\u5973\u4eec f\u00f9n\u01damen 4 -\u4e58\u5ba2\u4eec ch\u00e9ngk\u00e8men 4 -\u4fa8\u80de\u4eec qi\u00e1ob\u0101omen 4 -\u4f19 \u4f34\u4eec hu\u01d2b\u00e0nmen 4 -\u6765\u5bbe\u4eec l\u00e1ib\u012bnmen 4 -\u513f\u5973\u4eec \u00e9rn\u01damen 3 -\u519b\u4eba\u4eec j\u016bnr\u00e9nmen 3 -", "cite_spans": [ { "start": 932, "end": 955, "text": "Baayen [1989, 1992, and", "ref_id": null }, { "start": 956, "end": 973, "text": "subsequent works]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "\u5316 w\u01cenglu\u00f2hu\u00e0 1 -\u6c28\u5316 \u0101nhu\u00e0 1 -\u6574\u4f53\u5316 zh\u011bngt\u01d0hu\u00e0 1 -\u6e20\u7f51\u5316 q\u00faw\u01cenghu\u00e0 1 -\u5065\u5eb7 \u5316 ji\u00e0nk\u0101nghu\u00e0 1 -\u795e\u5316 sh\u00e9nhu\u00e0 1 -\u672c\u5730\u5316 b\u011bnd\u00echu\u00e0 1 -\u6b27\u6d32\u5316 \u014duzh\u014duhu\u00e0 1 -\u5408\u7406 \u5316 h\u00e9l\u01d0hu\u00e0 1 -\u9986\u5316 gu\u01cenhu\u00e0 1 -\u89c4\u683c\u5316 gu\u012bg\u00e9hu\u00e0 1 -\u8d35\u65cf\u5316 gu\u00ecz\u00fahu\u00e0 1 -\u6a21\u5757\u5316 m\u00f3ku\u00e0ihu\u00e0 1 -\u4e2a\u6027\u5316 g\u00e8x\u00ecnghu\u00e0 1 -\u539f\u751f\u52a8\u7269\u5316 yu\u00e1nsh\u0113ngd\u00f2ngw\u00f9hu\u00e0 1 -\u666e\u53ca\u5316 p\u01d4j\u00edhu\u00e0 1 -\u6210\u4eba\u5316 ch\u00e9ngr\u00e9nhu\u00e0 1 -\u786c\u6717\u5316 y\u00ecnglanghu\u00e0 1 -\u6b27\u5171\u4f53\u5316 \u014dug\u00f2ngt\u01d0hu\u00e0 1 - \u6c30\u5316 q\u00ednghu\u00e0 1 -\u5b9a\u91cf\u5316 d\u00ecngli\u00e0nghu\u00e0 1 -\u6c2f\u82ef\u5316 l\u01dcb\u011bnhu\u00e0 1 -\u7535\u5668\u5316 di\u00e0nq\u00echu\u00e0 1 - \u9f84\u5316 l\u00ednghu\u00e0 1 -\u6c2f\u5316 l\u01dchu\u00e0 1 -\u5b98\u50da\u5316 gu\u0101nli\u00e1ohu\u00e0 1 -\u6c2f\u78fa\u5316 l\u01dchu\u00e1nghu\u00e0 1 -\u653f\u6cbb \u5316 zh\u00e8ngzh\u00echu\u00e0 1 -\u5173\u6000\u5316 gu\u0101nhu\u00e1ihu\u00e0 1 -\u6863\u6848\u5316 d\u00e0ng\u00e0nhu\u00e0 1 -\u78f7\u5316 l\u00ednhu\u00e0 1 -\u51dd \u56fa\u5316 n\u00edngg\u00f9hu\u00e0 1 -\u8d28\u5316 zh\u00echu\u00e0 1 -\u6eb6\u5316 r\u00f3nghu\u00e0 1 -\u7682\u5316 z\u00e0ohu\u00e0 1 -\u5c18\u5316 ch\u00e9nhu\u00e0 1 -\u85fb\u7c7b\u5316 z\u01ceol\u00e8ihu\u00e0 1 -\u5143\u9996\u5316 yu\u00e1nsh\u01d2uhu\u00e0 1 -\u56ed\u7530\u5316 yu\u00e1nti\u00e1nhu\u00e0 1 -\u8150\u5316 f\u01d4hu\u00e0 1 -\u5173\u7cfb\u5316 gu\u0101nx\u00echu\u00e0 1 -\u5851\u5316 s\u00f9hu\u00e0 1 -\u827a\u672f\u5316 y\u00ecsh\u00f9hu\u00e0 1 -\u56fd\u5bb6\u5316 gu\u00f3ji\u0101hu\u00e0 1 - \u8db3\u8ff9\u5316 z\u00faj\u00echu\u00e0 1 -\u70bc\u5316 li\u00e0nhu\u00e0 1 -\u68c9\u82b1\u5316 mi\u00e1nhuahu\u00e0 1 -\u901a\u7528\u5316 t\u014dngy\u00f2nghu\u00e0 1 - \u6e0d\u5316 z\u00echu\u00e0 1 -\u884c\u653f\u5316 x\u00edngzh\u00e8nghu\u00e0 1 -\u8d8a\u5357\u5316 yu\u00e8n\u00e1nhu\u00e0 1 -\u8815\u866b\u5316 r\u00fach\u00f3nghu\u00e0 1 - \u6a21\u786b\u5316 m\u00f3li\u00fahu\u00e0 1 -\u91cf\u5316 li\u00e0nghu\u00e0 1 -\u65f6\u88c5\u5316 sh\u00edzhu\u0101nghu\u00e0 1 -\u90e8\u95e8\u5316 b\u00f9m\u00e9nhu\u00e0 1 - \u7406\u60f3\u5316 l\u01d0xi\u01cenghu\u00e0 1 -\u7701\u57ce\u5316 sh\u011bngch\u00e9nghu\u00e0 1 -\u515a\u5316 d\u01cenghu\u00e0 1 -\u6218\u7565\u5316 zh\u00e0nl\u00fc\u00e8hu\u00e0 1 -\u5168\u80fd\u5316 qu\u00e1nn\u00e9nghu\u00e0 1 -\u50ac \u5316 cu\u012bhu\u00e0 1 -\u6570 \u91cf \u5316 sh\u00f9li\u00e0nghu\u00e0 1 -\u7a7a\u5fc3\u5316 k\u00f2ngx\u012bnhu\u00e0 1 -\u7ea4 \u5316 xi\u0101nhu\u00e0 1 -\u7fbd \u5316 y\u01d4hu\u00e0 1 -\u5957\u8def\u5316 
t\u00e0ol\u00f9hu\u00e0 1 -\u5e73 \u9762 \u5316 p\u00edngmi\u00e0nhu\u00e0 1 -\u96ea\u5316 xu\u011bhu\u00e0 1 -\u751f\u6d3b\u5316 sh\u0113nghu\u00f3hu\u00e0 1 -\u52a8\u7269\u5316 d\u00f2ngw\u00f9hu\u00e0 1 -\u7a0b\u63a7 \u5316 ch\u00e9ngk\u00f2nghu\u00e0 1 -\u6c2e\u5316 d\u00e0nhu\u00e0 1 -\u8c31\u5316 p\u01d4hu\u00e0 1 -\u5eb8\u4fd7\u5316 y\u014dngs\u00fahu\u00e0 1 -men \u4eba\u4eec r\u00e9nmen 734 -\u4ee3 \u8868 \u4eec d\u00e0ibi\u01ceomen 175 -\u4e13 \u5bb6 \u4eec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "\u5c06\u519b\u4eec ji\u0101ngj\u016bnmen 3 -\u7236\u6bcd\u5b98\u4eec f\u00f9m\u01d4gu\u0101nmen 3 -\u4e58\u52a1\u5458\u4eec ch\u00e9ngw\u00f9yu\u00e1nmen 3 -\u62a4 \u58eb\u4eec h\u00f9shimen 3 -\u5927\u5e08\u4eec d\u00e0sh\u012bmen 3 -\u513f\u5b59\u4eec \u00e9rs\u016bnmen 3 -\u620f\u8ff7\u4eec x\u00ecm\u00edmen 3 -\u5c0f\u5b66 \u751f\u4eec xi\u01ceoxu\u00e9sh\u0113ngmen 3 -\u6587\u827a\u5bb6\u4eec w\u00e9ny\u00ecji\u0101men 3 -\u89c2\u4f17\u4eec gu\u0101nzh\u00f2ngmen 3 -\u7403\u8ff7\u4eec qi\u00fam\u00edmen 3 -\u53f8\u957f\u4eec s\u012bch\u00e1ngmen 3 -\u9886\u5bfc\u4eec l\u01d0ngd\u01ceomen 3 -\u6559\u7ec3\u5458\u4eec ji\u00e0oli\u00e0nyu\u00e1nmen 2 -\u7237 \u4eec y\u00e9men 2 -\u4eba \u5458 \u4eec r\u00e9nyu\u00e1nmen 2 -\u5973 \u5de5\u4eec n\u01dag\u014dngmen 2 -\u6444\u5f71\u5bb6\u4eec sh\u00e8y\u01d0ngji\u0101men 2 -\u677f \u62a5\u5458\u4eec b\u01cenb\u00e0oyu\u00e1nmen 2 -\u8001 \u677f \u4eec l\u01ceob\u01cenmen 2 -\u8001 \u6c49 \u4eec l\u01ceoh\u00e0nmen 2 -\u72b6 \u5143 \u4eec zhu\u00e0ngyuanmen 2 -\u4f1a \u5458 \u4eec hu\u00ecyu\u00e1nmen 2 -\u5dde \u957f\u4eec zh\u014duzh\u01cengmen 2 -\u5973\u58eb\u4eec n\u01dash\u00ecmen 2 -\u53cb\u4eba\u4eec y\u01d2ur\u00e9nmen 2 -\u5927\u5bb6\u4eec d\u00e0ji\u0101men 2 -\u5e08 \u5085\u4eec sh\u012bfumen 2 -\u521b\u4f5c\u8005\u4eec chu\u00e0ngzu\u014dzh\u011bmen 2 -\u5587\u561b\u4eec l\u01cemamen 2 -\u7ecf\u6d4e\u5b66\u5bb6\u4eec j\u012bngj\u00ecxu\u00e9ji\u0101men 2 -\u652f\u6301\u8005\u4eec zh\u012bch\u00edzh\u011bmen 2 -\u8001\u5e08\u4eec l\u01ceosh\u012bmen 2 -\u513f\u5b50\u4eec \u00e9rzimen 2 - \u7956\u8f88\u4eec z\u01d4b\u00e8imen 2 -\u5c11\u5973\u4eec sh\u00e0on\u01damen 2 -\u5b66 \u5458 \u4eec xu\u00e9yu\u00e1nmen 2 -\u4e66 \u753b \u5bb6 \u4eec sh\u016bhu\u00e0ji\u0101men 2 -\u9009\u624b\u4eec xu\u01censh\u01d2umen 2 -\u5988\u5988\u4eec m\u0101mamen 2 -\u540c\u80de\u4eec t\u00f3ngb\u0101omen 2 -\u5458\u5de5\u4eec yu\u00e1ng\u014dngmen 2 -\u4eb2\u621a\u4eec q\u012bnqimen 2 -\u9009\u6c11\u4eec xu\u01cenm\u00ednmen 2 -\u5929\u6587\u5b66\u5bb6\u4eec ti\u0101nw\u00e9nxu\u00e9ji\u0101men 2 -\u513f\u7ae5\u4eec \u00e9rt\u00f3ngmen 2 -\u6cd5\u5b98\u4eec f\u01cegu\u0101nmen 1 -\u884c\u4eba\u4eec x\u00edngr\u00e9nmen 1 -\u6b79\u5f92\u4eec d\u01ceit\u00famen 1 -\u9ad8\u5f92\u4eec g\u0101ot\u00famen 1 -\u763e\u541b\u5b50\u4eec y\u01d0nj\u016bnz\u01d0men 1 -\u8d35\u5bbe\u4eec gu\u00ecb\u012bnmen 1 -\u53a8\u5e08\u4eec ch\u00fash\u012bmen 1 -\u53f0\u80de\u4eec t\u00e1ib\u0101omen 1 -\u8001\u4f19\u4f34\u4eec l\u01ceohu\u01d2b\u00e0nmen 1 - \u52c7\u58eb\u4eec y\u01d2ngsh\u00ecmen 1 -\u8f66\u8ff7\u4eec ch\u0113m\u00edmen 1 -\u652f\u59d4\u4eec zh\u012bw\u011bimen 1 -\u5b59\u5b50\u4eec s\u016bnzimen 1 -\u592b\u5987\u4eec f\u016bf\u00f9men 1 -\u914d\u6c34\u5458\u4eec p\u00e8ishu\u01d0yu\u00e1nmen 1 -\u4f24\u5458\u4eec sh\u0101ngyu\u00e1nmen 1 -\u56da\u72af\u4eec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." 
}, { "text": "qi\u00faf\u00e0nmen 1 -\u5ba2\u6237\u4eec k\u00e8h\u00f9men 1 -\u519b\u5b98\u4eec j\u016bngu\u0101nmen 1 -\u58eb\u5175\u4eec sh\u00ecb\u012bngmen 1 -\u5dfe\u5e3c \u4eec j\u012bngu\u00f3men 1 -\u52a9\u624b\u4eec zh\u00f9sh\u01d2umen 1 -\u7559\u5b66\u751f\u4eec li\u00faxu\u00e9sh\u0113ngmen 1 -\u8bbe\u8ba1\u5e08\u4eec sh\u00e8j\u00ecsh\u012bmen 1 -\u5c40\u957f\u4eec j\u00fazh\u01cengmen 1 -\u8001\u5de5\u4eba\u4eec l\u01ceog\u014dngr\u00e9nmen 1 -\u6e14\u5de5\u4eec y\u00fag\u014dngmen 1 -\u526f \u5e02\u957f\u4eec f \u00f9 sh\u00eczh\u01cengmen 1 -\u4fa6 \u5bdf\u5458\u4eec zh\u0113nch\u00e1yu\u00e1nmen 1 -\u89c2\u5bdf\u5458\u4eec gu\u0101nch\u00e1yu\u00e1nmen 1 -\u8bbe \u8ba1\u8005\u4eec sh\u00e8j\u00eczh\u011bmen 1 -\u5bb6 \u5c5e \u4eec ji\u0101sh\u01d4men 1 -\u68c0 \u5bdf \u5b98 \u4eec ji\u01cench\u00e1gu\u0101nmen 1 -\u4f53 \u80b2 \u8ff7 \u4eec t\u01d0y\u00f9m\u00edmen 1 -\u5973 \u751f\u4eec n\u01dash\u0113ngmen 1 -\u9769\u547d\u5148\u70c8\u4eec g\u00e9m\u00ecngxi\u0101nli\u00e8men 1 -\u98de\u884c\u5458\u4eec f\u0113ix\u00edngyu\u00e1nmen 1 -\u8001\u5934\u5b50\u4eec l\u01ceot\u00f3uzimen 1 -\u6d77\u5916\u4fa8\u80de \u4eec h\u01ceiw\u00e0iqi\u00e1ob\u0101omen 1 -\u70ae\u5236\u8005\u4eec p\u00e0ozh\u00eczh\u011bmen 1 -\u670d\u52a1\u5458\u4eec f\u00faw\u00f9yu\u00e1nmen 1 -\u63a8\u9500 \u5458\u4eec tu\u012bxi\u0101oyu\u00e1nmen 1 -\u592a\u592a\u4eec t\u00e0itaimen 1 -\u4f10\u6728\u8005\u4eec f\u00e1m\u00f9zh\u011bmen 1 -\u52b3\u52a8\u6a21\u8303\u4eec l\u00e1od\u00f2ngm\u00f3f\u00e0nmen 1 -\u6c34 \u5175 \u4eec shu\u01d0b\u012bngmen 1 -\u4f7f\u8282\u4eec sh\u01d0ji\u00e9men 1 -\u6b4c\u5531\u5bb6\u4eec g\u0113ch\u00e0ngji\u0101men 1 -\u4e3b \u4efb \u4eec zh\u01d4r\u00e8nmen 1 -\u4e2a\u4f53\u6237\u4eec g\u00e8t\u01d0h\u00f9men 1 -\u6f14 \u8bf4 \u5bb6 \u4eec y\u01censhu\u014dji\u0101men 1 -\u97f3\u4e50\u5bb6\u4eec y\u012bnyu\u00e8ji\u0101men 1 -\u4eb2\u53cb\u4eec q\u012bny\u01d2umen 1 -\u529f\u81e3\u4eec g\u014dngch\u00e9nmen 1 -\u804c\u5458\u4eec zh\u00edyu\u00e1nmen 1 -\u59d0\u59d0\u4eec ji\u011bjiemen 1 -\u53f8\u673a\u4eec s\u012bj\u012bmen 1 -\u5236\u9020 \u5546\u4eec zh\u00ecz\u00e0osh\u0101ngmen 1 -\u82f1\u96c4\u4eec y\u012bngxi\u00f3ngmen 1 -\u753b\u5bb6\u4eec hu\u00e0ji\u0101men 1 -\u5916\u5546\u4eec w\u00e0ish\u0101ngmen 1 -\u60a3\u8005\u4eec hu\u00e0nzh\u011bmen 1 -\u6751\u90bb\u4eec c\u016bnl\u00ednmen 1 -\u536b\u58eb\u4eec w\u00e8ish\u00ecmen 1 -\u5927 \u81e3 \u4eec d\u00e0ch\u00e9nmen 1 -\u6280 \u672f \u5458 \u4eec j\u00ecsh\u00f9yu\u00e1nmen 1 -\u56fe\u8005\u4eec t\u00fazh\u011bmen 1 -\u6559\u5458\u4eec ji\u00e0oyu\u00e1nmen 1 -\u8001\u5927\u5a18\u4eec l\u01ceod\u00e0ni\u00e1ngmen 1 -\u6cd5 \u5b66\u5bb6\u4eec f\u01cexu\u00e9ji\u0101men 1 -\u7814\u7a76\u8005\u4eec y\u00e1nji\u016bzh\u011bmen 1 -\u6e38\u4eba\u4eec y\u00f3ur\u00e9nmen 1 -\u5143\u9996\u4eec yu\u00e1nsh\u01d2umen 1 -\u5a03\u5a03\u4eec w\u00e1wamen 1 -\u9752\u5c11\u5e74\u4eec q\u012bngsh\u00e0oni\u00e1nmen 1 -\u529b\u58eb\u4eec l\u00ecsh\u00ecmen 1 -\u552e\u8d27\u5458\u4eec sh\u00f2uhu\u00f2yu\u00e1nmen 1 -\u6559\u7ec3 \u4eec ji\u00e0oli\u00e0nmen 1 -\u91c7\u8d2d\u5458\u4eec c\u01ceig\u00f2uyu\u00e1nmen 1 -\u5973\u4eec n\u01damen 1 -\u6e38\u5ba2\u4eec y\u00f3uk\u00e8men 1 -\u70c8\u58eb\u4eec li\u00e8sh\u00ecmen 1 -\u897f\u85cf\u53f2\u5b66\u5bb6\u4eec x\u012bz\u00e0ngsh\u01d0xu\u00e9ji\u0101men 1 -\u8001\u5976\u5976\u4eec l\u01ceon\u01ceinaimen 1 -\u5927\u592b\u4eec d\u00e0if\u016bmen 1 -\u6c14\u8c61\u5b66\u5bb6\u4eec q\u00ecxi\u00e0ngxu\u00e9ji\u0101men 1 -\u5de5\u4f5c\u8005\u4eec g\u014dngzu\u00f2zh\u011bmen 
1 -\u53bf \u592a\u7237\u4eec xi\u00e0nt\u00e0iy\u00e9men 1 -\u5546\u8d29\u4eec sh\u0101ngf\u00e0nmen 1 -\u677e\u4eec s\u014dngmen 1 -\u4eb2\u4eba\u4eec q\u012bnr\u00e9nmen 1 -\u8001\u670b\u53cb\u4eec l\u01ceop\u00e9ngyoumen 1 -\u5bb6\u957f\u4eec ji\u0101zh\u01cengmen 1 -\u592b\u59bb\u4eec f\u016bq\u012bmen 1 -\u5b66\u5b50\u4eec xu\u00e9z\u01d0men 1 -\u4e1c\u9053\u4e3b\u4eec d\u014dngd\u00e0ozh\u01d4men 1 -\u7701 \u957f\u4eec sh\u011bngzh\u01cengmen 1 -\u540c \u4ec1 \u4eec t\u00f3ngr\u00e9nmen 1 -\u5c71\u6c34\u753b\u5bb6\u4eec sh\u0101nshu\u01d0hu\u00e0ji\u0101men 1 -\u6218\u7565\u5bb6\u4eec zh\u00e0nl\u00fc\u00e8ji\u0101men 1 -\u8463\u4e8b\u957f \u4eec d\u01d2ngsh\u00eczh\u01cengmen 1 -r \u8fd9\u513f zh\u00e8r 32 -\u4f1a\u513f hu\u00ecr 30 -\u54ea\u513f n\u01cer 18 -\u52b2\u513f j\u00ecnr 13 -\u4e8b\u513f sh\u00ecr 12 -\u70b9\u513f di\u01cenr 9 -\u90a3\u513f n\u00e0r 8 -\u4f19\u513f hu\u01d2r 7 -\u4e2a\u513f g\u00e8r 7 -\u6d3b\u513f hu\u00f3r 5 -\u9e1f\u513f ni\u01ceor 5 -\u5757\u513f ku\u00e0ir 4 -\u82b1\u513f hu\u0101r 3 -\u6cd5\u513f f\u01cer 3 -\u98ce\u513f f\u0113ngr 2 -\u5b57\u513f z\u00ecr 2 -\u6761\u513f ti\u00e1or 2 -\u5473\u513f w\u00e8ir 2 -\u7247\u513f pi\u00e0nr 2 -\u73a9\u513f w\u00e1nr 2 -\u5f2f\u513f w\u0101nr 2 -\u6837\u513f y\u00e0ngr 1 -\u8f67\u4f19\u513f y\u00e0hu\u01d2r 1 -\u8138 \u513f li\u01cenr 1 -\u5e72\u52b2\u513f g\u0101nj\u00ecnr 1 -\u5934\u513f t\u00f3ur 1 -\u4e07\u513f w\u00e0nr 1 -\u8bdd\u513f hu\u00e0r 1 -\u62a0\u513f k\u014dur 6 -\u7ef3\u5b50 sh\u00e9ngzi 6 -\u888b\u5b50 d\u00e0izi 6 -\u91d1\u5b50 j\u012bnzi 6 -\u5f71\u5b50 y\u01d0ngzi 6 -\u4f8b\u5b50 l\u00eczi 6 -\u67aa\u6746\u5b50 qi\u0101ngg\u0101nzi 6 -\u65a7\u5b50 f\u01d4zi 6 -\u53e3\u5b50 k\u01d2uzi 6 -\u6886\u5b50 b\u0101ngzi 5 -\u5e95\u5b50 d\u01d0zi 5 -\u889c\u5b50 w\u00e0zi 5 -\u8180\u5b50 b\u01cengzi 5 -\u55d3\u5b50 s\u01cengzi 5 -\u684c\u5b50 zhu\u014dzi 5 -\u7968\u5b50 pi\u00e0ozi 5 -\u80e1\u5b50 h\u00fazi 5 -\u8bdd \u5323\u5b50 hu\u00e0xi\u00e1zi 5 -\u5708\u5b50 qu\u0101nzi 4 -\u644a\u5b50 t\u0101nzi 4 -\u68cd\u5b50 g\u00f9nzi 4 -\u6746\u5b50 g\u0101nzi 4 -\u56ed\u5b50 yu\u00e1nzi 4 -\u9662\u5b50 yu\u00e0nzi 4 -\u7089\u5b50 l\u00fazi 4 -\u679c\u5b50 gu\u01d2zi 4 -\u7b77\u5b50 ku\u00e0izi 4 -\u8c79\u5b50 b\u00e0ozi 4 -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." 
}, { "text": "\u7247\u5b50 pi\u00e0nzi 4 -\u5200\u5b50 d\u0101ozi 4 -\u7bb1\u5b50 xi\u0101ngzi 3 -\u5323\u5b50 xi\u00e1zi 3 -\u88e4\u5b50 k\u00f9zi 3 -\u8925\u5b50 r\u00f9zi 3 -\u74f6\u5b50 p\u00edngzi 3 -\u80c6\u5b50 d\u01cenzi 3 -\u8c46\u5b50 d\u00f2uzi 3 -\u4e2a\u5b50 g\u00e8zi 3 -\u70b9\u5b50 di\u01cenzi 3 -\u72ee\u5b50 sh\u012bzi 3 -\u9635\u5b50 zh\u00e8nzi 3 -\u5c0f\u5b50 xi\u01ceozi 3 -\u8001\u5934\u5b50 l\u01ceot\u00f3uzi 3 -\u53f0\u5b50 t\u00e1izi 3 -\u53f6\u5b50 y\u00e8zi 3 -\u676f\u5b50 b\u0113izi 3 -\u5e18\u5b50 li\u00e1nzi 2 -\u68af\u5b50 t\u012bzi 2 -\u70c2\u644a\u5b50 l\u00e0nt\u0101nzi 2 -\u6bef\u5b50 t\u01cenzi 2 -\u778e\u5b50 xi\u0101zi 2 -\u6bfd\u5b50 ji\u00e0nzi 2 -\u71d5\u5b50 y\u00e0nzi 2 -\u5154\u5b50 t\u00f9zi 2 -\u8896\u5b50 xi\u00f9zi 2 -\u6930\u5b50 y\u0113zi 2 -\u7624\u5b50 li\u00fazi 2 -\u7334\u5b50 h\u00f3uzi 2 -\u76d2\u5b50 h\u00e9zi 2 -\u866b\u5b50 ch\u00f3ngzi 2 -\u874e\u5b50 xi\u0113zi 2 -\u6848\u5b50 \u00e0nzi 2 -\u53e5 \u5b50 j\u00f9zi 2 -\u6a21\u5b50 m\u00f3zi 2 -\u7a7a\u5b50 k\u00f2ngzi 2 -\u97ad\u5b50 bi\u0101nzi 2 -\u547d\u6839\u5b50 m\u00ecngg\u0113nzi 2 -\u66f2\u5b50 q\u01d4zi 2 -\u6cd5\u5b50 f\u01cezi 1 -\u7a97\u5b50 chu\u0101ngzi 1 -\u8c37\u5b50 g\u01d4zi 1 -\u54e8\u5b50 sh\u00e0ozi 1 -\u9776\u5b50 b\u01cezi 1 - \u9e82\u5b50 j\u01d0zi 1 -\u515c\u5b50 d\u014duzi 1 -\u5c16\u5b50 ji\u0101nzi 1 -\u5c94\u5b50 ch\u00e0zi 1 -\u6e38\u5b50 y\u00f3uzi 1 -\u8001\u6837\u5b50 l\u01ceoy\u00e0ngzi 1 -\u8902\u5b50 gu\u00e0zi 1 -\u4e71\u5b50 lu\u00e0nzi 1 -\u82c7\u5b50 w\u011bizi 1 -\u575d\u5b50 b\u00e0zi 1 -\u7a7a\u67b6\u5b50 k\u014dngji\u00e0zi 1 -\u94f6\u5b50 y\u00ednzi 1 -\u9600\u5b50 f\u00e1zi 1 -\u4e38\u5b50 w\u00e1nzi 1 -\u7b1b\u5b50 d\u00edzi 1 -\u68da\u5b50 p\u00e9ngzi 1 - \u8fab\u5b50 bi\u00e0nzi 1 -\u6817\u5b50 l\u00eczi 1 -\u67ff\u5b50 sh\u00eczi 1 -\u94fe\u5b50 li\u00e0nzi 1 -\u5934\u5b50 t\u00f3uzi 1 -\u8e44\u5b50 t\u00edzi 1 - \u68ad\u5b50 su\u014dzi 1 -\u9aa1\u5b50 lu\u00f3zi 1 -\u9a97\u5b50 pi\u00e0nzi 1 -\u67da\u5b50 y\u00f2uzi 1 -\u9524\u5b50 chu\u00edzi 1 -\u77f3\u78d9\u5b50 sh\u00edg\u01d4nzi 1 -\u7b95\u5b50 j\u012bzi 1 -\u69fd\u5b50 c\u00e1ozi 1 -\u952d\u5b50 d\u00ecngzi 1 -\u4e24\u53e3\u5b50 li\u01cengk\u01d2uzi 1 -\u693d\u5b50 chu\u00e1nzi 1 -\u5355\u5b50 d\u0101nzi 1 -\u526a\u5b50 ji\u01cenzi 1 -\u6863\u5b50 d\u00e0ngzi 1 -\u6c99\u82d1\u5b50 sh\u0101yu\u00e0nz\u01d0 1 -\u9762\u5b50 mi\u00e0nzi 1 -\u7f28\u5b50 y\u012bngzi 1 -\u53f7\u5b50 h\u00e0ozi 1 -\u76ae\u5939\u5b50 p\u00edji\u0101zi 1 -\u956f\u5b50 zhu\u00f3zi 1 -\u5352\u5b50 z\u00fazi 1 -\u6a59\u5b50 ch\u00e9ngzi 1 -\u96c6\u5b50 j\u00edzi 1 -\u9f13\u5b50 g\u01d4zi 1 -\u6247\u5b50 sh\u0101nzi 1 -\u6876\u5b50 t\u01d2ngzi 1 -\u6843\u5b50 t\u00e1ozi 1 -\u811a\u8116\u5b50 ji\u01ceob\u00f3zi 1 -\u53d4\u5b50 sh\u016bzi 1 -\u5e84\u5b50 zhu\u0101ngzi 1 -\u80d6\u5b50 p\u00e0ngzi 1 -\u674f\u5b50 x\u00ecngzi 1 -\u72cd\u5b50 p\u00e1ozi 1 -\u53f0\u67f1\u5b50 t\u00e1izh\u00f9zi 1 -\u4efd\u5b50 f\u00e8nzi 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." 
}, { "text": "But see alsoPlag [1999] for a discussion of how dictionary data can be useful in a study of productivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In colloquial speech, -men can occasionally attach to some animal nouns (e.g., g\u01d2urmen \u72d7\u513f\u4eec 'doggies').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The PH Corpus can be downloaded from the ftp server of the Centre for Cognitive Science at University of Edinburgh.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The use of the PH Corpus in the present study is solely due to the fact that the computer programs currently used were written for the PH Corpus. It must be noted, however, that findings from a larger, more balanced corpus do not necessarily minimize findings from a smaller, less balanced corpus. Findings from both the PH Corpus (a small corpus of newspaper texts) and the Sinica Corpus (a large corpus of a variety of texts) are of interest because corpora of different types enable a comparison of findings by the corpus type. 9 Note in these examples that the tone of -r and -zi is retained (i.e., -\u00e9r and -z\u01d0, respectively). -r is originally -\u00e9r, and it becomes -r as a suffix, as a result of losing its syllabicity[Norman, 1988: 114].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Baayen [1992] andBaayen and Lieber [1991] for a discussion of the global productivity of an affix (expressed as P*) based on a two-dimensional analysis of p and V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The statistical techniques for obtaining \u015c, which involve an extended version of Zipf's law, are beyond the scope of this paper. For more details, the reader is referred toBaayen [1989.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These sub-corpora would be labeled retained and deleted (hence the term deleted estimation) under the original deleted estimation method. However, in the present context, we can simplify the argument by using the labels Corpus A and Corpus B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Baayen [2001] for an in-depth discussion of techniques for measuring variances among segments of a corpus.16 The procedure described here is thanks to suggestions byBaayen [personal communication].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author wishes to thank Harald Baayen, Richard Sproat, Martin Chodorow, and the anonymous reviewers for their insightful comments on the first draft of this paper. 
Any errors are the responsibility of the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Below are the words of the Mandarin suffixes and their token frequencies in the PH Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix: Words of the Mandarin Suffixes in the PH Corpus", "sec_num": null }, { "text": "\u53d8\u5316 bi\u00e0nhu\u00e0 495 -\u73b0\u4ee3\u5316 xi\u00e0nd\u00e0ihu\u00e0 473 -\u6df1\u5316 sh\u0113nhu\u00e0 323 -\u81ea\u7531\u5316 z\u00ecy\u00f3uhu\u00e0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-hua", "sec_num": null } ], "bib_entries": { "BIBREF2": { "ref_id": "b2", "title": "Morphological Productivity and Phonological Transparency", "authors": [ { "first": "F", "middle": [], "last": "Anshen", "suffix": "" }, { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" } ], "year": 1981, "venue": "Canadian Journal of Linguistics", "volume": "26", "issue": "", "pages": "63--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anshen, F., & Aronoff, M. \"Morphological Productivity and Phonological Transparency.\" Canadian Journal of Linguistics, 26, 1981, 63-72.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Producing Morphologically Complex Words", "authors": [ { "first": "F", "middle": [], "last": "Anshen", "suffix": "" }, { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" } ], "year": 1988, "venue": "Linguistics", "volume": "26", "issue": "", "pages": "641--655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anshen, F., & Aronoff, M. \"Producing Morphologically Complex Words.\" Linguistics, 26, 1988, 641-655.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word Formation in Generative Grammar", "authors": [ { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aronoff, M. Word Formation in Generative Grammar. Cambridge, MA: MIT Press, 1976.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Relevance of Productivity in a Synchronic Description of Word Formation", "authors": [ { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" } ], "year": 1980, "venue": "Historical Morphology. The Hague: Mouton", "volume": "", "issue": "", "pages": "71--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aronoff, M. \"The Relevance of Productivity in a Synchronic Description of Word Formation.\" In J. Fisiak (Ed.), Historical Morphology. The Hague: Mouton, 1980, 71-82.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Potential Words, Actual Words, Productivity and Frequency", "authors": [ { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" } ], "year": 1983, "venue": "Proceedings of the International Congress of Linguists", "volume": "13", "issue": "", "pages": "163--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aronoff, M. 
\"Potential Words, Actual Words, Productivity and Frequency.\" Proceedings of the International Congress of Linguists, 13, 1983, 163-171.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Morphology and the Lexicon: Lexicalization and Productivity", "authors": [ { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" }, { "first": "F", "middle": [], "last": "Anshen", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "237--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aronoff, M., & Anshen, F. \"Morphology and the Lexicon: Lexicalization and Productivity.\" In A. Spencer & A. M. Zwicky (Eds.), The Handbook of Morphology. Oxford, UK: Blackwell Publishers, 1998, 237-247.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Testing Morphological Productivity", "authors": [ { "first": "M", "middle": [], "last": "Aronoff", "suffix": "" }, { "first": "R", "middle": [], "last": "Schvaneveldt", "suffix": "" } ], "year": 1978, "venue": "Annals of the New York Academy of Sciences", "volume": "318", "issue": "", "pages": "106--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aronoff, M., & Schvaneveldt, R. \"Testing Morphological Productivity.\" Annals of the New York Academy of Sciences, 318, 1978, 106-114.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Corpus-Based Study of Morphological Productivity: Statistical Analysis and Psychological Interpretation. Doctoral dissertation", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H. A Corpus-Based Study of Morphological Productivity: Statistical Analysis and Psychological Interpretation. Doctoral dissertation, Free University, Amsterdam, 1989.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Quantitative Aspects of Morphological Productivity", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 1991, "venue": "Yearbook of Morphology", "volume": "", "issue": "", "pages": "109--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H. \"Quantitative Aspects of Morphological Productivity.\" In G. Booij & J. van Marle (Eds.), Yearbook of Morphology 1991. Dordrecht: Kluwer, 1992, 109-149.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On Frequency, Transparency and Productivity", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 1992, "venue": "Yearbook of Morphology", "volume": "", "issue": "", "pages": "181--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H. \"On Frequency, Transparency and Productivity.\" In G. Booij & J. van Marle (Eds.), Yearbook of Morphology 1992. Dordrecht: Kluwer, 1993, 181-208.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Productivity and English Word-Formation: A Corpus-Based Study", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" }, { "first": "R", "middle": [], "last": "Lieber", "suffix": "" } ], "year": 1991, "venue": "Linguistics", "volume": "29", "issue": "", "pages": "801--843", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H., & Lieber, R. 
\"Productivity and English Word-Formation: A Corpus-Based Study.\" Linguistics, 29, 1991, 801-843.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Chronicling the Times: Productive Lexical Innovations in an English Newspaper", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" }, { "first": "A", "middle": [], "last": "Renouf", "suffix": "" } ], "year": 1996, "venue": "Language", "volume": "72", "issue": "", "pages": "69--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. H., & Renouf, A. \"Chronicling the Times: Productive Lexical Innovations in an English Newspaper.\" Language, 72, 1996, 69-96.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Morphological Productivity", "authors": [ { "first": "L", "middle": [], "last": "Bauer", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bauer, L. Morphological Productivity. Cambridge, UK: Cambridge University Press, 2001.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Word Formation in Old Chinese", "authors": [ { "first": "W", "middle": [ "H" ], "last": "Baxter", "suffix": "" }, { "first": "L", "middle": [], "last": "Sagart", "suffix": "" } ], "year": 1998, "venue": "New Approaches to Chinese Word Formation: Morphology, Phonology and Lexicon in Modern and Ancient Chinese", "volume": "", "issue": "", "pages": "35--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baxter, W. H., & Sagart, L. \"Word Formation in Old Chinese.\" In J. L. Packard (Ed.), New Approaches to Chinese Word Formation: Morphology, Phonology and Lexicon in Modern and Ancient Chinese. Berlin: Mouton de Gruyter, 1998, 35-76.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dutch Morphology: A Study of Word Formation in Generative Grammar", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Booij", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Booij, G. E. Dutch Morphology: A Study of Word Formation in Generative Grammar. Dordrecht: Foris, 1977.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Modern Chinese: History and Sociolinguistics", "authors": [ { "first": "P", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, P. Modern Chinese: History and Sociolinguistics. Cambridge University Press, 1999.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An Empirical Study of Smoothing Techniques for Language Modeling", "authors": [ { "first": "S", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, S. F., & Goodman, J. An Empirical Study of Smoothing Techniques for Language Modeling (Tech. Rep. No. 10-98). 
Cambridge, MA: Harvard University, Center for Research in Computing Technology, 1998.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Comparison of the Enhanced Good-Turing and Deleted Estimation Methods for Estimating Probabilities of English Bigrams", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" } ], "year": 1991, "venue": "Computer Speech and Language", "volume": "5", "issue": "", "pages": "19--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. W., & Gale, W. A. \"A Comparison of the Enhanced Good-Turing and Deleted Estimation Methods for Estimating Probabilities of English Bigrams.\" Computer Speech and Language, 5, 1991, 19-54.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Productivity in Word Formation", "authors": [ { "first": "A", "middle": [], "last": "Cutler", "suffix": "" } ], "year": 1980, "venue": "Papers from the Sixteenth Regional Meeting of the Chicago Linguistic Society", "volume": "", "issue": "", "pages": "45--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cutler, A. \"Productivity in Word Formation.\" Papers from the Sixteenth Regional Meeting of the Chicago Linguistic Society. Chicago, IL: Chicago Linguistic Society, 1980, 45-51.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The Population Frequencies of Species and the Estimation of Population Parameters", "authors": [ { "first": "I", "middle": [ "J" ], "last": "Good", "suffix": "" } ], "year": 1953, "venue": "Biometrika", "volume": "40", "issue": "", "pages": "237--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Good, I. J. \"The Population Frequencies of Species and the Estimation of Population Parameters.\" Biometrika, 40, 1953, 237-264.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "PH: A Chinese Corpus", "authors": [ { "first": "J", "middle": [], "last": "Guo", "suffix": "" } ], "year": 1993, "venue": "Communications of COLIPS", "volume": "3", "issue": "1", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, J. \"PH: A Chinese Corpus.\" Communications of COLIPS, 3 (1), 1993, 45-48.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Error-Driven Learning of Chinese Word Segmentation", "authors": [ { "first": "J", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "C", "middle": [], "last": "Brew", "suffix": "" } ], "year": 1998, "venue": "12th Pacific Conference on Language and Information", "volume": "", "issue": "", "pages": "218--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hockenmaier, J., & Brew, C. \"Error-Driven Learning of Chinese Word Segmentation.\" In J. Guo, K. T. Lua, & J. Xu (Eds.), 12th Pacific Conference on Language and Information. Singapore: Chinese and Oriental Languages Processing Society, 1998, 218 -229.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Probability Distribution Estimation for Sparse Data", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1985, "venue": "IBM Technical Disclosure Bulletin", "volume": "28", "issue": "", "pages": "2591--2594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelinek, F., & Mercer, R. 
\"Probability Distribution Estimation for Sparse Data.\" IBM Technical Disclosure Bulletin, 28, 1985, 2591-2594.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Mandarin Chinese: A Functional Reference Grammar", "authors": [ { "first": "C", "middle": [], "last": "Li", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, C., & Thompson, S. A. Mandarin Chinese: A Functional Reference Grammar. Berkeley, CA: University of California Press, 1981.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A Grammar of Modern Chinese", "authors": [ { "first": "H", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, H. A Grammar of Modern Chinese. LINCOM EUROPA, 2001.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., & Sch\u00fctze, H. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press, 1999.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Morphology of Chinese: A Linguistic and Cognitive Approach", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Packard", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Packard, J. L. The Morphology of Chinese: A Linguistic and Cognitive Approach. Cambridge, UK: Cambridge University Press, 2000.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Morphological Productivity: Structural Constraints in English Derivation", "authors": [ { "first": "I", "middle": [], "last": "Plag", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Plag, I. Morphological Productivity: Structural Constraints in English Derivation. Berlin: Mouton de Gruyter, 1999.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The Languages of China", "authors": [ { "first": "R", "middle": [ "S" ], "last": "Ramsey", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramsey, R. S. The Languages of China. Princeton, NJ: Princeton University Press, 1987.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Produktiviteit als Morfologisch Fenomeen", "authors": [ { "first": "H", "middle": [], "last": "Schultink", "suffix": "" } ], "year": 1961, "venue": "Forum der Letteren", "volume": "2", "issue": "", "pages": "110--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schultink, H. 
\"Produktiviteit als Morfologisch Fenomeen.\" Forum der Letteren, 2, 1961, 110-125.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Morphological Theory: An Introduction to Word Structure in Generative Grammar", "authors": [ { "first": "A", "middle": [], "last": "Spencer", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spencer, A. Morphological Theory: An Introduction to Word Structure in Generative Grammar. Cambridge, UK: Cambridge University Press, 1991.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Corpus-Based Methods in Chinese Morphology", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2002, "venue": "Tutorial given at COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. \"Corpus-Based Methods in Chinese Morphology.\" Tutorial given at COLING, Taipei, Taiwan, 2002.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A Corpus-Based Analysis of Mandarin Nominal Root Compound", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1996, "venue": "Journal of East Asian Linguistics", "volume": "5", "issue": "", "pages": "49--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., & Shih, C. \"A Corpus-Based Analysis of Mandarin Nominal Root Compound.\" Journal of East Asian Linguistics, 5, 1996, 49-71.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A Stochastic Finite-State Word-Segmentation Algorithm for Chinese", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "C", "middle": [], "last": "Shih", "suffix": "" }, { "first": "W", "middle": [], "last": "Gale", "suffix": "" }, { "first": "N", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "3", "pages": "66--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., Shih, C., Gale, W., & Chang, N. \"A Stochastic Finite-State Word-Segmentation Algorithm for Chinese.\" Computational Linguistics, 22 (3), 1996, 66-73.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Basis Technology", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Cambridge", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cambridge, MA: Basis Technology, 1999.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "On the Paradigmatic Dimension of Morphological Productivity. Dordrecht: Foris", "authors": [ { "first": "J", "middle": [], "last": "Van Marle", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van Marle, J. On the Paradigmatic Dimension of Morphological Productivity. Dordrecht: Foris, 1985.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "word. What the present data of -r indicate, then, is that -r words are characterized by a low degree of lexicalization. The low degree of lexicalization of -r words and the relatively large number of hapaxes (as compared with -tou) suggest that the word formation rule of -r is active. 
00.010.020.030.040.050.060.070.", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "tde [r(5) = (.753 \u2192) 9.99, p < .01].", "num": null }, "FIGREF4": { "type_str": "figure", "uris": null, "text": "the elements of Corpus B in Part (b) of Table 4: all types [r(5) = 1.0, p < .01], unseen types [r(5) = 1.0, p < .01], and P tde [r(5) = .999, p < .01].", "num": null }, "FIGREF5": { "type_str": "figure", "uris": null, "text": "The word-token frequency distribution of unseen types in the two sub-corpora of the PH Corpus, Corpus A and Corpus B, averaged over 1000 random splits (the horizontal axis shows the word-token frequency category, and the vertical axis shows the number of word types in each frequency category; the letter following each suffix in the legend indicates from which sub-corpus the data are drawn; the order of the suffixes in the legend (from top down) corresponds to the order of bars in each frequency category (from left to right)).", "num": null }, "FIGREF6": { "type_str": "figure", "uris": null, "text": "zhu\u0101nji\u0101men 117 -\u59d4\u5458\u4eec w\u011biyu\u00e1nmen 109 -\u5de5\u4eba\u4eec g\u014dngr\u00e9nmen 75 -\u540c\u5fd7\u4eec t\u00f3ngzh\u00ecmen 72 -\u5b69\u5b50\u4eec h\u00e1izimen 64 -\u6218\u58eb\u4eec zh\u00e0nsh\u00ecmen 59 -\u804c\u5de5\u4eec zh\u00edg\u014dngmen 39 -\u540c\u5b66\u4eec t\u00f3ngxu\u00e9men 32 -\u961f\u5458\u4eec du\u00ecyu\u00e1nmen 31 -\u59d1\u5a18\u4eec g\u016bniangmen 26 -\u5ba2\u4eba\u4eec k\u00e8renmen 24 -\u8bb0\u8005\u4eec j\u00eczh\u011bmen 23 -\u79d1\u5b66\u5bb6\u4eec k\u0113xu\u00e9ji\u0101men 23 -\u8001\u4eba\u4eec l\u01ceor\u00e9nmen 23 -\u519c\u6c11\u4eec n\u00f3ngm\u00ednmen 22 -\u5b66\u751f\u4eec xu\u00e9shengmen 21 -\u5206 \u6790 \u5bb6 \u4eec f\u0113nx\u012bji\u0101men 21 -\u59d0\u59b9\u4eec ji\u011bm\u00e8imen 19 -\u670b\u53cb\u4eec p\u00e9ngyoumen 18 -\u827a\u672f\u5bb6\u4eec y\u00ecsh\u00f9ji\u0101men 16 -\u5e72\u90e8\u4eec g\u00e0nb\u00f9men 16 -\u5e02\u6c11\u4eec sh\u00ecm\u00ednmen", "num": null }, "FIGREF7": { "type_str": "figure", "uris": null, "text": "\u729f\u52b2\u513f ji\u00e0ngj\u00ecnr 1 -\u4fe1\u513f x\u00ecnr 1 -\u585e\u513f s\u00e8r 1 -\u4e3b\u513f zh\u01d4r 1 -\u82af\u513f x\u012bnr 1 -\u5f53\u513f d\u0101ngr 1 -tou \u52bf\u5934 sh\u00ectou 133 -\u7801\u5934 m\u01cetou 99 -\u8857\u5934 ji\u0113t\u00f3u 96 -\u77f3\u5934 sh\u00edtou 33 -\u7f50\u5934 gu\u00e0ntou 30 -\u955c\u5934 j\u00ecngt\u00f3u 26 -\u5e74\u5934 ni\u00e1nt\u00f3u 20 -\u62f3\u5934 qu\u00e1ntou 18 -\u9992\u5934 m\u00e1ntou 16 -\u7095\u5934 k\u00e0ngt\u00f3u 14 -\u8001\u5934 l\u01ceot\u00f3u 12 -\u5fc3\u5934 x\u012bnt\u00f3u 11 -\u6728\u5934 m\u00f9tou 9 -\u9aa8\u5934 g\u01d4tou 9 -\u6e90\u5934 yu\u00e1nt\u00f3u 8 -\u53e3\u5934 k\u01d2ut\u00f3u 8 -\u82d7\u5934 mi\u00e1otou 7 -\u5730\u5934 d\u00ect\u00f3u 7 -\u6307\u5934 zh\u01d0tou 7 -\u9504\u5934 ch\u00fatou 5 -\u6865\u5934 qi\u00e1ot\u00f3u 5 -\u90e8\u5934 b\u00f9t\u00f3u 4 -\u6795\u5934 zh\u011bntou 3 -\u65a7\u5934 f\u01d4tou 2 -\u5148\u5934 xi\u0101nt\u00f3u 2 -\u811a \u8dbe\u5934 ji\u01ceozh\u01d0tou 2 -\u91cc\u5934 l\u01d0tou 2 -\u98ce\u5934 f\u0113ngtou 2 -\u624b\u6307\u5934 sh\u01d2uzh\u01d0t\u00f3u 2 -\u7281\u5934 l\u00edt\u00f3u 2 -\u6ee9\u5934 t\u0101nt\u00f3u 1 -\u4e2b\u5934 y\u0101tou 1 -\u7a9d\u7a9d\u5934 w\u014dw\u014dt\u00f3u 1 -\u5173\u5934 gu\u0101nt\u00f3u 1 -\u7709\u5934 m\u00e9it\u00f3u 1 -\u4e24\u5934 li\u01cengt\u00f3u 1 -zi \u5b69\u5b50 h\u00e1izi 457 -\u79cd\u5b50 zh\u01d2ngzi 146 -\u513f\u5b50 \u00e9rzi 131 -\u65e5\u5b50 r\u00eczi 129 
-\u59bb\u5b50 q\u012bzi 112 -\u73ed \u5b50 b\u0101nzi 105 -\u8def\u5b50 l\u00f9zi 63 -\u7bee\u5b50 l\u00e1nzi 58 -\u4f19\u5b50 hu\u01d2zi 53 -\u623f\u5b50 f\u00e1ngzi 50 -\u5e3d\u5b50 m\u00e0ozi 37 -\u4e00\u4e0b\u5b50 y\u00edxi\u00e0zi 29 -\u6837\u5b50 y\u00e0ngzi 27 -\u8f88\u5b50 b\u00e8izi 25 -\u997a\u5b50 ji\u01ceozi 23 -\u8d29\u5b50 f\u00e0nzi 22 -\u62c5\u5b50 d\u00e0nzi 21 -\u5b59\u5b50 s\u016bnzi 20 -\u724c\u5b50 p\u00e1izi 20 -\u809a\u5b50 d\u00f9zi 19 -\u6b65\u5b50 b\u00f9zi 18 -\u6751\u5b50 c\u016bnzi 18 -\u4e00\u63fd\u5b50 y\u012bl\u01cenz\u01d0 16 -\u6854\u5b50 j\u00fazi 16 -\u8116\u5b50 b\u00f3zi 15 -\u8eab\u5b50 sh\u0113nz\u01d0", "num": null }, "TABREF0": { "num": null, "type_str": "table", "html": null, "text": "With all the occurrences of a suffix found in the corpus, V is the sum of types, N is the sum of tokens, n 1 is the number of hapaxes, and p is the productivity index of the suffix. The suffixes are sorted in descending order by p.", "content": "
suffix    V      N       n 1     p
-r        35     184     14      0.076
-men      219    2324    101     0.043
-zi       177    2130    62      0.029
-hua      209    3366    93      0.028
-tou      36     600     6       0.010
Note.
" }, "TABREF2": { "num": null, "type_str": "table", "html": null, "text": "", "content": "
suffix    all types (average)    unseen types (average)    P tde
-men      149                    70                        0.470
-hua      144                    65                        0.451
-r        24.5                   10.5                      0.429
-zi       130.5                  46.5                      0.356
-tou      29.5                   6.5                       0.220
" }, "TABREF4": { "num": null, "type_str": "table", "html": null, "text": "Each value in Part (b) is the mean of 1,000 random splits. The suffixes in each section are sorted in descending order by p. In Corpus B of Part (a), the p values of -tou and -hua expressed to the fourth decimal place are 0.0313 and 0.0311, respectively.", "content": "
The result of the hapax-based measure applied to the two sub-corpora of the PH Corpus, Corpus A and Corpus B, with and without randomization of words
" }, "TABREF5": { "num": null, "type_str": "table", "html": null, "text": "", "content": "
The result of the simplified P tde measure applied to the two sub-corpora of the PH Corpus, Corpus A and Corpus B, with and without randomization of words
(a) Without randomization, a single split

Corpus A                                  Corpus B
suffix    all    unseen    P tde          suffix    all    unseen    P tde
-men      165    86        0.521          -hua      140    61        0.436
-r        29     15        0.517          -men      133    54        0.406
-hua      148    69        0.466          -r        20     6         0.300
-zi       142    58        0.408          -zi       119    35        0.294
-tou      30     7         0.233          -tou      29     6         0.207

(b) With randomization, the mean of 1,000 splits

Corpus A                                  Corpus B
suffix    all    unseen    P tde          suffix    all    unseen    P tde
-men      158    62        0.394          -men      157    61        0.389
-hua      154    57        0.372          -hua      152    55        0.364
-r        26     9         0.356          -r        26     9         0.342
-zi       138    40        0.291          -zi       137    39        0.287
-tou      31     5         0.160          -tou      31     5         0.163
" } } } }