{
"paper_id": "O13-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:03:46.339869Z"
},
"title": "Observing Features of PTT Neologisms: A Corpus-driven Study with N-gram Model",
"authors": [
{
"first": "Tsun-Jui",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Graduate Institute of Linguistics National Taiwan University",
"location": {}
},
"email": ""
},
{
"first": "Shu-Kai",
"middle": [],
"last": "Hsieh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": "shukaihsieh@ntu.edu.tw"
},
{
"first": "Laurent",
"middle": [],
"last": "Prevot",
"suffix": "",
"affiliation": {
"laboratory": "Laboratoire Parole et Langage Universit\u00e9",
"institution": "",
"location": {
"settlement": "Aix-Marseille"
}
},
"email": "laurent.prevot@lpl-aix.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "PTT (\u6279\u8e22\u8e22) is one of the largest web forums in Taiwan. In the last few years, its importance has been growing rapidly because it has been widely mentioned by most of the mainstream media. It is observed that its influence reflects not only on the society but also on the language novel use in Taiwan. In this research, a pipeline processing system in Python was developed to collect the data from PTT, and the n-gram model with proposed linguistic filter are adopted with the attempt to capture two-character neologisms emerged in PTT. Evaluation task with 25 subjects was conducted against the system's performance with the calculation of Fleiss' kappa measure. Linguistic discussion as well as the comparison with time series analysis of frequency data are provided. It is hoped that the detection of neologisms in PTT can be improved by observing the features, which may even facilitate the prediction of the neologisms in the future.",
"pdf_parse": {
"paper_id": "O13-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "PTT (\u6279\u8e22\u8e22) is one of the largest web forums in Taiwan. In the last few years, its importance has been growing rapidly because it has been widely mentioned by most of the mainstream media. It is observed that its influence reflects not only on the society but also on the language novel use in Taiwan. In this research, a pipeline processing system in Python was developed to collect the data from PTT, and the n-gram model with proposed linguistic filter are adopted with the attempt to capture two-character neologisms emerged in PTT. Evaluation task with 25 subjects was conducted against the system's performance with the calculation of Fleiss' kappa measure. Linguistic discussion as well as the comparison with time series analysis of frequency data are provided. It is hoped that the detection of neologisms in PTT can be improved by observing the features, which may even facilitate the prediction of the neologisms in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A neologism in general refers to \"a newly coined term, word, or phrase, that may be in the process of entering common use, but has not yet been accepted into mainstream language\" (Levchenko, 2010) 1 . It is closely related to the unknown words or out-of-vocabulary in the field of Speech and Natural Language Processing, but with the nuance that the latter is often formally defined by its non-existence in a given vocabulary repository. With the emergence of voluminous data on the web and fast-developing technologies, never before has our world been facing with such an overwhelming mass of neologisms. Therefore, the description and detection of neologism has become an important research topic in the recent years.",
"cite_spans": [
{
"start": 197,
"end": 198,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, we aim to begin with a corpus-driven approach in exploring the linguistic features of Chinese neologisms. We use PTT as our corpus data. As widely known, PTT is one of the largest web forums in Taiwan that contain users from various backgrounds and ages. In these years, its importance has been growing rapidly because it has been widely mentioned by most of the mainstream media in Taiwan. As Magistry (2012) suggested, \"PTT should be seen as an extension of the modern society in Taiwan.\" This implies that PTT has great influence not only on the society but also the novel language use in Taiwan, which motives this research to exploit PTT as data source. Section 2 explains the pipeline framework developed for data crawling and pre-processing, and the lexicon and filter for capturing two-character neologisms in PTT. Section 3 introduces the methodological part, where the rationale of our proposed 'diachronic n-gram model' is introduced and classification results are shown. Section 4 provides the discussion on the evaluation task as well as explanation from linguistic perspective. A time series analysis on the extracted diachronic n-gram data is conducted for further investigation. Section 5 concludes this paper.",
"cite_spans": [
{
"start": 409,
"end": 424,
"text": "Magistry (2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Taiwan. It is a non-profit, free and open online community, and it is claimed to be one of the largest BBS sites in the world. PTT contains over 20,000 discussion boards with more than 1.5 million registered users, and over 10,000 articles are posted every day. The screenshot of PTT is shown in Figure 1 . Figure 2 shows the number of tokens in the corpus per year, and Table 1 provides some basic meta-information ",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 307,
"end": 315,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 371,
"end": 378,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "PTT PTT (\u6279\u8e22\u8e22) 2 , founded in 1999, is a terminal-based bulletin board system (BBS) based in",
"sec_num": "2.1."
},
{
"text": "In this research, the lexicon was used for filtering out existed words. It is comprised of The Revised Chinese Dictionary (\u6559\u80b2\u90e8\u91cd\u7de8\u570b\u8a9e\u8fad\u5178\u4fee\u8a02\u672c, TRCD) 3 and Taiwan Spoken Mandarin Wordlist (\u4e2d\u7814\u9662\u6f22\u8a9e\u53e3\u8a9e\u8a9e\u6599\u5eab\u8a5e\u983b\u8868, TSMW) 4 . TRCD was compiled by Ministry of Education with 139,401 words and expressions, and TSMW was collected by Academia Sinica with 16,683 entries. Since two-character words are dominant in modern Chinese, as a first step, only two-character words will be chosen. ",
"cite_spans": [
{
"start": 143,
"end": 144,
"text": "3",
"ref_id": "BIBREF2"
},
{
"start": 203,
"end": 204,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon",
"sec_num": "2.2."
},
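The lexicon construction described above can be sketched in a few lines of Python (the paper's pipeline language). This is a minimal illustration, not the authors' code: the file names are hypothetical placeholders for the downloaded TRCD and TSMW word lists, assumed to contain one entry per line.

```python
# Sketch: build the two-character lexicon used to filter out existing words.
# "trcd_entries.txt" and "tsmw_entries.txt" are hypothetical file names.

def load_wordlist(path):
    """Read one dictionary entry per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

trcd = load_wordlist("trcd_entries.txt")  # Revised Chinese Dictionary (TRCD)
tsmw = load_wordlist("tsmw_entries.txt")  # Taiwan Spoken Mandarin Wordlist (TSMW)

# Only two-character entries are kept, since bigrams are the unit of analysis.
lexicon = {w for w in trcd | tsmw if len(w) == 2}
print(len(trcd), len(tsmw), len(lexicon))
```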
{
"text": "We have developed a pipeline framework for the corpus-driven analysis. A crawler module collects the textual data and meta-information from the PTT; a cleaner module removes the unnecessary information of the retrieved raw data; an n-gram module creates bigram candidates and compares them with the lexicon; and finally a linguistic module filters out some noisy data via encodes heuristic rules 5 . The resulting bigrams are thus divided into three basic categories: words, nonwords and potential neologisms. The main steps can be listed as follows and illustrated in Figure 3 :",
"cite_spans": [
{
"start": 396,
"end": 397,
"text": "5",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 569,
"end": 577,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Data pre-processing",
"sec_num": "2.3."
},
{
"text": "Step 1. Transforming all the tokens into bigrams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data pre-processing",
"sec_num": "2.3."
},
{
"text": "Step 2. Exploiting the lexicon to exclude existed words from out-of-vocabulary (OOV) Step 3. Linguistics rules were applied to separate OOV into nonwords and potential neologisms Figure 3 . Data processing flowchart",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Data pre-processing",
"sec_num": "2.3."
},
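A condensed sketch of Steps 1-3 follows. It assumes the cleaned corpus is an iterable of text lines and reuses the two-character lexicon above; the small set of function characters standing in for the linguistic filter is illustrative only, not the rule set based on Li and Thompson (1989) that the authors actually encode.

```python
import re
from collections import Counter

# Illustrative stand-in for the linguistic filter: bigrams containing common
# function words or affixes are treated as nonwords.
FUNCTION_CHARS = set("的了嗎呢吧個們我你他她它")

def bigrams(line):
    """Step 1: turn each run of Chinese characters into overlapping bigrams."""
    for chunk in re.findall(r"[\u4e00-\u9fff]+", line):
        for i in range(len(chunk) - 1):
            yield chunk[i:i + 2]

def classify(bigram, lexicon):
    """Steps 2-3: existing word, nonword (filtered out), or potential neologism."""
    if bigram in lexicon:
        return "word"
    if any(ch in FUNCTION_CHARS for ch in bigram):
        return "nonword"
    return "potential neologism"

def run_pipeline(lines, lexicon):
    """Count every bigram in the corpus under its category."""
    counts = {"word": Counter(), "nonword": Counter(), "potential neologism": Counter()}
    for line in lines:
        for bg in bigrams(line):
            counts[classify(bg, lexicon)][bg] += 1
    return counts
```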
{
"text": "Most previous works on unknown word / OOV extraction exploited complicated morphological rules and various machine learning techniques (Chen and Ma, 2002) . In order to utilize the contextual information, as much linguistic resource (such as syntax, semantics, morphology and world knowledge) as possible were explored. It is worth mentioning that why the (naive) n-gram model is adopted in this study.",
"cite_spans": [
{
"start": 135,
"end": 154,
"text": "(Chen and Ma, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "An n-gram is a contiguous sequence of n items from a given sequence of text. In Mandarin Chinese, the items correspond to individual characters. The n-grams of size 2, viz. bigrams, will be the major focus in this research. A bigram is a sequence of two adjacent elements in a string of tokens. For example, there are five bigrams in \u4eca\u5929\u5929\u6c23\u5f88\u597d, which are \u4eca\u5929, \u5929\u5929, \u5929\u6c23, \u6c23\u5f88 and \u5f88\u597d. In this paper, we further propose a notion of 'diachronic n-gram' by leveraging diachronic frequency data in PTT, whose advantages can be explicated by the following points: First, this model does not have to presume a word segmentator. The reason why prominent segmentation system such as CKIP 8 was not used to segmentate words is that language used on PTT contains too many fragments, novel linguistic forms, jargons and slangs, causing the low accuracy of the performance. Take the following sentences as an example. Sentence 1is a sentence extracted from the data, and sentence (2) is the segmentation result by CKIP. (Bybee, 2007) . A usage-based perspective on language also argues that language as a complex adaptive system is to be viewed as emergent from the repeated application of underlying process, rather than given a priori or by design (Hopper, 1987) . Instead of rule-based normalization, modeling lexical change with empirical data support could also bypass the thorny wordhood issue in Chinese. In addition, time series statistical analysis and other distributional models can bring their contribution in this scenario too.",
"cite_spans": [
{
"start": 670,
"end": 671,
"text": "8",
"ref_id": "BIBREF7"
},
{
"start": 998,
"end": 1011,
"text": "(Bybee, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 1228,
"end": 1242,
"text": "(Hopper, 1987)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram in Diachronic Contexts",
"sec_num": "3.1."
},
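The diachronic aspect only requires keeping the bigram counts separate per year. A minimal sketch, assuming each post carries a year label and reusing the `bigrams` generator from the previous sketch:

```python
from collections import defaultdict, Counter

def diachronic_counts(posts):
    """posts: iterable of (year, text) pairs -> {bigram: Counter({year: frequency})}."""
    table = defaultdict(Counter)
    for year, text in posts:
        for bg in bigrams(text):
            table[bg][year] += 1
    return table

# The five bigrams of the running example:
print(list(bigrams("今天天氣很好")))  # ['今天', '天天', '天氣', '氣很', '很好']
```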
{
"text": "Based on the considerations and framework mentioned above, the data was categorized into words, nonwords and potential neologisms, whose frequency data are plotted as in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.2."
},
{
"text": "In Figure 4 Generally, a first look at the data shows that the overall frequency of words is higher than nonwords and potential neologisms, and the frequency of potential neologisms is slightly lower than nonwords. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Classification",
"sec_num": "3.2."
},
{
"text": "In order to evaluate the classification performance, the results were manually annotated, and measured with Fleiss' kappa (1971), a statistical measure of inter-rater reliability. The equation is shown as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
{
"text": "( \u2032 ) = \u2212 1 \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
{
"text": "The score of Fleiss' kappa is \u2212 , the degree of agreement actually achieved above chance, divided by 1 \u2212 , the degree of agreement that is attainable above chance. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
{
"text": "In this annotation task, 25 raters (n) were assigned 75 bigrams (N, which were selected from each category randomly and equally) into three categories (k, i.e., words, nonwords, potential neologisms) according to the following definitions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
{
"text": "(1) Words: bigrams that are stable or already exist in current language use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
{
"text": "(2) Nonwords: bigrams that are unstable, does not exist, or being used only by a very small subculture. The result shows that the score of Fleiss' kappa is 0.54, which indicates \"moderate agreement\" (Landis and Koch, 1977 ).",
"cite_spans": [
{
"start": 199,
"end": 221,
"text": "(Landis and Koch, 1977",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Judgment Experiment",
"sec_num": "4.1."
},
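For reference, Fleiss' kappa can be computed directly from the rating matrix, where entry n_ij is the number of the 25 raters who assigned bigram i to category j. The sketch below follows the formula above; the three-row example matrix is invented for illustration and is not the study's data.

```python
import numpy as np

def fleiss_kappa(ratings):
    """ratings: N x k matrix of rater counts per item and category (rows sum to n)."""
    ratings = np.asarray(ratings, dtype=float)
    N, k = ratings.shape
    n = ratings[0].sum()                                  # raters per item
    P_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                                    # observed agreement
    p_j = ratings.sum(axis=0) / (N * n)                   # category proportions
    P_e = np.square(p_j).sum()                            # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts for 3 bigrams rated by 25 raters into
# (word, nonword, potential neologism):
print(fleiss_kappa([[20, 3, 2], [4, 18, 3], [6, 5, 14]]))
```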
{
"text": "In this section, the characteristics of the neologisms and the inconsistency between the system's judgment and the rates' judgment will be discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4.2."
},
{
"text": "For the raters' judgment, a bigram will be recognized as a neologism if more than half of the raters have the same agreement on it. The results are categorized under Hsu's (1999) classification, which are shown in Table 3 . As mentioned earlier, 25 bigrams were randomly selected from the category potential neologisms. In the result, it is observed that only parts of them are rated as neologisms, and some of the bigrams originally selected from words and nonwords are rated as neologisms as well. Table 4 shows the inconsistency between the system's judgment and raters' judgment.",
"cite_spans": [
{
"start": 172,
"end": 178,
"text": "(1999)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 500,
"end": 507,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4.2."
},
{
"text": "Also, the last column indicates the numbers of bigrams' occurrences in newspaper 9 , which is used to show the relationship between the public news and neologisms.",
"cite_spans": [
{
"start": 81,
"end": 82,
"text": "9",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4.2."
},
{
"text": "According to the number of people with agreement, we can see that dialectal words tend to have higher newness (the degree of how new a word is), showing that dialectal words play an important role in the input of neologisms of Taiwanese Mandarin. Second, it is shown that the higher the newness of a bigram, the less frequent it will appear in the public newspapers, which reflects that the more stable a bigram is, the more it will be recognized as a formal word. For example, \u767d\u76ee has the lower occurrence than \u79d1\u5927 in the public newspapers because it has the higher newness. From the statistic perspective, time series analysis also shows the similar correspondence with our prediction. The time series of the frequency data appears is non-seasonal, and can be probably described by using an additive model. We use Holt Winters exponential smoothing method to make short term forecast for the 4 words in the three categories. Figure 5 shows the illustrative plots for \u963f\u7f75 (nonword), \u7b46\u96fb (potential neologism), \u9ad8\u9435 (word), \u5c0f\u9b3c (word) with parameter alpha of (0.369, 0.2328, 0.0088, 0.1933) respectively. The predictive model gives us the forecast for the year 2013 (plotted as a blue line), an 80% prediction interval for the forecast (plotted as a purple shaded area,), and the 95% prediction interval as a gray shaded area. shows that it has a higher probability of being a neologism. According to this observation, we suggest that a bigram with low frequency and high stability has a higher chance of being a neologism.",
"cite_spans": [],
"ref_spans": [
{
"start": 925,
"end": 933,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4.2."
},
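The forecasting step can be approximated as follows. With no trend or seasonal component, Holt-Winters reduces to simple exponential smoothing with the single alpha values reported above; the 80% and 95% bands here are a rough normal approximation from the one-step-ahead residuals, not the authors' exact model output, and the yearly frequencies are placeholders.

```python
import numpy as np

def ses_forecast(series, alpha, z80=1.2816, z95=1.9600):
    """Simple exponential smoothing (Holt-Winters without trend/seasonality):
    one-step-ahead forecast for the next period plus approximate intervals."""
    level = series[0]
    residuals = []
    for y in series[1:]:
        residuals.append(y - level)                 # one-step-ahead error
        level = alpha * y + (1 - alpha) * level
    sigma = np.std(residuals, ddof=1)
    return {"forecast": level,
            "80%": (level - z80 * sigma, level + z80 * sigma),
            "95%": (level - z95 * sigma, level + z95 * sigma)}

# Placeholder yearly frequencies for one bigram, 2005-2012, with the alpha
# reported for 阿罵 (0.369):
print(ses_forecast([120, 95, 80, 60, 70, 55, 65, 58], alpha=0.369))
```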
{
"text": "In terms of the distribution, we can divide words into two patterns. Take \u5c0f\u9b3c and \u9ad8\u9435 for example. The former one has peaks with high frequency during its development, which implies that it has a higher stability of being a word. The latter one has a significant peak at the beginning, and then it starts decreasing gradually. In fact, \u9ad8\u9435 (Taiwan High Speed Rail) was a popular issue since late 2005 after the construction was formally announced by the government, but the topic was out of focus year after year. This reflects that public issues sometimes can dominate the occurrence of a potential neologism, and also implies that the difficulty of detecting a potential neologisms not only due to its low frequency but also due to some extralingusitics factors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "4.2."
},
{
"text": "In this research, we have built a diachronic corpus of PTT from 2005 to 2012., neologisms are detected by a proposed 'diachronic n-gram model' inspired by functional linguistics, and an experiment of human judgment was conducted among 25 raters. The score of the inter-rater agreement measured by Fleiss' kappa is 0.54, which indicates the moderate agreement. The characteristics of the neologisms and the inconsistency between the system's judgment and the raters' judgment are then discussed in an attempt to improve the detection of neologisms in PTT. Comparison with newly released Google book n-gram data will be conducted in the future study, which would facilitate the prediction and deeper understanding of neologisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "As cited by wiki at http://en.wikipedia.org/wiki/Neologism#cite_ref-1Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "telnet://ptt.cc Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://dict.revised.moe.edu.tw/ 4 http://mmc.sinica.edu.tw/resources_c_01.htm Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Linguistic rules are used to exclude bigrams with function words or affixes, such as pronouns, particles and aspects. See Li and Thompson (1989). Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Chinese Knowledge and Information Processing (CKIP) is a Chinese word segmentation system developed by Academia Sinica. Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Newspapers are comprised of \u806f\u5408\u5831, \u7d93\u6fdf\u65e5\u5831, \u6c11\u751f\u5831, \u806f\u5408\u665a\u5831 and Upaper with 11,230,842 articles, which are collected by United Daily News. See http://udndata.com/ndapp/Detail. Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PTT \u6279\u8e22\u8e22 as a corpus",
"authors": [
{
"first": "P",
"middle": [],
"last": "Magistry",
"suffix": ""
}
],
"year": 2012,
"venue": "presented at the annual meeting of the European Association of Taiwan Studies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Magistry, \"PTT \u6279\u8e22\u8e22 as a corpus,\" presented at the annual meeting of the European Association of Taiwan Studies, S\u00f8nderborg, 2012.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unknown word extraction for Chinese documents",
"authors": [
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "W.-Y",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.-J. Chen and W.-Y. Ma, \"Unknown word extraction for Chinese documents,\" in Proceedings of the 19th international conference on Computational linguistics-Volume 1, 2002, pp. 1-7.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mandarin Chinese: A functional reference grammar: University of California Pr",
"authors": [
{
"first": "C",
"middle": [
"N"
],
"last": "Li",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. N. Li and S. A. Thompson, Mandarin Chinese: A functional reference grammar: University of California Pr, 1989.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automated extraction of swedish neologisms using a temporally annotated corpus: Skolan f\u00f6r datavetenskap och kommunikation, Kungliga Tekniska h\u00f6gskolan",
"authors": [
{
"first": "P",
"middle": [],
"last": "Stenetorp",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Stenetorp, Automated extraction of swedish neologisms using a temporally annotated corpus: Skolan f\u00f6r datavetenskap och kommunikation, Kungliga Tekniska h\u00f6gskolan, 2010.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frequency of Use and the Organization of Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bybee",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Bybee, Frequency of Use and the Organization of Language: Oxford University Press, 2007.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Emergent grammar",
"authors": [
{
"first": "P",
"middle": [],
"last": "Hopper",
"suffix": ""
}
],
"year": 1987,
"venue": "Berkeley Linguistics Conference (BLS)",
"volume": "13",
"issue": "",
"pages": "139--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Hopper, \"Emergent grammar,\" Berkeley Linguistics Conference (BLS), vol. 13, pp. 139-157, 1987.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. L. Fleiss, \"Measuring nominal scale agreement among many raters,\" Psychological bulletin, vol. 76, p. 378, 1971.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Landis",
"suffix": ""
},
{
"first": "G",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "biometrics",
"volume": "",
"issue": "",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Landis and G. G. Koch, \"The measurement of observer agreement for categorical data,\" biometrics, pp. 159-174, 1977.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing",
"authors": [
{
"first": "\"",
"middle": [],
"last": "\u8a31\u6590\u7d62",
"suffix": ""
},
{
"first": "\"",
"middle": [],
"last": "\u53f0\u7063\u7576\u4ee3\u570b\u8a9e\u65b0\u8a5e\u63a2\u5fae",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "\u81fa\u7063\u5e2b\u7bc4\u5927\u5b78\u83ef\u8a9e\u6587\u6559\u5b78\u7814\u7a76\u6240\u5b78\u4f4d\u8ad6\u6587",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u8a31\u6590\u7d62, \"\u53f0\u7063\u7576\u4ee3\u570b\u8a9e\u65b0\u8a5e\u63a2\u5fae,\" \u81fa\u7063\u5e2b\u7bc4\u5927\u5b78\u83ef\u8a9e\u6587\u6559\u5b78\u7814\u7a76\u6240\u5b78\u4f4d\u8ad6\u6587, 1999. Proceedings of the Twenty-Fifth Conference on Computational Linguistics and Speech Processing (ROCLING 2013)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Screenshot of PTTThe data are collected from 2005 to 2012 from three major boards on PTT, which are Gossiping (\u516b\u5366\u7248), joke (\u5c31\u53ef\u7248) and StupidClown (\u7b28\u7248).",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Number of tokens in the corpus per year from 2005 to 2012",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "\u5c0f\u59b9\u60f3\u8acb\u554f\u5404\u4f4d\u6279\u8e22\u8e22\u5e25\u5b85\u5b85\u845b\u683c\u5011(2) \u5c0f\u59b9/\u60f3/\u8acb\u554f/\u5404/\u4f4d/\u6279/\u8e22\u8e22/\u5e25\u5b85\u5b85\u845b\u683c\u5011 As we can see, the result of segmentation is out of satisfactory. Stenetorp (2010) suggested \"[\u2026] an exclusion error is not recoverable and likely to make users unable to observe a certain neologism we might be forced to tolerate a high degree of noise.\" To reduce the risk of losing any potential neologism, segmentator was not exploited in this research.Secondly, an n-gram model equipped with diachronic information would arouse echoes in current theoretical development in linguistics. Frequency effect has been widely recognized in cognitive linguistics, and recent functional linguistic studies also justify the frequency as a determinant in lexical diffusion and changes",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": ", x-axis represents different time periods, which starts from 2005 to 2012, and y-axis represents the frequency of bigrams. Each curves stands for an individual bigram, and the total number of the bigrams are listed in the upper-left corner of each plot. (For example, there are 2,836 bigrams in the category words.)",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"text": "Plots of words, nonwords and potential neologisms4. Evaluation and Discussions",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"text": "Potential Neologisms: bigrams that have reached a significant audience, but probably not yet have gained lasting acceptance.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"text": "The time series of the frequency data Although the overall frequency of \u7b46\u96fb is low, its occurrence is relatively stable. The figure",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "",
"content": "<table><tr><td/><td>PTT Corpus</td></tr><tr><td colspan=\"2\">Boards Gossiping(\u516b\u5366\u7248), joke (\u5c31\u53ef\u7248), StupidClown (\u7b28\u7248)</td></tr><tr><td colspan=\"2\">Years 2005 -2012</td></tr><tr><td>Posts</td><td>33,450</td></tr><tr><td colspan=\"2\">Authors 17,031</td></tr><tr><td colspan=\"2\">Tokens 14,285,768</td></tr><tr><td colspan=\"2\">Types 7,010</td></tr><tr><td colspan=\"2\">Bigrams 785,494</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF1": {
"text": "",
"content": "<table><tr><td/><td>Lexicon</td></tr><tr><td/><td>TRCD TSMW</td></tr><tr><td>Entries</td><td>159,401 16,683</td></tr><tr><td colspan=\"2\">Two-character word 86,907 10,198</td></tr><tr><td colspan=\"2\">Two-character words in total: 89,118</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "native neologisms are in the majority of neologisms. According to Hsu, native neologisms appear when there is a lexical gap, and they are born without any effect from other foreign language. Second, Min Nan provides the major source of dialectal neologism. This shows that Min Nan has the higher prestige in Taiwanese dialects, which is also in accordance to Hsu's proposal. It is interesting that most of the dialectal neologisms seem to have negative meanings, but more evidence should be provided to support this observation, which will be included the future research. Third, abbreviation words such as \u7b46 \u96fb and \u79d1\u5927 forms the major source of native neologisms, which corresponds to Hsu's proposal as well. Fourth, \u963f\u7f75 is categorized as a trendy word since it is a play on words. That is to say, \u963f\u7f75 [a ma4] has the same pronunciation with \u963f\u5b24 [a ma4], which is an existed word in Taiwanese Mandarin.",
"content": "<table><tr><td colspan=\"2\">Neologism Classification (Hsu, 1999)</td></tr><tr><td colspan=\"2\">Native neologisms \u81ea\u522a \u7b28\u9ede \u641e\u7b11</td></tr><tr><td/><td>\u7b46\u96fb \u79d1\u5927 \u9ad8\u9435</td></tr><tr><td>Loan words</td><td>\u99ac\u514b</td></tr><tr><td>Dialectal words</td><td>\u552c\u721b \u767d\u76ee \u8c6a\u6d28</td></tr><tr><td>Trendy words</td><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"content": "<table><tr><td/><td colspan=\"3\">Neologisms according to raters' judgment</td></tr><tr><td colspan=\"2\">Bigrams System's</td><td>Number of</td><td>Number of</td></tr><tr><td/><td>judgment</td><td>people with</td><td>occurrence in</td></tr><tr><td/><td/><td>agreement</td><td>newspapers</td></tr><tr><td>\u552c\u721b</td><td/><td>21</td><td>127</td></tr><tr><td>\u767d\u76ee</td><td/><td>17</td><td>787</td></tr><tr><td>\u81ea\u522a \u8c6a\u6d28 \u7b28\u9ede</td><td>Potential neologisms</td><td>17 15 11</td><td>37 6 3</td></tr><tr><td>\u7b46\u96fb</td><td/><td>11</td><td>13214</td></tr><tr><td>\u79d1\u5927</td><td/><td>10</td><td>17847</td></tr><tr><td>\u641e\u7b11</td><td/><td>15</td><td>6829</td></tr><tr><td>\u9ad8\u9435</td><td>Words</td><td>12</td><td>19893</td></tr><tr><td>\u99ac\u514b</td><td/><td>11</td><td>4909</td></tr><tr><td>\u963f\u7f75</td><td>Nonwords</td><td>12</td><td>2</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
} |