{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:33:39.630084Z"
},
"title": "Effort versus performance tradeoff in Uralic lemmatisers",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Howell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Bibaeva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Research University Higher School of Economics",
"location": {
"settlement": "Moscow",
"country": "Russia"
}
},
"email": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": "",
"affiliation": {
"laboratory": "National Research University Higher School of Economics",
"institution": "Indiana University",
"location": {
"settlement": "Moscow, Bloomington",
"region": "IN",
"country": "Russia, United States"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Lemmatisers in Uralic languages are required for dictionary lookup, an important task for language learners. We explore how to decide which of the rule-based and unsupervised categories is more efficient to invest in. We present a comparison of rule-based and unsupervised lemmatisers, derived from the Giellatekno finite-state morphology project and the Morfessor surface segmenter trained on Wikipedia, respectively. The comparison spanned six Uralic languages, from relatively high-resource (Finnish) to extremely lowresource (Uralic languages of Russia). Performance is measured by dictionary lookup and vocabulary reduction tasks on the Wikipedia corpora. Linguistic input was quantified, for rulebased as quantity of source code and state machine complexity, and for unsupervised as the size of the training corpus; these are normalised against Finnish. Most languages show performance improving with linguistic input. Future work will produce quantitative estimates for the relationship between corpus size, ruleset size, and lemmatisation performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Lemmatisers in Uralic languages are required for dictionary lookup, an important task for language learners. We explore how to decide which of the rule-based and unsupervised categories is more efficient to invest in. We present a comparison of rule-based and unsupervised lemmatisers, derived from the Giellatekno finite-state morphology project and the Morfessor surface segmenter trained on Wikipedia, respectively. The comparison spanned six Uralic languages, from relatively high-resource (Finnish) to extremely lowresource (Uralic languages of Russia). Performance is measured by dictionary lookup and vocabulary reduction tasks on the Wikipedia corpora. Linguistic input was quantified, for rulebased as quantity of source code and state machine complexity, and for unsupervised as the size of the training corpus; these are normalised against Finnish. Most languages show performance improving with linguistic input. Future work will produce quantitative estimates for the relationship between corpus size, ruleset size, and lemmatisation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Lemmatisation is the process of deinflecting a word (the surface form) to obtain a normalised, grammatically \"neutral\" form, called the lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A related task is stemming, the process of removing affix morphemes from a word, reducing it to the intersection of all surface forms of the same lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These two operations have finer (meaning more informative) variants: morphological analysis (producing the lemma plus list of morphological tags) and surface segmentation (producing the stem plus list of affixes). Still, a given surface form may have several possible analyses and several possible segmentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Uralic languages are highly agglutinative, that is, inflection is often performed by appending suffixes to the lemma. For such languages, stemming and lemmatisation agree, allowing one dimension of comparison between morphological analysers and surface segmenters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such agglutinative languages typically do not have all surface forms listed in a dictionary; users wishing to look up a word must lemmatise before performing the lookup. Software tools (Johnson et al., 2013) are being developed to combine the lemmatisation and lookup operations.",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "(Johnson et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further, most Uralic languages are low-resourced, meaning large corpora (necessary for the training of some analysers and segmenters) are not readily available. In such cases, software engineers, linguists and system designers must decide whether to invest effort in obtaining a large enough corpus for statistical methods or in writing rulesets for a rule-based system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this article we explore this trade-off, comparing rule-based and statistical stemmers across several Uralic languages (with varying levels of resources), using a number of proxies for \"model effort\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For rule-based systems, we evaluate the Giellatekno (Moshagen et al., 2014) finite-state morphological transducers, exploring model effort through ruleset length, and number of states of the transducer.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Moshagen et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For statistical systems, we evaluate Morfessor (Virpioja et al., 2013) surface segementer models along with training corpus size.",
"cite_spans": [
{
"start": 47,
"end": 70,
"text": "(Virpioja et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hope to provide guidance on the question, \"given an agglutinative language with a corpus of N words, how much effort might a rule-based analyser require to be better than a statistical segmenter at lemmatisation?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most interesting results of this work are the figures shown in Section 5.4, where effort proxies are plotted against several measures of performance (normalised against Finnish). The efficient reader may wish to look at these first, looking up the various quantities afterwards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Guide",
"sec_num": "1.1"
},
{
"text": "For (brief) information on the languages involved, see Section 2; to read about the morphological analysers and statistical segmenters used, see Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Guide",
"sec_num": "1.1"
},
{
"text": "Discussion and advisement on directions for future work conclude the article in Section 6. The entire project is reproducible, and will be made available before publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reading Guide",
"sec_num": "1.1"
},
{
"text": "The languages used for the experiments in this paper are all of the Uralic group. These languages are typologically agglutinative with predominantly suffixing morphology. The following paragraphs give a brief introduction to each of the languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Finnish (ISO-639-3 fin) is the majority and official (together with Swedish) language of Finland. It is in the Finnic group of Uralic languages, and has an estimate of around 6 million speakers worldwide. The language, like other Uralic languages spoken in the more western regions of the language area has predominantly SVO word order and NP-internal agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Komi-Zyrian (ISO-639-3 kpv; often simply referred to as Komi) is one of the major varieties of the Komi macrolanguage of the Permic group of Uralic languages. It is spoken by the Komi-Zyrians, the most populous ethnic subgroup of the Komi peoples in the Uralic regions of the Russian Federation. Komi languages are spoken by an estimated 220, 00 people, and are co-official with Russian in the Komi Republic and the Perm Krai territory of the Russian Federation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Moksha (ISO-639-3 mdf) is one of the two Mordvinic languges, the other being Erzya; the two share co-official status with Russian in the Mordovia Republic of the Russian Federation. There are an estimated 2, 000 speakers of Moksha, and it is dominant in the Western part of Mordovia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Meadow Mari (ISO-639-3 mhr, also known as Eastern Mari) is one of the minor languages of Russia belonging to the Finno-Volgaic group of the Uralic family. After Russian, it is the second-most spoken language of the Mari El Republic in the Russian Federation, and an estimated 500, 000 speakers globally. Meadow Mari is co-official with Hill Mari and Russian in the Mari El Republic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Hill Mari (ISO-639-3 mrj; also known as Western Mari) is one of the minor languages of Russia belonging to the Finno-Volgaic group of the Uralic family, with an estimated 30, 000 speakers. It is closely related to Meadow Mari (ISO-639-3 mhr, also known as Eastern Mari, and Hill Mari is sometimes regarded as a dialect of Meadow Mari. Both languages are co-official with Russian in the Mari El Republic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Erzya (ISO-639-3 myv) is one of the two Mordvinic languages, the other being Moksha, which are traditionally spoken in scattered villages throughout the Volga Region and former Russian Empire by well over a million in the beginning of the 20th century and down to approximately half a million according to the 2010 census. Together with Moksha and Russian, it shares co-official status in the Mordovia Republic of the Russian Federation. 1 North S\u00e1mi (ISO-639-3 sme) belongs to the Samic branch of the Uralic languages. It is spoken in the Northern parts of Norway, Sweden and Finland by approximately 24.700 people, and it has, alongside the national language, some official status in the municipalities and counties where it is spoken. North S\u00e1mi speakers are bilingual in their mother tongue and in their respective national language, many also speak the neighbouring official language. It is primarily an SVO language with limited NP-internal agreement. Of all the languages studied it has the most complex phonological processes.",
"cite_spans": [
{
"start": 438,
"end": 439,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Udmurt (ISO-639-3 udm) is a Uralic language in the Permic subgroup spoken in the Volga area of the Russian Federation. It is co-official with Russian in the Republic of Udmurtia. As of 2010 it has around 340,000 native speakers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Grammatically as with the other languages it is agglutinative, with 15 noun cases, seven of which are locative cases. It has two numbers, singular and plural and a series of possessive suffixes which decline for three persons and two numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "In terms of word order typology, the language is SOV, like many of the other Uralic languages of the Russian Federation. There are a number of grammars of the language in Russian and in English, e.g. Winkler (2001) . 3 Lemmatisers",
"cite_spans": [
{
"start": 200,
"end": 214,
"text": "Winkler (2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Languages",
"sec_num": "2"
},
{
"text": "Giellatekno is a research group working on language technology for the S\u00e1mi languages. It is based in Troms\u00f8, Norway and works primarily on rule-based language technology, particularly finite-state morphological descriptions and constraint grammars. In addition to the S\u00e1mi languages, their open-source infrastructure also contains software and data for many other Uralic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Giellatekno transducers",
"sec_num": "3.1"
},
{
"text": "In particular, Giellatekno has produced (Moshagen et al., 2014) finite-state transducers for morphological analysis of our chosen Uralic languages; we use these to extract lemmas from surface forms. When multiple lemmatisations are offered, the highest weight one is chosen. Unaccepted words are treated as already-lemmatised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Giellatekno transducers",
"sec_num": "3.1"
},
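The selection policy just described can be made concrete with a small sketch. This is an illustration rather than the project's actual code: we assume the transducer lookup has already produced a list of (lemma, weight) pairs (for instance from an HFST lookup of a Giellatekno analyser), and the function name is hypothetical.

```python
def choose_lemma(surface_form, analyses):
    """Pick a lemma following the policy described above.

    `analyses` is assumed to be a list of (lemma, weight) pairs returned by
    a morphological transducer lookup (illustrative assumption). Words the
    transducer does not accept yield an empty list and are treated as
    already lemmatised, i.e. returned unchanged.
    """
    if not analyses:
        return surface_form
    # When multiple lemmatisations are offered, take the highest-weighted one.
    lemma, _weight = max(analyses, key=lambda pair: pair[1])
    return lemma


# Hypothetical usage with two competing analyses for one surface form:
print(choose_lemma("taloissa", [("talo", 0.7), ("taloissa", 0.1)]))  # -> talo
print(choose_lemma("xyz", []))  # unaccepted word, returned as-is -> xyz
```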
{
"text": "Morfessor (Virpioja et al., 2013 ) is a class of unsupervised and semi-supervised trainable surface segmentation algorithms; it attempts to find a minimal dictionary of morphemes. We use Wikipedia as training data for this model.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "(Virpioja et al., 2013",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morfessor",
"sec_num": "3.2"
},
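A minimal sketch of how such a model can be trained and applied with the Morfessor 2.0 Python package (Virpioja et al., 2013). The file name is a placeholder for the Wikipedia training subcorpus, and taking the first morph as the stem is our illustrative reading of the stemming step, not necessarily the exact procedure used in the experiments.

```python
import morfessor

io = morfessor.MorfessorIO()
# "train.txt" is a placeholder: whitespace-separated tokens, one or more per
# line, here standing in for the 90% Wikipedia training subcorpus (Section 4.4).
train_data = list(io.read_corpus_file("train.txt"))

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()  # unsupervised batch training

# Segment a held-out token; we take the first morph as the stem
# (an assumed convention for the dictionary lookup task).
segments, _cost = model.viterbi_segment("taloissa")
stem = segments[0]
print(segments, stem)
```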
{
"text": "The stemmers are applied to every word in the corpus, and the resulting stem is looked up in a dictionary. This mimics a user attempting to look up a highlighted word in a dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary task",
"sec_num": "4.1"
},
{
"text": "Bilingual dictionaries are taken from Giellatekno, with definitions in Russian, Finnish, English, or German. (The actual definitions are not used, just the presence of an entry; we take the union over all dictionaries.) Dictionary sizes are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dictionary task",
"sec_num": "4.1"
},
{
"text": "As baseline we take the percentage of words in the corpus which are already in the dictionary. Both token and type counts provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary task",
"sec_num": "4.1"
},
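A schematic sketch of the lookup evaluation, under our assumptions about the data structures: a set of dictionary entry forms and a lemmatiser callable. The identity function gives the no-op baseline; the helper name is hypothetical.

```python
from collections import Counter


def lookup_hit_rates(tokens, lemmatise, dictionary):
    """Return (token hit rate, type hit rate) in percent.

    tokens     : list of corpus tokens (surface forms)
    lemmatise  : callable mapping a surface form to a stem/lemma
    dictionary : set of dictionary entry forms
    """
    counts = Counter(tokens)
    hit_tokens = sum(n for w, n in counts.items() if lemmatise(w) in dictionary)
    hit_types = sum(1 for w in counts if lemmatise(w) in dictionary)
    return (100.0 * hit_tokens / sum(counts.values()),
            100.0 * hit_types / len(counts))


# Baseline (NOOP): surface forms are looked up as-is.
# token_rate, type_rate = lookup_hit_rates(corpus_tokens, lambda w: w, dictionary)
```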
{
"text": "We apply the lemmatisers to each word of the corpus, and measure the reduction in tokens and types. Lower diversity of post-lemmatisation tokens or types demonstrates that the lemmatiser is identifying more words as having the same lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "4.2"
},
{
"text": "The distinction between token reduction and type reduction corresponds to a notion of \"user experience\": from the perspective of our tasks, correctly lemmatising a more frequent token is more important than a less frequent token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "4.2"
},
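A small sketch of the type-based reduction measure, assuming the reduction factor reported later is the number of distinct post-lemmatisation forms divided by the number of distinct surface forms; the token-weighted variant, which weights each surface form by its frequency, is not shown. The function is illustrative.

```python
def type_reduction(tokens, lemmatise):
    """Ratio of distinct post-lemmatisation forms to distinct surface forms.

    A value of 1 means no reduction was performed; 0.01 corresponds to a
    100-fold reduction in vocabulary (on average 100 surface types per lemma).
    """
    surface_types = set(tokens)
    lemma_types = {lemmatise(w) for w in surface_types}
    return len(lemma_types) / len(surface_types)
```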
{
"text": "The effort expended in producing a model is a subjective and qualitative measure; we claim only to provide coarse objective and quantitative proxies for this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "4.3"
},
{
"text": "In the case of statistical methods, total effort (which would include the effort of developing the algorithm) is not important for our purposes: we are comparing the specialisation of a statistical method to a particular language with the development of a rule-based model. (Indeed, to fairly compare total effort of the technique, a completely different and perhaps more academic question, we would need to include the general development of rule-based methods.) Thus for statistical methods we include only the size of the corpus used to train the system. In our experiments, this corpus is Wikipedia, which we use (for better or worse) as a proxy for general availability of corpora in a given language on the internet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "4.3"
},
{
"text": "For rule-based systems, we must find a measure of the effort. In this article our rule-based systems are all finitestate transducers, compiled from rulesets written by linguists. We choose two proxies for invested effort: the lines of code in all rulesets used in compiling the transducer, and the number of states of the transducer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "4.3"
},
{
"text": "The former will count complex and simple rules the same, which the latter may provide insight into. Conversely, a highly powerful rule system may create a great number of states while being simple to write; in this case, the ruleset is a better proxy than the number of states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "4.3"
},
{
"text": "Wikipedia dumps from 20181201 are used as source corpus; the corpus is split into tokens at word boundaries and tokens which are not purely alphabetical are dropped. Corpus size in tokens, post-processing, is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "4.4"
},
{
"text": "Corpora were randomly divided into training (90% of the corpus) and testing subcorpora (10%); Morfessor models are produced with the training subcorpus, and lemmatiser evaluation is only with the test subcorpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia",
"sec_num": "4.4"
},
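A sketch of the preprocessing and 90/10 split described in this section. The regular-expression tokeniser stands in for whatever tokeniser was actually used (an assumption), and the file handling is omitted.

```python
import random
import re


def alphabetic_tokens(text):
    # Split at word boundaries and keep only purely alphabetical tokens;
    # \w+ plus an isalpha() filter stands in for the unspecified tokeniser.
    return [t for t in re.findall(r"\w+", text) if t.isalpha()]


def train_test_split(tokens, test_fraction=0.1, seed=0):
    # Random split: Morfessor is trained on the 90% part,
    # and lemmatisers are evaluated only on the 10% part.
    rng = random.Random(seed)
    shuffled = tokens[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]
```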
{
"text": "Our study involves treating the Uralic language as an independent variable; the six languages we consider here do not provide for a very large sample. We attempt to mitigate this by using both traditional and robust statistics; potential \"outliers\" can then be quantitatively identified. Thus for every mean and standard deviation seen, we will also present the median and the median absolute deviation. For reference:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "suppose that {x i } N i=1 is a finite set of numbers. If {y i } N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "is the same collection, but sorted (so that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "y 1 \u2264 y 2 \u2264 \u2022 \u2022 \u2022 \u2264 y N ), then the median is med{x i } = { y N /2 N is even mean{y (N \u00b11)/2 } N is odd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "and the median absolute deviation (or for brevity, \"median deviation\") is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "mad{x i } = med {|x i \u2212 med x i |} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
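A worked numerical sketch of the two robust statistics just defined, using Python's statistics module; the sample numbers are made up for illustration.

```python
from statistics import median

data = [10, 12, 15, 16, 90]               # made-up sample with one outlier

med = median(data)                         # 15
mad = median(abs(x - med) for x in data)   # deviations are [5, 3, 0, 1, 75] -> 3
print(med, mad)                            # 15 3
```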
{
"text": "When we quote means, we will write them as \u00b5 \u00b1 \u03c3 where \u00b5 is the mean and \u03c3 the standard deviation of the data. Similarly, for medians we will write m \u00b1 d where m is the median and d the median deviation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Data with potential outliers can be identified by comparing the median/median deviation and the mean/standard deviation: if they are significantly different (for example, the mean is much further than one standard deviation away from the median, or the median deviation is much smaller than the standard deviation), then attention is likely warranted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Results of the dictionary lookup are presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dictionary lookup",
"sec_num": "5.1"
},
{
"text": "Cursory inspection shows that while the Giellatekno model for Finnish slightly out-performs the Wikipedia Morfessor model, on average Morfessor provides not only the greatest improvement in token lookup performance (average/median improvement of 1.6/1.5 versus Giellatekno's 1.4/1.3), but also more consistent (standard/median deviation of 0.3/0.1 versus 0.4/0.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary lookup",
"sec_num": "5.1"
},
{
"text": "We see some limitations in the Morfessor model when projecting to type lookup performance: the value of Morfessor on type lookup is essentially random, hurting as often and as much as it helps: mean and median improvement factors are both 1.0. Compare with Giellatekno, where improvement mean and median are at least one deviation above baseline. We suggest this disparity could be due to our Morfessor model over-stemming rare words, and successfully stemming common words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary lookup",
"sec_num": "5.1"
},
{
"text": "Vocabulary reduction results are presented in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "5.2"
},
{
"text": "Generally, we see that Morfessor is much more aggressively reducing the vocabulary: average Morfessor reduction is 9% versus Giellatekno's 15%; here North S\u00e1mi and Finnish again stand out with Morfessor reducing to 7.2% and 6.5% respectively. Compare with Hill Mari, where reduction is to a mere 11%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "5.2"
},
{
"text": "While the performance of Giellatekno is much less dramatic, we still notice that North S\u00e1mi and Hill Mari are more than a standard deviation, or more than two median deviations, away from the mean performance. Otherwise, the clustering is fairly tight, with all languages besides North S\u00e1mi and Hill Mari within one standard deviation and 1.5 median deviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "5.2"
},
{
"text": "The analysis above shows that our data are affected by outlier models; which of the two measures is nominally more representative of the overall performance landscape could be demonstrated through an increase of sample size, i.e., increasing the number of languages surveyed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary reduction",
"sec_num": "5.2"
},
{
"text": "The effort quantification is presented in Table 5 . Transducer source code complexity, measured in number of transducer states per line of source code, is presented in Table 6 . Note that comments are included as part of the \"source code\"; we consider, for example, explanation of how the code works to count as some of the effort behind the development of the transducer. Some immediate observations: among the Uralic languages studied here, Finnish is high-resource, but not overwhelmingly: North S\u00e1mi compares for transducer size (in number of states), at nearly 2.5 times the median. While Meadow Mari actually has a comparable amount of transducer source code (1.8 million lines of code, about 80% the size of the Finnish transducer), its transducer code is extremely low complexity; see Table 6 . Finnish Wikipedia is approximately 2.5 times larger than the next largest, Hill Mari, and nearly 7 times larger than the median; under our assumption, this would indicate that Finnish written material is also much more accessible on the internet than our other Uralic languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 168,
"end": 175,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 793,
"end": 800,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effort",
"sec_num": "5.3"
},
{
"text": "Among Giellatekno models, Hill Mari transducer is uniformly the lowest-resource of the Uralic languages studied, with very few lines of below-average complexity code written; contrast this with the Morfessor models, where Hill Mari has a respectable 350, 000 tokens. The lowest resource Morfessor model is Udmurt, with only 7, 000 tokens; the Udmurt Giellatekno model is also significantly below-average in resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "5.3"
},
{
"text": "While North S\u00e1mi has slightly below-median transducer source size, it has extremely high (eight deviations above median) state complexity, with more than one state for every two lines of code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effort",
"sec_num": "5.3"
},
{
"text": "See Figures 1, 2 , and 3 for plots of effort normalised against Finnish versus performance. Plots are colored by language and marked by the effort quantification method. Note that since \"lines of code\" and \"number of states\" are two different measures of the same model, Table 3 : Results of the dictionary lookup task for no-op (NOOP), Morfessor (MF), and Giellatekno transducer (GT). A \"hit\" means a successful dictionary lookup. Percentage hits (tokens or types) is the percentage of tokens or types in the corpus for which the lemmatiser produces a dictionary word. The \"no-op\" (NOOP) lemmatiser takes the surface form as-is, and is used as baseline; the last two columns are percentage hits normalised by this.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 16,
"text": "Figures 1, 2",
"ref_id": null
},
{
"start": 271,
"end": 278,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "Hits ( .0 \u00b1 6.0 0.6 \u00b1 0.7 40.0 \u00b1 10.0 16.0 \u00b1 8.0 1.6 \u00b1 0.3 1.0 \u00b1 0.2 median 0.7 \u00b1 0.4 0.21 \u00b1 0.09 46.0 \u00b1 7.0 17.0 \u00b1 6.0 1.5 \u00b1 0.1 1.0 \u00b1 0.1 Table 4 : Vocabulary reduction results for no-op (NOOP), Morfessor (MF), and Giellatekno (GT) lemmatisers. The final column gives the reduction factor in vocabulary size: reduction of 1 corresponds to no reduction performed, while 0.01 corresponds to a 100-fold reduction in vocabulary (average of 100 types per lemma).",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Lemmatiser",
"sec_num": null
},
{
"text": "Note that there is no constraint that the \"lemmas\" produced are dictionary words. 6.5 kpv 0.4 9.1 mdf 1.8 9.9 mhr 0.6 9.9 mrj 5.2 11.1 myv 0.4 8.6 sme 0.5 7.2 udm 0.4 9.9 average MORF 3.3 \u00b1 5.4 9.0 \u00b1 1.4 median 0.5 \u00b1 0.1 9.5 \u00b1 0.6 MORF ktok 171 \u00b1 296 19.1 \u00b1 33.0 med.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Lemmatiser",
"sec_num": null
},
{
"text": "13 \u00b1 5 1.5 \u00b1 0.5 0.9 \u00b1 0.3 90 \u00b1 40 0.12 \u00b1 0.06 their performance is the same. Figure 1 indicates that for the dictionary lookup task by-token, Morfessor with Wikipedia is more effortefficient (relative to Finnish) for Komi-Zyrian, Udmurt, North S\u00e1mi, Erzya, Meadow Mari, and Giellatekno is more effort-efficient for Hill Mari. Remaining is Moksha, for which performance improvement scales with effort independent of model, and Finnish.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Lemmatiser",
"sec_num": null
},
{
"text": "Since we normalise effort against Finnish, we can only observe that the Finnish Giellatekno model performs slightly better than the Finnish Wikipedia Morfessor model; efficiency claims cannot be made. Figure 2 indicates that for the dictionary lookup task by-token, Morfessor with Wikipedia is more effortefficient (relative to Finnish) for Komi-Zyrian only; Giellatekno remains more effort-efficient for Hill Mari. Meanwhile, Udmurt, North S\u00e1mi, Erzya, and Meadow Mari join Moksha in improvement scaling with effort; the spread in slopes (the rate at which performance improves as effort is increased) is, however, quite large. Figure 3 shows that, as with lookup performance for tokens, Morfessor dominates vocabulary reduction efficiency, with only Hill Mari scaling with relative effort.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 2",
"ref_id": null
},
{
"start": 629,
"end": 637,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Lemmatiser",
"sec_num": null
},
{
"text": "There are many interesting things to notice in the effortperformance analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion 6.1 Discussion",
"sec_num": "6"
},
{
"text": "Focusing just on the dictionary task, we find that compared against the same technology for Finnish, the Giellatekno North S\u00e1mi (sme) transducer has very high performance (relatively small ruleset), due to high rule complexity (the number of states is not very low). It is possible that North S\u00e1mi is simply easy to lemmatise, as Morfessor seems to do very well with a small corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion 6.1 Discussion",
"sec_num": "6"
},
{
"text": "Hill Mari (mrj) shows predictable performance: relative to Finnish, a small increase in resources (going from 20% or 30% of Finnish resources for the Giellatekno transducer to 40% resources for the Wikipedia corpus) gives a modest increase in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion 6.1 Discussion",
"sec_num": "6"
},
{
"text": "Overall, we see that percent improvement in tasks scales with effort (relative to Finnish) in the type-lookup task; in the token-lookup and vocabulary reduction tasks, performance improvement favours Morfessor. (That is, the Morfessor model has a higher improvement-toresource ratio, with resources relative to Finnish.) This might be explained by the dramatic spread in Wikipedia corpus sizes used in the Morfessor models: median corpus size is 1.5% \u00b1 0.5% the size of Finnish. Thus, improvement of 5% of the Morfessor model is increasing the nominal effort (kilotokens) by a factor of four, for the median corpus; compare with Giellatekno, where median model is 20% or 40% the size of the corresponding Finnish model, depending on the metric used. See the following section for potential avenues to control for this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion 6.1 Discussion",
"sec_num": "6"
},
{
"text": "In the dictionary task, hits/words is lower than unique hits/words (see Section 5.1); this indicates that mislemmatised words are more frequent. Since irregular words are typically high-frequency, we might hypothesize that filtering these would close this gap. If not, it might point out areas for improvement in the lemmatisation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "We would like to also try other methods of lemmatising. One of the problems with the finite-state transducers is that they have limited capacity for lemmatising words which are not found in the lexicon. It is possible to use guesser techniques such as those described in Lind\u00e9n (2009) , but the accuracy is substantially lower than for hand-written entries. We would like to approach the problem as in Silfverberg and Tyers (2018) and train a sequence-to-sequence LSTM to perform lemmatisation using the finite-state transducer to produce forms for the training process.",
"cite_spans": [
{
"start": 271,
"end": 284,
"text": "Lind\u00e9n (2009)",
"ref_id": "BIBREF2"
},
{
"start": 402,
"end": 430,
"text": "Silfverberg and Tyers (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "There are other statistical methods, in particular bytepair encoding and adaptor grammars (Johnson et al., 2006) , which should be added to the comparison, and addition of further languages should be straightforward.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Johnson et al., 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "A more refined understanding of the relationship between size of corpus and Morfessor would give a richer dataset; this could be achieved by decimating the Wikipedia corpus. For truly low-resource languages, additional corpora may be necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "Similar refinement could be produced for the Giellatekno transducers using their version history: older versions of the transducers have had less work, and presumably have less source code. A dedicated researcher could compare various editions of the same transducer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "Cross-validation (in the case of Morfessor) and using multiple testing subcorpora would give some idea of the confidence of our performance measurements at the language-level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "Another interesting analysis, which we do not have the space to perform here, would be to normalise performance P , along the model axis m, for example for lan- Figure 1 : Improvement factor in hit rate in dictionary lookup (by tokens) (see Section 4.1; higher is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). In general, more effort-efficient models will appear to the upper-left of less effort-efficient models. Figure 2 : Improvement factor in hit rate in dictionary lookup (by types) (see Section 4.1; higher is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). In general, more effort-efficient models will appear to the upper-left of less effort-efficient models. Figure 3 : Vocabulary reduction performance in types (see Section 4.2; lower is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). In general, more effort-efficient models will appear to the lower-left of less effort-efficient models.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 1",
"ref_id": null
},
{
"start": 449,
"end": 457,
"text": "Figure 2",
"ref_id": null
},
{
"start": 736,
"end": 744,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "P *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "xxx,m = P xxx,m \u2022 P fin,GT P fin,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
{
"text": "This measure, P * , would always be fixed to 1.0 for Finnish, and would partially control for languageindependent performance variation between models. This would then allow study of the distribution over languages of marginal performance improvement with effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6.2"
},
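A one-line sketch of the proposed normalisation, under the assumption that performance values are stored in a mapping keyed by (language, model); the names are illustrative.

```python
def normalised_performance(P, lang, model):
    """P*_{lang,model} = P_{lang,model} * P_{fin,GT} / P_{fin,model}.

    P is assumed to be a dict mapping (language, model) -> performance.
    By construction the Finnish value is the same for every model, so
    language-independent differences between models are factored out.
    """
    return P[(lang, model)] * P[("fin", "GT")] / P[("fin", model)]
```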
{
"text": "https://efo.revues.org/1829",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adaptor grammars: A framework for specifying compositional nonparametric bayesian models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2006. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In Advances in neural information processing systems, pages 641-648.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using finite state transducers for making efficient reading comprehension dictionaries",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Lene",
"middle": [],
"last": "Antonsen",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics",
"volume": "85",
"issue": "",
"pages": "59--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Johnson, Lene Antonsen, and Trond Trosterud. 2013. Using finite state transducers for making effi- cient reading comprehension dictionaries. In Proceed- ings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013), 85, pages 59-71.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Guessers for finite-state transducer lexicons",
"authors": [
{
"first": "",
"middle": [],
"last": "Krister Lind\u00e9n",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics and Intelligent Text Processing 10th International Conference",
"volume": "5449",
"issue": "",
"pages": "158--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krister Lind\u00e9n. 2009. Guessers for finite-state trans- ducer lexicons. Computational Linguistics and Intel- ligent Text Processing 10th International Conference, CICLing 2009, 5449:158-169.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Open-source infrastructures for collaborative work on under-resourced languages",
"authors": [
{
"first": "Sjur",
"middle": [
"N\u00f8rsteb\u00f8"
],
"last": "Moshagen",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Rueter",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Pirinen",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Trosterud",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"Morton"
],
"last": "Tyers",
"suffix": ""
},
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sjur N\u00f8rsteb\u00f8 Moshagen, Jack Rueter, Tommi Prinen, Trond Trosterud, and Francis Morton Tyers. 2014. Open-source infrastructures for collaborative work on under-resourced languages.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Datadriven morphological analysis for Uralic languages",
"authors": [
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th International Workshop on Computational Linguistics for the Uralic Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miikka Silfverberg and Francis M. Tyers. 2018. Data- driven morphological analysis for Uralic languages. In Proceedings of the 5th International Workshop on Computational Linguistics for the Uralic Languages (IWCLUL 2018).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Morfessor 2.0: Python Implementation and Extensions for Morfessor Baseline",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Peter Smit, Stig-Arne Gr\u00f6nroos, , and Mikko Kurimo. 2013. Morfessor 2.0: Python Im- plementation and Extensions for Morfessor Baseline. Technical report, Aalto University.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Language Lexemes</td></tr><tr><td>fin</td><td>19012</td></tr><tr><td>kpv</td><td>43362</td></tr><tr><td>mdf</td><td>28953</td></tr><tr><td>mhr</td><td>53134</td></tr><tr><td>mrj</td><td>6052</td></tr><tr><td>myv</td><td>15401</td></tr><tr><td>sme</td><td>17605</td></tr><tr><td>udm</td><td>19639</td></tr></table>",
"text": "Giellatekno bilingual dictionary sizes, in words.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">Language Tokens</td><td>Types</td></tr><tr><td>fin</td><td colspan=\"2\">897867 276761</td></tr><tr><td>mrj</td><td>352521</td><td>51420</td></tr><tr><td>mhr</td><td>15159</td><td>6468</td></tr><tr><td>myv</td><td>11177</td><td>5107</td></tr><tr><td>sme</td><td>9442</td><td>6552</td></tr><tr><td>udm</td><td>7503</td><td>4308</td></tr></table>",
"text": "Wikipedia corpus size by language, in alphabetic words.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td colspan=\"5\">: Effort quantification; last column is normalized</td></tr><tr><td colspan=\"5\">by Finnish. The group 'Mloc' refers to millions of lines</td></tr><tr><td colspan=\"5\">of code in the Giellatekno transducer source, including</td></tr><tr><td colspan=\"5\">lexc, xfst, regular expression, constrain grammar, and</td></tr><tr><td colspan=\"5\">twol code. The group 'kst' is the number (in thousands)</td></tr><tr><td colspan=\"5\">of states in the Giellatekno transducer, and 'ktok' is the</td></tr><tr><td colspan=\"5\">number (in thousands) of tokens in the Morfessor train-</td></tr><tr><td colspan=\"5\">ing corpus. The final column normalises against Finnish.</td></tr><tr><td colspan=\"3\">Lang. Model Effort</td><td>Quan.</td><td>% fin</td></tr><tr><td>fin</td><td/><td/><td>440</td><td>100</td></tr><tr><td>kpv</td><td/><td/><td>150</td><td>35</td></tr><tr><td>mdf</td><td/><td/><td>60</td><td>13</td></tr><tr><td>mhr mrj</td><td>GT</td><td>kst</td><td>80 50</td><td>17 11</td></tr><tr><td>myv</td><td/><td/><td>110</td><td>25</td></tr><tr><td>sme</td><td/><td/><td>540</td><td>122</td></tr><tr><td>udm</td><td/><td/><td>60</td><td>15</td></tr><tr><td>avg. med.</td><td>GT</td><td>kst</td><td>190 \u00b1 180 90 \u00b1 40</td><td>40 \u00b1 40 20 \u00b1 9</td></tr><tr><td>fin</td><td/><td/><td>2.3</td><td>100.0</td></tr><tr><td>kpv</td><td/><td/><td>0.7</td><td>30.0</td></tr><tr><td>mdf</td><td/><td/><td>0.9</td><td>40.0</td></tr><tr><td>mhr mrj</td><td>GT</td><td>Mloc</td><td>1.8 0.5</td><td>80.0 20.0</td></tr><tr><td>myv</td><td/><td/><td>1.2</td><td>50.0</td></tr><tr><td>sme</td><td/><td/><td>0.9</td><td>40.0</td></tr><tr><td>udm</td><td/><td/><td>0.5</td><td>20.0</td></tr><tr><td>avg. med.</td><td>GT</td><td>Mloc</td><td>1.1 \u00b1 0.6 0.9 \u00b1 0.3</td><td>50 \u00b1 30 40 \u00b1 10</td></tr><tr><td>fin</td><td/><td/><td>898.0</td><td>100.0</td></tr><tr><td>kpv</td><td/><td/><td>11.0</td><td>1.2</td></tr><tr><td>mdf</td><td/><td/><td>64.0</td><td>7.1</td></tr><tr><td>mhr mrj</td><td colspan=\"2\">MORF ktok</td><td>15.0 353.0</td><td>1.7 39.3</td></tr><tr><td>myv</td><td/><td/><td>11.0</td><td>1.2</td></tr><tr><td>sme</td><td/><td/><td>9.0</td><td>1.1</td></tr><tr><td>udm</td><td/><td/><td>7.0</td><td>0.8</td></tr><tr><td>avg.</td><td/><td/><td/><td/></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td colspan=\"3\">Lang. LoC (M) States (k)</td><td>Complex.</td></tr><tr><td>fin</td><td>2.3</td><td>440.0</td><td>0.19</td></tr><tr><td>kpv</td><td>0.7</td><td>150.0</td><td>0.21</td></tr><tr><td>mdf</td><td>0.9</td><td>60.0</td><td>0.06</td></tr><tr><td>mhr</td><td>1.8</td><td>80.0</td><td>0.04</td></tr><tr><td>mrj</td><td>0.5</td><td>50.0</td><td>0.09</td></tr><tr><td>myv</td><td>1.2</td><td>110.0</td><td>0.09</td></tr><tr><td>sme</td><td>0.9</td><td>540.0</td><td>0.63</td></tr><tr><td>udm</td><td>0.5</td><td>60.0</td><td>0.14</td></tr><tr><td>avg.</td><td colspan=\"2\">1.1 \u00b1 0.6 200 \u00b1 200</td><td>0.2 \u00b1 0.2</td></tr><tr><td>med.</td><td/><td/><td/></tr></table>",
"text": "Transducer source complexity, in number of states per line of transducer source code. The column \"LoC (M)\" gives the number of lines of source code, in millions, and \"States (k)\" the size, in thousands of states of the compiled transducer.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}