{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:34.845189Z" }, "title": "No more fumbling in the dark -Quality assurance of high-level NLP tools in a multi-lingual infrastructure", "authors": [ { "first": "Linda", "middle": [], "last": "Wiechetek", "suffix": "", "affiliation": { "laboratory": "Flammie A Pirinen B\u00f8rre Gaup Thomas Omma", "institution": "", "location": {} }, "email": "linda.wiechetek@uit.no" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We argue that regression testing is necessary to ensure reliability in the continuous development of NLP tools, especially higher level applications like grammar checkers. Our approach is rule-based, building on successful work for a number of low-resourced languages over the last 20 years. Instead of working with a black box, we choose a method that allows us to pinpoint the exact reasons for failures in the system. We present a tool for regression testing for GramDivvun, the rule-based open source North S\u00e1mi grammar checker. The regression tool is available for any of the 135 languages in the Giella-LT infrastructure and can be applied when respective tools are built. An evaluation of the system shows how the precision of the regression tests improves with almost 20% over a time span of 1.5 years. We also illustrate that the regression tool can detect undesired effects of rule changes that affect the performance of the grammar checker. Tiivistelm\u00e4 T\u00e4ss\u00e4 artikellissa esit\u00e4mme ett\u00e4 regressiotestaus on v\u00e4ltt\u00e4m\u00e4t\u00f6nt\u00e4 kielitekonologiaty\u00f6kalujen, eritoten korkeampitasoisten sovellusten kuten kieliopiontarkistinten, jatkuvassa kehityksess\u00e4. Meid\u00e4n l\u00e4hestymisl\u00e4ht\u00f6kohtamme on s\u00e4\u00e4nt\u00f6pohjainen, ja rakentuu aiemmalle v\u00e4h\u00e4resurssisten kielten ty\u00f6lle viimeisen 20 vuoden ajalta. Musta laatikko-l\u00e4hestymistavan sijaan k\u00e4yt\u00e4mme menetelmi\u00e4 joiden avulla voimme suoraan paikantaa ongelmakohdat j\u00e4rjestelm\u00e4ss\u00e4. Esittelemme ty\u00f6kaluja joilla regressiotestataan GramDivvunia, s\u00e4\u00e4nt\u00f6pohjaista pohjoissaamen kieliopintarkistinta. Regressiotestaus on valmiina k\u00e4ytett\u00e4viss\u00e4 135 kielelle, joita kehitet\u00e4\u00e4n GiellaLTinfrastruktuurissa ja sit\u00e4 voi hy\u00f6dynt\u00e4\u00e4 vastaavissa ty\u00f6kaluissa. J\u00e4rjestelm\u00e4\u00e4 evaluoimalla huomaamme ett\u00e4 tarkkuus kasvaa 20 % 1,5 vuoden seurantajakson aikana. Sen lis\u00e4ksi tuomme esille kuinka regressiotesteill\u00e4 voi havaita s\u00e4\u00e4nn\u00f6st\u00f6muutosten vaikutuksia kieliopintarkistimen suorituskykyyn.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We argue that regression testing is necessary to ensure reliability in the continuous development of NLP tools, especially higher level applications like grammar checkers. Our approach is rule-based, building on successful work for a number of low-resourced languages over the last 20 years. Instead of working with a black box, we choose a method that allows us to pinpoint the exact reasons for failures in the system. We present a tool for regression testing for GramDivvun, the rule-based open source North S\u00e1mi grammar checker. The regression tool is available for any of the 135 languages in the Giella-LT infrastructure and can be applied when respective tools are built. 
An evaluation of the system shows how the precision of the regression tests improves with almost 20% over a time span of 1.5 years. We also illustrate that the regression tool can detect undesired effects of rule changes that affect the performance of the grammar checker. Tiivistelm\u00e4 T\u00e4ss\u00e4 artikellissa esit\u00e4mme ett\u00e4 regressiotestaus on v\u00e4ltt\u00e4m\u00e4t\u00f6nt\u00e4 kielitekonologiaty\u00f6kalujen, eritoten korkeampitasoisten sovellusten kuten kieliopiontarkistinten, jatkuvassa kehityksess\u00e4. Meid\u00e4n l\u00e4hestymisl\u00e4ht\u00f6kohtamme on s\u00e4\u00e4nt\u00f6pohjainen, ja rakentuu aiemmalle v\u00e4h\u00e4resurssisten kielten ty\u00f6lle viimeisen 20 vuoden ajalta. Musta laatikko-l\u00e4hestymistavan sijaan k\u00e4yt\u00e4mme menetelmi\u00e4 joiden avulla voimme suoraan paikantaa ongelmakohdat j\u00e4rjestelm\u00e4ss\u00e4. Esittelemme ty\u00f6kaluja joilla regressiotestataan GramDivvunia, s\u00e4\u00e4nt\u00f6pohjaista pohjoissaamen kieliopintarkistinta. Regressiotestaus on valmiina k\u00e4ytett\u00e4viss\u00e4 135 kielelle, joita kehitet\u00e4\u00e4n GiellaLTinfrastruktuurissa ja sit\u00e4 voi hy\u00f6dynt\u00e4\u00e4 vastaavissa ty\u00f6kaluissa. J\u00e4rjestelm\u00e4\u00e4 evaluoimalla huomaamme ett\u00e4 tarkkuus kasvaa 20 % 1,5 vuoden seurantajakson aikana. Sen lis\u00e4ksi tuomme esille kuinka regressiotesteill\u00e4 voi havaita s\u00e4\u00e4nn\u00f6st\u00f6muutosten vaikutuksia kieliopintarkistimen suorituskykyyn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Mii \u00e1kkastallat ahte regre\u0161uvdnaiskosat leat d\u00e1rbba\u0161la\u010d\u010dat jus galg\u00e1 s\u00e1httit r\u00e1hkadit luohtehahtti NLP-reaidduid, erenoam\u00e1\u017eit reaidduid nugo grammatihkkad\u00e1rkkisteddjiid, mat sorj\u00e1stit m\u00e1\u014bga ear\u00e1 progr\u00e1mmaide. Min bargu lea njuolggadusvuo\u0111\u0111uduvvon, huksejuvvon barggu ala mii lea dahkkon sm\u00e1vva-resursagielaiguin ma\u014bemu\u0161 20 jagi. Dan sajis go bargat \"\u010d\u00e1hppes bovssain\", mii v\u00e1lljet vuogi man bokte mii dal\u00e1n oaidnit gokko vuog\u00e1dagas meatt\u00e1hus \u010duo\u017e\u017eila. Mii \u010d\u00e1jehit reaiddu mii isk\u00e1 leatgo GramDivvumis regre\u0161uvnnat.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstr\u00e1kta", "sec_num": null }, { "text": "lea njuolggadusvuo\u0111\u0111uduvvon davvis\u00e1mi rabas g\u00e1ldokoda grammatihkkad\u00e1rkkisteaddji.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GramDivvun", "sec_num": null }, { "text": "Regre\u0161uvdnaiskanreaidu lea ol\u00e1muttus visot 135 gillii mat leat GiellaLT-infrastruktuvrras ja dan s\u00e1htt\u00e1 vuodjit go gulleva\u0161 reaiddut leat huksejuvvon. Vuog\u00e1datevalueren \u010d\u00e1jeha ahte regre\u0161uvdnaiskosiid bohtosat leat buorr\u00e1nan measta 20 %:in beannot jagis. Mii maid \u010d\u00e1jehit ahte regre\u0161uvdnaiskanreaidu g\u00e1vdn\u00e1 meatt\u00e1husaid ma\u014b\u014b\u00e1 rievdadusaid mat v\u00e1ikkuhit grammatihkkad\u00e1rkkisteaddji bohtosiidda.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GramDivvun", "sec_num": null }, { "text": "This paper illustrates an efficient way to quality check high level rule-based NLP applications for low resource languages with complex morphology like North S\u00e1mi. 
In particular, we develop a powerful regression testing tool for the rule-based open source North S\u00e1mi grammar checker Gram-Divvun (Wiechetek et al., 2019a) that provides statistics of precision and recall specific to each error type\u00b9 and a detailed analysis of each sentence including one or more (nested) errors\u00b2, together with an advanced system of error mark-up that allows us to properly identify each error type module that is successful enough to be included in the grammar checker released to the public.", "cite_spans": [ { "start": 295, "end": 320, "text": "(Wiechetek et al., 2019a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "GramDivvun has been released by Divvun as a free plugin for Microsoft Office and Google Docs\u00b3. A grammar checker, as opposed to a spellchecker, is a tool that verifies and corrects errors in writing that are not mere mistyped non-words, but real words where the error is dependent on the whole sentencecontext and its grammatical features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "North S\u00e1mi is a minority language in a bilingual language community, which faces challenges as regards writing proficiency. In this context, a reliable grammar checker can therefore also serve as a tool to improve writing skills. However, it is a difficult task to make a precise tool that meets users needs. If it underlines too many or even any correct sentences, the user will easily be frustrated and switch off the grammar checking. Regression testing resolves this problem in a robust and uniform way and ensures high quality of the tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "North S\u00e1mi is a Uralic language spoken in Norway, Sweden and Finland by approximately 25,700 speakers (Simons and Fennig, 2018) . It is a synthetic language, where the open parts of speech (PoS) -e.g. nouns, adjectives -inflect for case, person, number and more. The grammatical categories are expressed by a combination of suffixes and steminternal processes affecting root vowels and consonants alike, making it perhaps the most fusional of all Uralic languages. In addition to compounding, inflection and derivation are common morphological processes in North S\u00e1mi. Due to its morphological complexity and, in addition, a large amount \u00b9More information on the different error types covered in GramDivvun can be found in (Wiechetek, 2017) and (Wiechetek et al., 2019b) \u00b2Nested errors are errors within errors (typically with different scopes), for example a typo within an agreement error.", "cite_spans": [ { "start": 102, "end": 127, "text": "(Simons and Fennig, 2018)", "ref_id": "BIBREF13" }, { "start": 723, "end": 740, "text": "(Wiechetek, 2017)", "ref_id": "BIBREF14" }, { "start": 745, "end": 770, "text": "(Wiechetek et al., 2019b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u00b3https://divvun.no/korrektur/gramcheck.html of homonymous forms or similar forms that can be confused in writing, there are many different grammatical error types. Similarly to other low-resource languages, there is little to no error marked-up data available for it, and the available data is seldom quality checked with regard to spelling and grammar. This poses a challenge to automatic grammar checking and testing. 
Regression testing within software programming practice is defined as testing that ensures that recent code changes do not have any negative effects on existing features.\u2074 While regression testing is not a new idea and has been applied for some decades, to our knowledge, there are no in-detail publications of the challenges and practical solutions for it in grammar checking. However, Butt and Holloway King (2003) describe different testing strategies and their necessity for syntactic parsing. Since 2003, complexity of Natural Language Processing (NLP) tools has increased, which also requires adapting appropriate testing routines.", "cite_spans": [ { "start": 807, "end": 836, "text": "Butt and Holloway King (2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rule-based model enables us to be very precise in locating the shortcomings of our grammar checker, and the regression tests ensure that the grammar checker keeps improving as new rules and tests to check them are added. The novelty in our approach to building grammar checkers lies in the workflows of simultaneously building the grammar checker rules, the error corpus and the regression testing suite. This workflow is an efficient approach to both building regression data and constructing our tools. The features of our tool are powerful enough to handle these multi-modular applications as well as an advanced mark-up system for a real world corpus that includes some spelling, morphological, syntactic, punctuation, space, real-word errors as well as nested errors per sentence. Also, the regression tool provides a detailed error analysis and not just overall regression statistics. It outputs errorspecific statistics, including error subtypes, and enables efficient debugging of the system. The regression tools come with a database of tests, including several thousand sentences marked-up manually per error type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are using a NLP development infrastructure called GiellaLT , which is at present used by 135 languages. It consists of systems capable of building, testing and deploying a large range of NLP applications -including spelling and grammar checkers among othersbased on finite-state morphology (Beesley and Karttunen, 2003) and Constraint Grammar (Karlsson, 1990) . We apply a rule-based approach, which has a long tradition for the previously mentioned 135 languages, but is not as wide-spread as neural network approaches these days. Neural networks have shown to provide good results for many higher level NLP applications. However, they are also known to require large amounts of high quality or marked-up data, which for North S\u00e1mi would mean a manual quality check or mark-up as this data is not available. Our current error marked-up corpus (for all error types including nested errors) contains 120,459 words-a typical amount for training a neural network is at least several millions, and for a morphologically complexer language possibly more. Considering the amount of different types of errors there are and that not all of the sentences contain an error at all, this is very little data to train any kind of model. 
Our work strategy consists in minimizing the workload by a combination of developing rulebased tools that reliably annotate and quality check our data and searching for and annotating example sentences from the corpus that give us further insight in the grammatical issue we are dealing with.", "cite_spans": [ { "start": 293, "end": 322, "text": "(Beesley and Karttunen, 2003)", "ref_id": "BIBREF0" }, { "start": 346, "end": 362, "text": "(Karlsson, 1990)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "2.1" }, { "text": "There is current work on neural network error detection/correction for specific 'simpler' grammatical errors (i.e. compound errors) in North S\u00e1mi that do not involve changing morphological forms or restructuring of a whole sentence (Wiechetek et al., 2021) . However, rule-based tools were used, both to prepare the data and to access PoS information. Furthermore, its insertion of non-sense words restricts its usability for a community of real users. A full-fledged neural network grammar checker -that is not based on the rule-based grammar checker -is not to be realized in the near future.", "cite_spans": [ { "start": 232, "end": 256, "text": "(Wiechetek et al., 2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "2.1" }, { "text": "Rule-based methods have the advantage of formalizing concise rules about the grammatical structure of a language. This gives us detailed insights in the language -as opposed to the black box of a neural network. This knowledge is necessary for defining errors in the first place, especially in cases where normative descriptions do not exist. It is also a prerequisite for debugging errors in our system. As we are able to translate language insights into formal grammar rules, we can pinpoint the exact causes of errors in our system. In other words, we can write a grammar that is both machine-, and to some extent, human-readable, which means that our knowledge can be used in other contexts outside of the grammar checker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "2.1" }, { "text": "In the context of grammar checking tasks, specifically for morphologically complex and/or lowresourced languages, we would like to discuss two relevant tasks for neural network approaches, i.e. the systems for Latvian (Deksne, 2019) and Russian (Rozovskaya and Roth, 2019) . The evaluation of Latvian neural network grammar checker shows a good performance with precisions between 78% and 98.5% (evaluated on a corpus of 115,000 sentences) depending on the error type. However, judging from their regular expressions to insert artificial errors, most of their error types seem to be fairly local errors that can be resolved based on shorter ngrams. The Russian system, on the other hand, focuses on more advanced error types, including case and agreement. However, precision (evaluated on a 206,258 token learners' corpus) is significantly lower -between 22% and 56%, only gender agreement reaches 68%. The corpus is rather small with regard to the task of correcting a large variety of errors. 
None of these two approaches deal with the advanced syntactic constructions we resolve in our approach, requiring an analysis of the whole sentence, valencies, semantic cues, etc.", "cite_spans": [ { "start": 218, "end": 232, "text": "(Deksne, 2019)", "ref_id": "BIBREF2" }, { "start": 245, "end": 272, "text": "(Rozovskaya and Roth, 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "2.1" }, { "text": "The testing approach described here, while used in conjunction with a rule-based system, is agnostic of underlying technology, and could well be applied in the context of a neural system as well, should there be one that allows for correcting the errors the system makes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "2.1" }, { "text": "In order to provide a consistent grammar checking experience but also automatic updates and improvement, we apply stringent testing and combine that with a continuous integration / deployment (CI/CD) environment. To our knowledge, there are no publications on how to apply CI / CD to NLP product pipelines such as grammar checking, so in this article we lay out some guidelines and good practices. However, in the text books for the development of NLP applications we find some recommendations on the use of regression tests to compare different versions of the same application. (Grove, 2009, p.222) There have also been some work-shops on regression testing in NLP, e.g. (Farrow and Dzikovska, 2009) , however, these ideas have not found popular use, yet. One of the scientific contributions of our work is not only that we can provide the end users with products that work as expected, but also we can maintain scientific integrity of the systems in terms of reproducibility. We can apply the CI methods to ensure that systems can reproduce comparable results at all times. This is especially attractive for our case, since we apply mainly rule-based methods for grammar checking and correction, the results should stay relatively stable for the same versions of the system. In the recent years, the reproducibility has been brought to focus of the NLP research, with famous works like Pedersen (2008) .", "cite_spans": [ { "start": 580, "end": 600, "text": "(Grove, 2009, p.222)", "ref_id": null }, { "start": 673, "end": 701, "text": "(Farrow and Dzikovska, 2009)", "ref_id": "BIBREF3" }, { "start": 1398, "end": 1404, "text": "(2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Continuous integration and deployment", "sec_num": "2.2" }, { "text": "Typically, continuous development of rule-based NLP applications involves unexpected breakage. With regression tests for each error type in the grammar checker, regressions are caught quickly. This means that refactoring or larger changes to the code can be done without decreasing the overall quality of the grammar checker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Continuous integration and deployment", "sec_num": "2.2" }, { "text": "The main motivation behind introducing regression testing came from the need of automatizing the grammar checker evaluation. Manual evaluation to calculate precision and recall got rather cumbersome. This led to the development of a more powerful tool for testing grammar checking automatically (Wiechetek et al., 2019b) , and there was parallel work and methodological in-depth study on corpus mark-up. 
Based on this work, we did not have to make a big leap to get regression testing. We reused the evaluation tool and turned it into a proper tester, with detailed statistics of the performance of the tool and sentence-by-sentence analysis that provides a basis for debugging.", "cite_spans": [ { "start": 295, "end": 320, "text": "(Wiechetek et al., 2019b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Continuous integration and deployment", "sec_num": "2.2" }, { "text": "The grammar checker for North S\u00e1mi (Gram-Divvun) performs both spell-and grammar checking -i.e. requiring full sentence analysis to identify local and global syntactic errors -in addition to punctuation and format checking. It includes a version of the open-source spelling checker that has been freely distributed since 2007\u2075, cf. also Gaup et al. (2006) . It uses the HFST-based spelling mechanism described in Pirinen and Lind\u00e9n (2014) for a number of modules, and in addition includes six Constraint Grammar modules, cf. \u2022 Two valency grammars applied before and after spellchecking (valency.cg3 and valencypostspell.cg3)", "cite_spans": [ { "start": 337, "end": 355, "text": "Gaup et al. (2006)", "ref_id": "BIBREF4" }, { "start": 413, "end": 438, "text": "Pirinen and Lind\u00e9n (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "\u2022 A tokenizer (mwe-dis.cg3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "\u2022 Two morpho-syntactic disambiguators applied before and after spellchecking (grcdisambiguator.cg3 and after-speller-disambiguator.cg3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "\u2022 A module for more advanced grammar checking (grammarchecker-release.cg3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "The current version of the grammar checker module in GramDivvun\u2076 includes 313 error detection rules, 4 purely morpho-syntactic rule types, 17 morpho-syntactic rule types that are caused by general real-word rule types, 17 idiosyncratic real word error rule types, 14 punctuation or space error rule types and one spelling error rule type. A real word error is typically a misspelling, but unlike regular typos it results in (similar) real word rather than a non-word. Therefore, an analysis of the sentence is necessary to identify the error. In English language, dessert can be a real word error of desert and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "As in English, there are numerous idiosyncratic real word error types in North S\u00e1mi, made by native speakers for various reasons (i.e. dialectal phonetic differences that do not coincide with the written norm, vowel and consonant errors based on confusion of different forms, etc.) But some of these errors are more systematic, such as the confusion of case-marked (locative case) vs. attributive adjective forms. This is the case in ex. (1)\u2077, where the locative form \u00e1lkis should be an attributive one, i.e. \u00e1lkes, and the only distinction between these forms is the vowel -e vs. i. consequences. 
That means that a certain grammatical form can be confused with another grammatical form of the same lemma. Since both forms regard the same lemma, these errors can be detected and corrected systematically. Apart from that, other (morpho-phonetic) criteria decide which forms are eligible for this error type. These are lemma endings (e.g. -it, -at, or -ut) , number of syllables (even vs. uneven), and consonant gradation class membership.\u2078 Table 1 illustrates one of the consonant gradation classes with examples.", "cite_spans": [ { "start": 938, "end": 955, "text": "-it, -at, or -ut)", "ref_id": null } ], "ref_spans": [ { "start": 1040, "end": 1047, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "Nominal derivations of certain types of verbs (i.e. with a particular ending and a specific consonant gradation pattern In ex. (2), the vowel confusion (u/o) regards derived nouns (that should be past participle forms) from consonant gradation class 4D (cf. \u2078A number of Finno-Ugric languages use stem-internal morpho-phonological changes in addition to suffixes to mark case and other morphological processes. In North S\u00e1mi there are 123 consonant gradation patterns (Nickel, 1994, Nickel (1994, p. 30) Guovdageainnus.", "cite_spans": [ { "start": 468, "end": 482, "text": "(Nickel, 1994,", "ref_id": "BIBREF9" }, { "start": 483, "end": 503, "text": "Nickel (1994, p. 30)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The North S\u00e1mi grammar checker", "sec_num": "2.3" }, { "text": "'The police has conducted a traffic control in Guovdageaidnu today.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guovdageaidnu.LOC", "sec_num": null }, { "text": "The complex structure of the grammar checker shows that there are modifications in many different modules that can be responsible for possible mishaps, since changes in one module can affect the input to subsequent modules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guovdageaidnu.LOC", "sec_num": null }, { "text": "The input for the grammar checker are unmarked sentences. The input for the regression tests are sentences with an error mark-up like in ex. (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guovdageaidnu.LOC", "sec_num": null }, { "text": "(3) Figure 2 shows the output for the grammar checker including error detection (red rectangle) and error correction (blue rectangle). The sentence is tokenized and reads from the top to the bottom. Word forms are in angle brackets, indented lines are homonymous analyses of each form, including lemmata, morphological, semantic and syntactic tags followed by numerical dependencies. ", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Guovdageaidnu.LOC", "sec_num": null }, { "text": "Regression testing for grammar checking is based on an error marked-up corpus. We have collected an error corpus of representative errors in Yaml-formatted\u2079 files specific to each error type. At the current date in august 2021, these include 17,800 sentences. Typically, each regression file contains \u2079https://yaml.org/spec/1.2/spec.html several hundred sentences, some up to 4,300 sentences. There should be a balance of correct and erroneous sentences covering the same phenomena so that one can test for false positives and false negatives. 
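As an illustration of what such a Yaml file can look like, the following minimal Python sketch (using PyYAML, the language the tester script itself is written in) builds and reads a two-sentence test document; the top-level key, the placement of the error symbol and the choice of symbol are illustrative assumptions rather than the schema of the released test files:

import yaml  # PyYAML; a regression file is an ordinary Yaml document

# Hypothetical miniature of an error-specific test file: one sentence with an
# inline error/correction pair (cf. ex. (2): v\u00e1k\u0161un -> v\u00e1k\u0161on) and one correct
# counterpart, so that both false negatives and false positives can be tested.
sample = '''
Tests:
  - "Politiijat leat otne {v\u00e1k\u0161un}\u00a2{v\u00e1k\u0161on} johtolaga Guovdageainnus."
  - "Politiijat leat otne v\u00e1k\u0161on johtolaga Guovdageainnus."
'''

for sentence in yaml.safe_load(sample)['Tests']:
    print(sentence)

In the released files, each such document is specific to one error type, mirroring the file naming described below.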
Test sentences should cover a variety of syntactic contexts and pay attention to longdistance relationships between syntactic functions. They should include coordination, (inserted) subclauses, complex noun phrases, multiple adverbials, idiomatic constructions, multiple errors, punctuation, and other phenomena that can alter the status of the error/correct form. The collected errors are designed to cover a maximally large amount of realworld errors that people make when writing texts, in order to keep the grammar checker usable for people. The file naming is now error-specific,\u00b9\u2070 but as they come from an authentic corpus, they can contain multiple errors per sentence including other types of errors and nested errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "Yaml is a mark-up language with a simple syntax that makes writings of the tests convenient and co-operation with programmers and linguists easier. We chose to use the Yaml format for grammar testing because of positive experiences with the use of the same format for spell checker testing.\u00b9\u00b9 The original test framework for morphology testing initiated by Brendan Molloy can be found on GitHub.\u00b9\u00b2 The regression test script measures both error detection and error correction and whether they match the manual error mark-up. False negatives of the type fn 1 are correctly detected errors that do not receive any corrections by the grammar checker. False negatives of the type fn 2 are undetected errors. The same goes for false positives, where: fp 1 are correctly detected errors with a wrong correction, and fp 2 are error detections that are not manually marked up. True positives (tp), on the other hand, are detected and corrected errors that match with the manual mark-up. In our final evaluation, we will not distinguish between these and only take into account successful vs. unsuccessful error correction in terms of false negatives and true/false positives. The tester script is implemented in Python and can be downloaded from GitHub\u00b9\u00b3. \u00b9\u2070current examples: https://github.com/giellalt/ lang-sme/tree/main/tools/grammarcheckers/tests \u00b9\u00b9https://giellalt.uit.no/infra/infraremake/ AddingMorphologicalTestData.html#Yaml+tests \u00b9\u00b2https://github.com/apertium/ apertium-tgl-ceb/blob/master/dev/verbs/ HfstTester.py \u00b9\u00b3https://github.com/giellalt/giella-core/ blob/master/scripts/gramcheck-test.py", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "The grammar checker makes a list of each error that consists of the erroneous word, the position of the error (start and end), a list of suggestions and error type. The error mark-up is then converted to the same structure so that manual and grammar checker mark-up can be compared. For each of these test sentences, three things are collected: the erroneous version of the error marked-up sentence, the error marked-up version of the errors in the sentence and the errors detected by the sending the erroneous sentence through the grammar checker. The tester prints the outcome of each of the tests in a detailed manner, sentence by sentence and with references to the particular error types involved. The final report contains the number of total passes, fails, true and false positives/negatives, precision, recall and F 1 -score. 
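For clarity, the following Python sketch spells out how these counts combine into the reported figures; it is a simplified restatement that pools the subtypes as described above, not the code of the released gramcheck-test.py:

# Sketch only: precision, recall and F1 from the counts defined above,
# pooling fp = fp1 + fp2 and fn = fn1 + fn2 as in the final evaluation.
def report(tp: int, fp1: int, fp2: int, fn1: int, fn2: int) -> dict:
    fp = fp1 + fp2  # wrong correction + detection without manual mark-up
    fn = fn1 + fn2  # detection without correction + undetected error
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {'precision': precision, 'recall': recall, 'F1': f1}

# Example call with made-up counts: report(tp=89, fp1=3, fp2=8, fn1=2, fn2=7)

The correctness figure reported for the regression suite in Section 4.1 corresponds to tp divided by the number of test sentences in the suite.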
On exit, the script returns 0 or 1, 0 meaning all tests succeeded, 1 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "The test script is fast and light-weight enough to be part of a CI/CD system, even with processor time and RAM limitation, e.g. testing 300 sentences on the developers' machines takes about 30 seconds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "The error mark-up formalism has earlier been used to automatize spellchecking for Greenlandic, Icelandic, North, Lule and South Sami.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "The error mark-up follows a number of guide-lines\u00b9\u2074 based on earlier corpus mark-up (Moshagen, 2014) and applies eight different general error types, each of them marked by a different sign: orthographic, real word, morpho-syntactic, syntactic, lexical, formatting, foreign language, and unclassified errors. The error is enclosed in curly brackets, followed by its correction in another set of curly brackets. The second curly bracket may or may not include a part of speech, morpho-syntactic criteria and a subclassification of the error type.", "cite_spans": [ { "start": 84, "end": 100, "text": "(Moshagen, 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "Orthographic errors (marked by $) include nonwords only. They are traditional misspellings confined to single (error) strings, and the traditional speller should detect them. Real word errors (marked by \u00a2) are misspellings that cannot be detected by a traditional speller, they are an analysis of the surrounding words. Morpho-syntactic errors (marked by \u00a3) are case, agreement, tense, mode errors. They require an analysis of (parts of) the sentence or surrounding words to be detected. Syntactic errors (marked by \u00a5) require a partial or full analysis of (parts of) the sentence or surrounding words. They include word order errors, compound errors, \u00b9\u2074https://giellalt.uit.no/proof/spelling/ testdoc/error-markup.html missing words, and redundant words. Lexical errors (marked by \u20ac) include wrong derivations. Foreign language (marked by \u221e) includes words in other languages that do not require a correction. Formatting errors (marked by \u2030) include spacing errors in combination with punctuation. Unclassified errors are marked with \u00a7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "In ex. (4), the tokens involved in the error are nouns, the syntactic error is a missing word and the correction is adding the subjunction ahte 'that'. Regarding the span of an error, we typically mark as little as possible, even if larger parts of the sentence are responsible for the identification of the error. This is done to facilitate matching error markup with grammar checker marking of the error, and it has direct effect on automatic evaluation. Most of the frameworks we use to process language material in context, e.g. 
Constraint Grammar takes a token-based approach to language processing, and therefore marking several words can get cumbersome and should be avoided if possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "Ex. (5) shows the mark-up of nested errors. There is both a morpho-syntactic error, the case of linj\u00e1 'line' should be accusative instead of nominative, and a compound error, njuolggo and linjj\u00e1 should be written as one word. 'Draw a straight line between these two points.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Regression testing for grammar checking", "sec_num": "3" }, { "text": "We performed two measurements of the system quality: firstly we have the well-curated and targeted regression test suite that is summarized in Table 2 . Secondly, we measure an overview of how the system fares for texts in the whole corpora in the wild in Table 2 : Evaluation results from the regression tests.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 2", "ref_id": null }, { "start": 256, "end": 263, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "In Table 2 we show the results of the regression tests at the same three stages of the development. We measure the success percentage in terms of the number of the tests passed from the overall tests. The regression test corpus we use is a set of tests selected to have a representative coverage of the various error types and contexts. With the carefully selected grammar tests we can control the quality of the overall system, the overall aim for these grammar tests is to keep the correctness at 100 %. The correctness measure C here is C = tp CS where CS is the corpus size. In Table 3 , we show the overall performance of GramDivvun at three stages over the course of approximately one and a half years of continuous development. This means that all grammatical errors are included, also the ones that the grammar checker does not have any module for yet. The tests are done on an error marked-up evaluation-corpus of approx. 26,000 words. The first test is made with the North S\u00e1mi grammar checker from 2019-11-21\u00b9\u2075 before the introduction of the Yaml-tests (naacl-1). The second test uses the version from 2020-11-20\u00b9\u2076 (naacl-2 -Yaml baseline) from when we had first introduced the regression tests. The third test uses the North S\u00e1mi grammar checker from 2021-03-20\u00b9\u2077 (naacl-4) where we have taken into account results from the regression tests in the form of general rule changes.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": null }, { "start": 582, "end": 589, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "The results show that the overall performance of the grammar checker on a small error marked-up corpus improves only slightly. This is due to the frequency of the errors we worked on. The corpus to test these error types in particular needs to be substantially bigger to show a change in performance. However, especially recall has improved by 6% showing an increased coverage of the error types covered in the grammar checker. Figure 3 shows a number of stages of the performance of the grammar checker after developing regression tests. 
There was a significant drop in precision (naacl-2) and a number of drops in recall (bisect).\u00b9\u2078 These coincided with the addition of test sentences (the regression tests grew from a couple of sentences to larger corpora of several thousand sentences), introducing new contexts that required stricter rules. Stricter rules typically lower recall to ensure stable precision. New, more specific rules need to be introduced to get recall up again. This explains the ups and downs in the graph. After the introduction of the Yaml tests, however, precision has steadily been going up, which confirms the main objective of the regression tests.", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 436, "text": "Figure 3", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": "4.1" }, { "text": "One can generally see that rule types that have been prioritized in the grammar checker improved after regression testing was introduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation", "sec_num": "4.2" }, { "text": "Precision got better in ex. (6), where the nominalization dovdan 'feeling' is confused with the first-person singular form dovddan 'I know', forms that are distinguished by a change in the consonant centre only. In ex. (7), GramDivvun finds the locative adjective form oktageard\u00e1nis, which by analogy is confused with the nominative form oktageard\u00e1n. 'Expenditure to buy necessary books to carry through the project.' Some errors that are dealt with in the grammar checker are not recognized in certain syntactic contexts, such as the compound error guovdd\u00e1\u0161 doaimmat that should be written as one word in ex. (9). In addition, there are error types that the grammar checker does not deal with at all, which is why they are not recognized, and the result is false negatives. This is the case for the syntactic error in ex. (10), where the subjunction vai 'so that' before the finite verb beassaba 'get to' is missing. (10) 'Many times a week she fetches the foreign dog so that they get to walk.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation", "sec_num": "4.2" }, { "text": "In this paper we have shown that regression testing is necessary to provide reliable results (in particular a stable precision) for the users of higher-level NLP applications like grammar checkers. A rule-based approach is successful for applications like grammar checking, which require a high level of systematicity and reliable results. For low-resourced languages, where resources such as expert-curated error-correction corpora are scarce, the development of rule-based tools is the most efficient approach. We showed that by using comprehensive regression testing we can keep developing the grammar checking and correction on a day-to-day basis and provide the end users with the newest updates without worrying about their quality. In the future we would like to see if it is possible to gather enough resources for neural network-based grammar checking and correction. Regression testing of the kind we described is applicable to neural network approaches as well. However, neural network systems do not allow for specific adjustments within the error types, which is rather a weakness of the system itself. 
It is therefore natural to apply these regression tests for neural network models as well, and we expect that the system will work in conjunction to neural network without any major changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future outlook", "sec_num": "5" }, { "text": "We have started with neural network-approaches (forthcoming) for the correction of certain error types from our rule-based grammar checker. These require a preparation of the data by means of our existing rule-based tools, both for part-of-speech tagging and marking up error data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future outlook", "sec_num": "5" }, { "text": "One of the interesting features of a rule-based system, that has been brought to focus on the NLP community recently, is the energy-footprint of the used models. In case of our models, the rules can be compiled into finite-state automata on an average consumer desktop within minutes, and the ac-tual models can be run on low-end mobile devices, so the energy footprint is trivially multiple orders of magnitude lower than that of any neural language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future outlook", "sec_num": "5" }, { "text": "\u2074https://www.guru99.com/regression-testing. html (Accessed 2021-03-23)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Ritva Nystad for marking up part of the evaluation corpus, testing GramDivvun and contributing to critical discussions about grammatical errors. We also thank Sjur N\u00f8rsteb\u00f8 Moshagen and Brendan Molloy for the initial test setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Finite State Morphology. CSLI publications", "authors": [ { "first": "R", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Lauri", "middle": [], "last": "Beesley", "suffix": "" }, { "first": "", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth R Beesley and Lauri Karttunen. 2003. Finite State Morphology. CSLI publications.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Grammar Writing, Testing, and Evaluation", "authors": [ { "first": "Miriam", "middle": [], "last": "Butt", "suffix": "" }, { "first": "Tracy Holloway", "middle": [], "last": "King", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "129--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miriam Butt and Tracy Holloway King. 2003. Gram- mar Writing, Testing, and Evaluation, pages 129-179. CSLI Publications, Stanford.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Bidirectional lstm tagger for latvian grammatical error detection", "authors": [ { "first": "Daiga", "middle": [], "last": "Deksne", "suffix": "" } ], "year": 2019, "venue": "Text, Speech, and Dialogue. TSD 2019", "volume": "11697", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-030-27947-9_5" ] }, "num": null, "urls": [], "raw_text": "Daiga Deksne. 2019. Bidirectional lstm tagger for lat- vian grammatical error detection. In Ek\u0161tein K. (eds) Text, Speech, and Dialogue. TSD 2019. Lecture Notes in Computer Science, vol 11697. 
Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Context-dependent regression testing for natural language processing", "authors": [ { "first": "Elaine", "middle": [], "last": "Farrow", "suffix": "" }, { "first": "Myroslava", "middle": [ "O" ], "last": "Dzikovska", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing", "volume": "", "issue": "", "pages": "5--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elaine Farrow and Myroslava O. Dzikovska. 2009. Context-dependent regression testing for natural lan- guage processing. In Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing (SETQA-NLP 2009), pages 5-13, Boulder, Colorado. Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "From Xerox to Aspell: A first prototype of a north s\u00e1mi speller based on twol technology", "authors": [ { "first": "B\u00f8rre", "middle": [], "last": "Gaup", "suffix": "" }, { "first": "Sjur", "middle": [], "last": "Moshagen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Omma", "suffix": "" }, { "first": "Maaren", "middle": [], "last": "Palismaa", "suffix": "" }, { "first": "Tomi", "middle": [], "last": "Pieski", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Trosterud", "suffix": "" } ], "year": 2006, "venue": "Finite-State Methods and Natural Language Processing", "volume": "", "issue": "", "pages": "306--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "B\u00f8rre Gaup, Sjur Moshagen, Thomas Omma, Maaren Palismaa, Tomi Pieski, and Trond Trosterud. 2006. From Xerox to Aspell: A first prototype of a north s\u00e1mi speller based on twol technology. In Finite-State Methods and Natural Language Processing, pages 306-307, Berlin, Heidelberg. Springer Berlin Heidel- berg.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Web Based Application Development", "authors": [ { "first": "F", "middle": [], "last": "Ralph", "suffix": "" }, { "first": "", "middle": [], "last": "Grove", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph F Grove. 2009. Web Based Application Develop- ment. Jones & Bartlett Publishers.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Constraint grammar as a framework for parsing unrestricted text", "authors": [ { "first": "Fred", "middle": [], "last": "Karlsson", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference of Computational Linguistics", "volume": "3", "issue": "", "pages": "168--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fred Karlsson. 1990. Constraint grammar as a frame- work for parsing unrestricted text. In Proceedings of the 13th International Conference of Computational Linguistics, volume 3, pages 168-173, Helsinki.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Test data and testing of spelling checkers. Presentation at the NorWEST2014 workshop", "authors": [ { "first": "", "middle": [], "last": "Sjur Moshagen", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sjur Moshagen. 2014. Test data and testing of spelling checkers. 
Presentation at the NorWEST2014 work- shop.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Opensource infrastructures for collaborative work on underresourced languages", "authors": [ { "first": "Sjur", "middle": [], "last": "Moshagen", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Pirinen", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Trosterud", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC", "volume": "", "issue": "", "pages": "71--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sjur Moshagen, Jack Rueter, Tommi Pirinen, Trond Trosterud, and Francis M Tyers. 2014. Open- source infrastructures for collaborative work on under- resourced languages. In Proceedings of the Ninth Inter- national Conference on Language Resources and Eval- uation, LREC, pages 71-77.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Samisk grammatikk", "authors": [ { "first": "Klaus", "middle": [ "Peter" ], "last": "Nickel", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Peter Nickel. 1994. Samisk grammatikk, second edition. Davvi Girji, K\u00e1r\u00e1\u0161johka.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Empiricism is not a matter of faith", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "", "pages": "465--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen. 2008. Empiricism is not a matter of faith. Computational Linguistics, 34:465-470.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "State-ofthe-art in weighted finite-state spell-checking", "authors": [ { "first": "Tommi", "middle": [ "A" ], "last": "Pirinen", "suffix": "" }, { "first": "Krister", "middle": [], "last": "Lind\u00e9n", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 15th International Conference on Computational Linguistics and Intelligent Text Processing", "volume": "8404", "issue": "", "pages": "519--532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tommi A. Pirinen and Krister Lind\u00e9n. 2014. State-of- the-art in weighted finite-state spell-checking. In Pro- ceedings of the 15th International Conference on Com- putational Linguistics and Intelligent Text Processing - Volume 8404, CICLing 2014, pages 519-532, Berlin, Heidelberg. Springer-Verlag.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Grammar error correction in morphologically rich languages: The case of russian", "authors": [ { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "In Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alla Rozovskaya and Dan Roth. 2019. Grammar er- ror correction in morphologically rich languages: The case of russian. In Transactions of the Association for Computational Linguistics, vol. 7, pp. 
1-17, 2019.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ethnologue: Languages of the World, twenty-first edition", "authors": [ { "first": "F", "middle": [], "last": "Gary", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Simons", "suffix": "" }, { "first": "", "middle": [], "last": "Fennig", "suffix": "" } ], "year": 2018, "venue": "SIL International", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gary F. Simons and Charles D. Fennig, editors. 2018. Ethnologue: Languages of the World, twenty-first edi- tion. SIL International, Dallas, Texas.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "When grammar can't be trusted -Valency and semantic categories in North S\u00e1mi syntactic analysis and error detection", "authors": [ { "first": "Linda", "middle": [], "last": "Wiechetek", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Wiechetek. 2017. When grammar can't be trusted -Valency and semantic categories in North S\u00e1mi syn- tactic analysis and error detection. PhD thesis, UiT The Arctic University of Norway.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Many shades of grammar checking -launching a constraint grammar tool for north s\u00e1mi", "authors": [ { "first": "Linda", "middle": [], "last": "Wiechetek", "suffix": "" }, { "first": "B\u00f8rre", "middle": [], "last": "Sjur N\u00f8rsteb\u00f8 Moshagen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Gaup", "suffix": "" }, { "first": "", "middle": [], "last": "Omma", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar -Methods, Tools and Applications", "volume": "33", "issue": "", "pages": "35--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Wiechetek, Sjur N\u00f8rsteb\u00f8 Moshagen, B\u00f8rre Gaup, and Thomas Omma. 2019a. Many shades of gram- mar checking -launching a constraint grammar tool for north s\u00e1mi. In Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar -Methods, Tools and Applications, NEALT Proceedings Series 33:8, pages 35-44.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Many shades of grammar checking -Launching a Constraint Grammar tool for North S\u00e1mi", "authors": [ { "first": "Linda", "middle": [], "last": "Wiechetek", "suffix": "" }, { "first": "B\u00f8rre", "middle": [], "last": "Sjur N\u00f8rsteb\u00f8 Moshagen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Gaup", "suffix": "" }, { "first": "", "middle": [], "last": "Omma", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar -Methods, Tools and Applications", "volume": "", "issue": "", "pages": "35--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Wiechetek, Sjur N\u00f8rsteb\u00f8 Moshagen, B\u00f8rre Gaup, and Thomas Omma. 2019b. Many shades of gram- mar checking -Launching a Constraint Grammar tool for North S\u00e1mi. 
In Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar -Methods, Tools and Applications (NoDaLiDa 2019), pages 35- 44.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Rules ruling neural networks -how can rule-based and neural models benefit from each other when building a grammar checker", "authors": [ { "first": "Linda", "middle": [], "last": "Wiechetek", "suffix": "" }, { "first": "Tommi", "middle": [ "A" ], "last": "Pirinen", "suffix": "" }, { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" }, { "first": "Chiara", "middle": [], "last": "Argese", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda Wiechetek, Tommi A Pirinen, Mika H\u00e4m\u00e4l\u00e4inen, and Chiara Argese. 2021. Rules ruling neural net- works -how can rule-based and neural models benefit from each other when building a grammar checker? In forthcoming.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Figure 1. These \u2075http://divvun.no/korrektur/korrektur.html are:", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "is a light and simple tool.' Instead of resulting in a simple non-word, in North S\u00e1mi vowel confusion can have grammatical \u2076https://github.com/giellalt/lang-sme/ releases/tag/naacl-2021-4 \u2077All examples are original examples or fragments from SIKOR and are most likely native speaker texts or translations.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "System architecture of GramDivvun", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Output of GramDivvun in the command line", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "thought that it was true.'", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "Development of GramDivvun precision and recall in the regression tests", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "should also have simple' A number of error type rules are causing false positives in certain contexts such as ex. (8), where the infinitive oastit 'buy' is a correct form. However, it is homonymous with a second-person plural imperative reading of the same verb, and is falsely corrected to the third-person plural reading ostet.", "type_str": "figure" }, "FIGREF7": { "num": null, "uris": null, "text": "Motivation and instruction are important and central tasks for SOR's project leader'", "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "num": null, "text": ". Here, the (derived) noun v\u00e1k\u0161un '(the act of) observing' is confused with the past participle v\u00e1k\u0161on 'observed'.", "content": "
(2) Politiijat leat otne v\u00e1k\u0161un johtolaga
police be.3PL today observing.NOM traffic
" }, "TABREF2": { "html": null, "type_str": "table", "num": null, "text": "", "content": "" }, "TABREF5": { "html": null, "type_str": "table", "num": null, "text": "The first test suite verifies our system's quality in the regression test sense, and the second test ensures that the system works for open text case.", "content": "
naacl-1  naacl-2 (baseline)  naacl-4
Precision  70.9%  68.9%  88.8%
Recall  66.9%  84.0%  91.0%
F1-score  68.8  75.7  89.9
" }, "TABREF7": { "html": null, "type_str": "table", "num": null, "text": "", "content": "
: Performance of GramDivvun over the span of a year, before and after introducing regression tests
seamma: same -'Everybody I know thinks the same'
" } } } }