{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:29.988265Z" }, "title": "Creating Data in Icelandic for Text Normalization", "authors": [ { "first": "Helga", "middle": [ "Svala" ], "last": "Sigur\u00f0ard\u00f3ttir", "suffix": "", "affiliation": {}, "email": "helgas@ru.is" }, { "first": "Anna", "middle": [ "Bj\u00f6rk" ], "last": "Nikul\u00e1sd\u00f3ttir", "suffix": "", "affiliation": {}, "email": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce Reg\u00edna, a rule-based system that can automatically normalize data for a text-to-speech (TTS) system. Normalized data do not generally exist so we created good enough data for more advanced methods in text normalization (TN). We manually annotated the first normalized corpus in Icelandic, 40,000 sentences, and developed Reg\u00edna, a TN-system based on regular expressions. The new system gets 89.82% accuracy compared to the manually annotated corpus on non-standard words and showed a significant improvement in accuracy when compared to an older normalization system for Icelandic. The normalized corpus and Reg\u00edna will be released as open source.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We introduce Reg\u00edna, a rule-based system that can automatically normalize data for a text-to-speech (TTS) system. Normalized data do not generally exist so we created good enough data for more advanced methods in text normalization (TN). We manually annotated the first normalized corpus in Icelandic, 40,000 sentences, and developed Reg\u00edna, a TN-system based on regular expressions. The new system gets 89.82% accuracy compared to the manually annotated corpus on non-standard words and showed a significant improvement in accuracy when compared to an older normalization system for Icelandic. The normalized corpus and Reg\u00edna will be released as open source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text normalization is an integral part of a TTS system. Unrestricted input texts can contain so-called non-standard words (NSWs), which are impossible for a computer to read without being formatted into regular strings of alphabetical letters and punctuation marks. These NSWs are divided into semiotic classes and include abbreviations, numbers, and special characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The degree of importance of text normalization in TTS is not obvious even though its utility is known. Most words do not need to be normalized, and therefore normalized datasets and their unnormalized counterparts are almost identical. However, without expanding NSWs, a TTS system skips those words, making the text inaccurate and incomplete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To clarify, let us look at an example of a sentence before and after normalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Haesti tindur Esjunnar er 914 m. 
(Esjan's highest peak is 914m.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Haesti tindur Esjunnar er n\u00edu hundru\u00f0 og fj\u00f3rt\u00e1n metrar. (Esjan's highest peak is nine hundred and fourteen meters.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2193", "sec_num": null }, { "text": "Text normalization systems are customarily rule-based but are moving in the direction of neural networks (NNs). Models made with NNs require less human effort (Graves and Jaitly, 2014) but need a vast amount of correctly annotated data to learn from, and these do not naturally exist for text normalization. People can generally read NSWs without requiring an explanation, so there is no motivation to create data with normalized text, such as in translation. To acquire data in Icelandic for the training of more sophisticated systems, we start by making a system that can make data good enough for further training. We compare the results of this system with manually annotated data to better assess the quality.", "cite_spans": [ { "start": 159, "end": 184, "text": "(Graves and Jaitly, 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "\u2193", "sec_num": null }, { "text": "In 1996, Sproat (Sproat, 1996) published work for a unifying model for most text normalization problems, built with Weighted Finite-State Transducers (WFSTs). The transducers were constructed using a lexical toolkit that allows descriptions of lexicons, morphological rules, numeral-expansion rules, and phonological rules. In 2001, Sproat (Sproat et al., 2001 ) expanded on this work and described challenges that heavily inflected languages like Russian (and Icelandic) face. This work was the first that treated the problem as essentially a language modelling problem.", "cite_spans": [ { "start": 16, "end": 30, "text": "(Sproat, 1996)", "ref_id": "BIBREF11" }, { "start": 340, "end": 360, "text": "(Sproat et al., 2001", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "Up until recently, the primary approach to the text normalization problem was with WFSTs. In 2015, Ebden et al. (Ebden and Sproat, 2015) released a paper where they described the Kestrel text normalization system, a component of the Google TTS system. It differed from previous systems by separating the tokenization and classifica-tion (determining whether a word should be normalized and, if so, which semiotic class it belongs to) from the verbalization step. Kestrel recognizes a large set of semiotic classes: various categories of numbers, times, telephone numbers and electronic addresses.", "cite_spans": [ { "start": 112, "end": 136, "text": "(Ebden and Sproat, 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "Work on Icelandic spoken language technologies is defined within the Language Technology Programme for Icelandic (2019-2023) (Nikul\u00e1sd\u00f3ttir et al., 2020) . Previous work on language resources for Automatic Speech Recognition (ASR) and TTS include acoustic data gathering (Gu\u00f0nason et al., 2012; Steingr\u00edmsson et al., 2017; Mollberg et al., 2020 ) and text corpus building for Icelandic (Steingr\u00edmsson et al., 2018) . 
Spoken language technologies for Icelandic commenced with building ASR systems (Helgad\u00f3ttir et al., 2017) with resource work on TTS aimed at a pronunciation lexicon (Nikul\u00e1sd\u00f3ttir et al., 2018) and acoustic data recordings (Sigurgeirsson et al., 2020) following.", "cite_spans": [ { "start": 125, "end": 153, "text": "(Nikul\u00e1sd\u00f3ttir et al., 2020)", "ref_id": null }, { "start": 271, "end": 294, "text": "(Gu\u00f0nason et al., 2012;", "ref_id": "BIBREF2" }, { "start": 295, "end": 322, "text": "Steingr\u00edmsson et al., 2017;", "ref_id": "BIBREF15" }, { "start": 323, "end": 344, "text": "Mollberg et al., 2020", "ref_id": "BIBREF4" }, { "start": 386, "end": 414, "text": "(Steingr\u00edmsson et al., 2018)", "ref_id": "BIBREF16" }, { "start": 496, "end": 522, "text": "(Helgad\u00f3ttir et al., 2017)", "ref_id": "BIBREF3" }, { "start": 582, "end": 610, "text": "(Nikul\u00e1sd\u00f3ttir et al., 2018)", "ref_id": "BIBREF7" }, { "start": 640, "end": 668, "text": "(Sigurgeirsson et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "The only research that has been done on text normalization in Icelandic was done in 2019, (Nikul\u00e1sd\u00f3ttir and Gu\u00f0nason, 2019) focusing exclusively on numbers. The system built follows the open-source version of Kestrel, Sparrowhawk 1 (Ebden and Sproat, 2015) , and contains a set of grammar rules written in Thrax. Numbers are handled with a classification grammar, which classifies input containing digits into several semiotic classes, and a verbalization grammar, which inflates the numbers. The verbalization grammar labels possible verbalizations with part-of-speech tags and a language model is then used to choose the most probable word form where verbalization is ambiguous.", "cite_spans": [ { "start": 233, "end": 257, "text": "(Ebden and Sproat, 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "In the last few years, people have been experimenting with deep learning (neural networks) for text normalization (Pusateri et al., 2017; Pramanik and Hussain, 2019; Zhang et al., 2019) . This works well for many tasks, but the task of text normalization is fragile. Neural networks are prone to so-called unrecoverable errors; they do not only expand the words incorrectly, but the result is misleading. For instance, a navigation system could send the user to another side of town because it incorrectly expanded the postal code. Some experiments have been performed with hybrid systems, using a neural model and then applying a grammar system, such as Kestrel. The grammar system implements an overgenerating grammar, which includes the correct verbalization, and can be used to guide the system (Sproat and Jaitly, 2017; Zhang et al., 2019 Zhang et al., , 2020 .", "cite_spans": [ { "start": 114, "end": 137, "text": "(Pusateri et al., 2017;", "ref_id": "BIBREF9" }, { "start": 138, "end": 165, "text": "Pramanik and Hussain, 2019;", "ref_id": "BIBREF8" }, { "start": 166, "end": 185, "text": "Zhang et al., 2019)", "ref_id": "BIBREF18" }, { "start": 799, "end": 824, "text": "(Sproat and Jaitly, 2017;", "ref_id": "BIBREF14" }, { "start": 825, "end": 843, "text": "Zhang et al., 2019", "ref_id": "BIBREF18" }, { "start": 844, "end": 864, "text": "Zhang et al., , 2020", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "In 2016, Sproat et al. 
(Sproat and Jaitly, 2016 ) released a challenge: given a large corpus of written text aligned to its normalized spoken form, train an RNN to learn the correct normalization function. The authors presented a dataset of general text with generated normalizations using an existing text normalization component of a TTS system (Kestrel).", "cite_spans": [ { "start": 23, "end": 47, "text": "(Sproat and Jaitly, 2016", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1.1" }, { "text": "The data used are 40,000 sentences (741,909 words) from the 2017 version of the Icelandic Gigaword Corpus (IGC). We use sentences that include many NSWs, such as numbers, abbreviations, and symbols. They are from all sources in the IGC. 534 of the sentences deal with sports results and were handled separately. The sentences were manually annotated and make up the first manually curated normalization corpus for Icelandic. For a small experiment on inter annotator agreement, three people from Reykjav\u00edk University normalized 30 sentences with 205 NSWs, using the guidelines in Appendix B. The annotators expanded words without regard to a semiotic class. The inter-annotator agreement for NSWs was \u03ba = 0.85.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Icelandic is an inflected language, where each word can have various forms of words depending on the context. For example, the number 2 (two) can be expanded as tveir, tvo, tveimur, tveggja, tvaer, or tv\u00f6, depending on the next word's case. The ordinal number 2. (second) can then be annar, annan, \u00f6\u00f0rum, annars, \u00f6nnur, a\u00f0ra, annarri, annarrar, anna\u00f0, \u00f6\u00f0ru, annars, a\u00f0rir, annarra, or a\u00f0rar. Only the first four numbers (one, two, three, and four) have this inflected nature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "The most significant ambiguity in the data was whether to write hyphens and dashes as til (to) or silence when it was used to describe sports results. In Icelandic, a sentence like Leiknum lauk me\u00f0 2-1 sigri (The game ended with a 2-1 victory), is read as Leiknum lauk me\u00f0 tv\u00f6 (2) eitt (1) sigri and the hyphen is silent. In a TTS system, the idea is that the user can either mark the topic herself or run the text through data-driven topic classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "The system built in this research uses regular expressions and grammar rules to determine how a word should be expanded. It has been given the name Reg\u00edna. The first step of Reg\u00edna is to run rules for expansions of abbreviations, measurements, money, weblinks, and roman numerals through the unnormalized text. The rules for measurements take prepositions into account. For example, this could help when the base version of km is k\u00edl\u00f3metrar. If we say til 2 km, Reg\u00edna uses the preposition til to expand the word to the genitive case, k\u00edl\u00f3metra. The next step is to run this expanded text through a part-of-speech (POS) tagger. (Steingr\u00edmsson et al., 2019) Instead of reading km as an abbreviation (and giving it a tag as such), the tagger now recognizes the word k\u00edl\u00f3metra and knows from context it is in genitive case. Now Reg\u00edna is preserving part-of-speech tags for each word. 
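For illustration, a preposition-aware measurement rule of the kind just described could be sketched in Python roughly as follows (this is not the actual Reg\u00edna code; the rule table, the regular expression, and the function name expand_km are assumptions made for the example, covering only the 'til' + km case mentioned above):

import re

# Illustrative forms only: the two forms of 'km' mentioned in the text and the
# single preposition-to-case mapping described there ('til' governs the genitive).
KM_FORMS = {'nom': 'k\u00edl\u00f3metrar', 'gen': 'k\u00edl\u00f3metra'}
PREP_CASE = {'til': 'gen'}

MEASURE_RE = re.compile(r'(?:(?P<prep>\btil\b)\s+)?(?P<num>\d+)\s*km\b')

def expand_km(text):
    def repl(match):
        # Pick the case from the governing preposition, if any; default to nominative.
        case = PREP_CASE.get(match.group('prep'), 'nom')
        prefix = match.group('prep') + ' ' if match.group('prep') else ''
        return prefix + match.group('num') + ' ' + KM_FORMS[case]
    return MEASURE_RE.sub(repl, text)

# expand_km('til 2 km') -> 'til 2 k\u00edl\u00f3metra' (genitive after 'til')
# expand_km('914 km')   -> '914 k\u00edl\u00f3metrar' (nominative default)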
Next, the semiotic class of remaining NSWs is determined. Rules for numbers are applied to cardinal and ordinal numbers, decimals and fractions. In this step, the words tagged as numbers consider the next word's tag. The numbers that are not followed by an adjective or a noun are assigned a default case. The final step of the system is to run the text through rules for other semiotic classes: time, sports results, digits, letters, dates, and symbols. For comparison, the normalized text was re-aligned with the manually annotated text, with each sentence and word indexed to keep the structure clear. In Appendix A, the pipeline for Reg\u00edna is shown.", "cite_spans": [ { "start": 628, "end": 656, "text": "(Steingr\u00edmsson et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "The dataset with general news had 729,763 words, of which 701,088 did not need normalization. The baseline of the system without any work was thus 96.08%. The remaining 28,675 words were split into cardinal, ordinal, and decimal numbers, digits, fractions, letter sequences, abbreviations, weblinks, measurements, clock times, dates, and symbols. The accuracy and size of each class are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 403, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "The only specific domain looked at were sports because of the ambiguity regarding hyphens. The portion regarding sports was 12,106 words, 1.7% of the dataset. The ratio of NSWs in need of normalization is relatively high in sports, 14.66%. We looked at the same semiotic classes, with an addition of a special one for sports results. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sports", "sec_num": null }, { "text": "We considered error division for the classes and listed them in Table 4 . All classes are handled alike in the two domains except for the symbol class (where a dash is generally a til (to) but silent in the sport domain), and the SPORT class is unique to sports news. The errors are divided up to:", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Error division", "sec_num": null }, { "text": "\u2022 CLASS -incorrect normalization due to misclassification of the token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error division", "sec_num": null }, { "text": "\u2022 FORM -incorrect grammatical form of the normalization but otherwise correct", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error division", "sec_num": null }, { "text": "\u2022 NON-ERRORS -errors due to errors in the manual data, misalignment of whitespaces, or instances where both expansions are correct but different (e.g. \u00fe\u00fasund and eitt \u00fe\u00fasund (thousand and one thousand)). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error division", "sec_num": null }, { "text": "To compare Reg\u00edna with the old Thrax normalizer, Textahaukur (Nikul\u00e1sd\u00f3ttir and Gu\u00f0nason, 2019) , 400 sentences from the whole dataset were normalized with both systems. 147 of those contained NSWs and were observed for more meaningful results. Reg\u00edna had an accuracy score of 83.67%, with 20 sentences containing 22 words that did not match the manual annotation. 
Textahaukur had an accuracy score of 61.22%, with 55 sentences containing 106 incorrectly normalized words.", "cite_spans": [ { "start": 80, "end": 95, "text": "Gu\u00f0nason, 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with an existing system", "sec_num": null }, { "text": "Normalization systems are either rule-based, made with neural models or a hybrid of those two. The drawback of a rule-based system is that it is less generalizable and requires more maintenance. The main advantage is that it never makes unrecoverable errors. The worst errors Reg\u00edna makes is not expanding a non-standard word, which happens when it does not find an appropriate semiotic class. It can also happen that it assigns the wrong class to it -making the expansion comprehensible but awkward.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.1" }, { "text": "As mentioned, the main problem with an inflected language like Icelandic is that each word has several forms. A part-of-speech tagger helps determine the expansion of the preceding number, but if the word following a number is not a noun or an adjective, it is given a default form. For cardinal numbers, that is the neutral, nominative, singular version, which works well with sports results, years, timings, addresses, et cetera. For decimals, it is the masculine, nominative, singular version. For ordinal numbers, it is the masculine, dative, singular form. This covers most cases, especially dates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.1" }, { "text": "These default cases, plus the next word's tag, covered a vast majority of examples in the data. The incorrect examples from these semiotic classes, as seen in 4, are mostly from the target word neither having a tag for reference nor being in the default form. Abbreviations, measurements, and fractions have the same problem, i.e., the default class is not correct. The system also marks dates written as 6/6 as fractions and expands them to sex sj\u00f6ttu (six sixths) instead of sj\u00f6tti j\u00fan\u00ed (the sixth of June).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.1" }, { "text": "The system is built with an intention of a spellcorrecting layer before the normalization. In Icelandic, the rule is to write thousands separators with a dot and decimal separators with a comma, opposite to English. Reg\u00edna sends numbers that do not conform to Icelandic rules to the digit class and writes them out, digit by digit, sometimes going against the author's intention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.1" }, { "text": "The time class only has rules for the 24-hour clock format, so when it read results from timekeeping, it did not expand the numbers correctly. 
The symbol class mostly suffers from the strict PLAIN 384 280 0 103 0 0 1 CARDINAL 882 17 820 23 8 13 1 ORDINAL 223 6 212 0 0 5 0 DIGIT 118 110 0 4 0 4 0 DECIMAL 51 27 23 0 1 0 0 FRACTION 27 3 21 0 1 0 2 LETTERS 142 9 0 11 120 2 0 ABBREVIATIONS 328 51 83 1 184 8 1 ROMAN NUMBERS 69 0 47 0 22 0 2 MONEY 188 1 84 3 5 64 31 WLINK 4 2 0 0 1 0 1 MEASURE 595 6 524 2 0 60 3 TIME 140 4 3 1 0 132 0 DATE 182 16 81 663 4 8 Finally, the slight inaccuracy of the plain class, which should remain unchanged, resulted mainly from words being misclassified to the LETTERS class (NATO \u2212 \u2192 N A T O) and mistakes in the manual data.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 666, "text": "PLAIN 384 280 0 103 0 0 1 CARDINAL 882 17 820 23 8 13 1 ORDINAL 223 6 212 0 0 5 0 DIGIT 118 110 0 4 0 4 0 DECIMAL 51 27 23 0 1 0 0 FRACTION 27 3 21 0 1 0 2 LETTERS 142 9 0 11 120 2 0 ABBREVIATIONS 328 51 83 1 184 8 1 ROMAN NUMBERS 69 0 47 0 22 0 2 MONEY 188 1 84 3 5 64 31 WLINK 4 2 0 0 1 0 1 MEASURE 595 6 524 2 0 60 3 TIME 140 4 3 1 0 132 0 DATE 182 16 81 663 4 8", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.1" }, { "text": "The errors made by Reg\u00edna and Textahaukur were examined. Reg\u00edna had some abbreviations that were not expanded because of possible ambiguity. Otherwise, a majority of the errors was the wrong case of an expanded number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison between systems", "sec_num": "4.1.1" }, { "text": "These were also the most common errors for Textahaukur. More serious errors were a strong tendency to change cases in the middle of a token. For example, the number 110 was normalized in the feminine for the first part (hundra\u00f0asta og) and then masculine (t\u00edundi). Textahaukur deleted tokens when they were followed by a token it could not handle (5,5\u00b0C became\u00b0) or skipped handling a whole sentence. In some cases, Textahaukur did not have any rules implemented. These were cases of weblinks and sports, which Reg\u00edna handles almost perfectly with rigid rules on both ends.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison between systems", "sec_num": "4.1.1" }, { "text": "Reg\u00edna and Textahaukur both had cases where they expanded correctly, but the manual normalization was incorrect, showing that even when a computer knows less than a person, it is more consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison between systems", "sec_num": "4.1.1" }, { "text": "Reg\u00edna works well and does not return misleading results. The manually annotated data inevitably became a development dataset, since it was always visible for the developer of Reg\u00edna. However, this is exclusively a problem for comparing the system with the corpus. Reg\u00edna will be used to normalize text for TTS synthesis. Although the exact expansion might differ from person to person, that does not indicate an incorrect normalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4.2" }, { "text": "In the future, we want to do more thorough experiments on inter-annotator agreement. For the 205 words, the annotators mostly disagreed on words that can be expanded in multiple ways. Reg\u00edna will be used to normalize more data for further development in text normalization, using neural models. 
For the TTS application, we will create a test set-up for extrinsic evaluation given the new dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "4.2" }, { "text": "https://github.com/google/sparrowhawk", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the Language Technology Programme for Icelandic 2019-2023, funded by the Icelandic government. We want to thank Haukur P\u00e1ll J\u00f3nsson and David Erik Mollberg for annotation and conversations about data structure, and Ari P\u00e1ll Kristinsson, for advice on grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "\u2022 Delete a dash at the start of the line.\u2022 If a word ends in dash it is ignored.\u2022 @ is written hj\u00e1.\u2022 = is written jafnt og.\u2022 Links are written like www.mbl.is/123 \u2212 \u2192 w w w punktur m b l punktur i s sk\u00e1strik einn tveir \u00fer\u00edr, all letters are separated except for symbols and numbers, they are written out.\u2022 For basketball results like 24/14 fr\u00e1k\u00f6st, the / is written as , i.e., 24/14 fr\u00e1k\u00f6st \u2212 \u2192 tuttugu og fj\u00f6gur fj\u00f3rt\u00e1n fr\u00e1k\u00f6st.\u2022 In digit sequences, dashes are written as , e.g., 234-353-42 \u2212 \u2192 tveir \u00fer\u00edr fj\u00f3rir \u00fer\u00edr fimm \u00fer\u00edr fj\u00f3rir tveir", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Reg\u00edna Pipeline", "sec_num": null }, { "text": "\u2022 DASH: can imply bandstrik (dash) (links), (sports results), til (number intervals) or nothing.\u2022 SLASH: can imply sk\u00e1strik (slash) (links), og (and, e\u00f0a (or) , a fraction, a or nothing.", "cite_spans": [ { "start": 150, "end": 164, "text": "(and, e\u00f0a (or)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B.2 Ambiguities", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Kestrel TTS text normalization system", "authors": [ { "first": "Peter", "middle": [], "last": "Ebden", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2015, "venue": "Natural Language Engineering", "volume": "21", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Ebden and Richard Sproat. 2015. The Kestrel TTS text normalization system. Natural Language Engineering, 21(3):333.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Towards endto-end speech recognition with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2014, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1764--1772", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves and Navdeep Jaitly. 2014. Towards end- to-end speech recognition with recurrent neural net- works. In International conference on machine learning, pages 1764-1772. 
PMLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "J\u00f6kull J\u00f3hannsson, El\u00edn Carstensd\u00f3ttir, Hannes H\u00f6gni Vilhj\u00e1lmsson", "authors": [ { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" }, { "first": "Oddur", "middle": [], "last": "Kjartansson", "suffix": "" } ], "year": 2012, "venue": "Hrafn Loftsson, Sigr\u00fan Helgad\u00f3ttir, Krist\u00edn M J\u00f3hannsd\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f3n Gu\u00f0nason, Oddur Kjartansson, J\u00f6kull J\u00f3hanns- son, El\u00edn Carstensd\u00f3ttir, Hannes H\u00f6gni Vilhj\u00e1lms- son, Hrafn Loftsson, Sigr\u00fan Helgad\u00f3ttir, Krist\u00edn M J\u00f3hannsd\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson. 2012. Almannaromur: An open icelandic speech cor- pus. In Spoken Language Technologies for Under- Resourced Languages.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Building an asr corpus using althingi's parliamentary speeches", "authors": [ { "first": "R\u00f3bert", "middle": [], "last": "Inga R\u00fan Helgad\u00f3ttir", "suffix": "" }, { "first": "", "middle": [], "last": "Kjaran", "suffix": "" } ], "year": 2017, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "2163--2167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inga R\u00fan Helgad\u00f3ttir, R\u00f3bert Kjaran, Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, and J\u00f3n Gu\u00f0nason. 2017. Building an asr corpus using althingi's parliamentary speeches. In INTERSPEECH, pages 2163-2167.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Samr\u00f3mur: Crowd-sourcing data collection for icelandic speech recognition", "authors": [ { "first": "David", "middle": [ "Erik" ], "last": "Mollberg", "suffix": "" }, { "first": "\u00d3lafur", "middle": [], "last": "Helgi J\u00f3nsson", "suffix": "" }, { "first": "Sunneva", "middle": [], "last": "\u00deorsteinsd\u00f3ttir", "suffix": "" }, { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "Eyd\u00eds", "middle": [], "last": "Huld Magn\u00fasd\u00f3ttir", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gudnason", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "3463--3467", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Erik Mollberg, \u00d3lafur Helgi J\u00f3nsson, Sunneva \u00deorsteinsd\u00f3ttir, Stein\u00fe\u00f3r Steingr\u00edmsson, Eyd\u00eds Huld Magn\u00fasd\u00f3ttir, and Jon Gudnason. 2020. Samr\u00f3mur: Crowd-sourcing data collection for icelandic speech recognition. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 3463- 3467.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bootstrapping a Text Normalization System for an Inflected Language. Numbers as a Test Case", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2019, "venue": "IN-TERSPEECH", "volume": "", "issue": "", "pages": "4455--4459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir and J\u00f3n Gu\u00f0nason. 2019. Bootstrapping a Text Normalization System for an Inflected Language. Numbers as a Test Case. 
In IN- TERSPEECH, pages 4455-4459.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Anton Karl Ingason, Hrafn Loftsson, Eir\u00edkur R\u00f6gnvaldsson, Einar Freyr Sigur\u00f0sson, and Stein\u00fe\u00f3r Steingr\u00edmsson. 2020. Language technology programme for icelandic", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "2019--2023", "other_ids": { "arXiv": [ "arXiv:2003.09244" ] }, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, J\u00f3n Gu\u00f0nason, An- ton Karl Ingason, Hrafn Loftsson, Eir\u00edkur R\u00f6gn- valdsson, Einar Freyr Sigur\u00f0sson, and Stein\u00fe\u00f3r Steingr\u00edmsson. 2020. Language technology pro- gramme for icelandic 2019-2023. arXiv preprint arXiv:2003.09244.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An icelandic pronunciation dictionary for tts", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" }, { "first": "Eir\u00edkur", "middle": [], "last": "R\u00f6gnvaldsson", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "339--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, J\u00f3n Gu\u00f0nason, and Eir\u00edkur R\u00f6gnvaldsson. 2018. An icelandic pronunciation dictionary for tts. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 339-345. IEEE.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text normalization using memory augmented neural networks", "authors": [ { "first": "Subhojeet", "middle": [], "last": "Pramanik", "suffix": "" }, { "first": "Aman", "middle": [], "last": "Hussain", "suffix": "" } ], "year": 2019, "venue": "Speech Communication", "volume": "109", "issue": "", "pages": "15--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Subhojeet Pramanik and Aman Hussain. 2019. Text normalization using memory augmented neural net- works. Speech Communication, 109:15-23.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Mostly Data-Driven Approach to Inverse Text Normalizatio7n", "authors": [ { "first": "Ernest", "middle": [], "last": "Pusateri", "suffix": "" }, { "first": "Bharat", "middle": [ "Ram" ], "last": "Ambati", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Brooks", "suffix": "" }, { "first": "Ondrej", "middle": [], "last": "Platek", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Mcallaster", "suffix": "" }, { "first": "Venki", "middle": [], "last": "Nagesha", "suffix": "" } ], "year": 2017, "venue": "INTER-SPEECH", "volume": "", "issue": "", "pages": "2784--2788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ernest Pusateri, Bharat Ram Ambati, Elizabeth Brooks, Ondrej Platek, Donald McAllaster, and Venki Nagesha. 2017. A Mostly Data-Driven Ap- proach to Inverse Text Normalizatio7n. In INTER- SPEECH, pages 2784-2788. 
Stockholm.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Manual speech synthesis data acquisition-from script design to recording speech", "authors": [ { "first": "Atli", "middle": [], "last": "Sigurgeirsson", "suffix": "" }, { "first": "Gunnar", "middle": [], "last": "\u00d6rn\u00f3lfsson", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "316--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atli Sigurgeirsson, Gunnar \u00d6rn\u00f3lfsson, and J\u00f3n Gu\u00f0- nason. 2020. Manual speech synthesis data acquisition-from script design to recording speech. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced lan- guages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 316-320.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multilingual Text Analysis for Text-to-Speech Synthesis", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1996, "venue": "Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP'96", "volume": "3", "issue": "", "pages": "1365--1368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat. 1996. Multilingual Text Analysis for Text-to-Speech Synthesis. In Proceeding of Fourth International Conference on Spoken Language Pro- cessing. ICSLP'96, volume 3, pages 1365-1368. IEEE.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Normalization of non-standard words. Computer speech & language", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Richards", "suffix": "" } ], "year": 2001, "venue": "", "volume": "15", "issue": "", "pages": "287--333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat, Alan W Black, Stanley Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words. Com- puter speech & language, 15(3):287-333.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "RNN Approaches to Text Normalization: A Challenge", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.00068" ] }, "num": null, "urls": [], "raw_text": "Richard Sproat and Navdeep Jaitly. 2016. RNN Approaches to Text Normalization: A Challenge. 
arXiv preprint arXiv:1611.00068.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An RNN Model of Text Normalization", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2017, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "754--758", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Sproat and Navdeep Jaitly. 2017. An RNN Model of Text Normalization. In INTERSPEECH, pages 754-758. Stockholm.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "M\u00e1lr\u00f3mur: A manually verified corpus of recorded icelandic speech", "authors": [ { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "237--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, J\u00f3n Gu\u00f0nason, Sigr\u00fan Hel- gad\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson. 2017. M\u00e1lr\u00f3- mur: A manually verified corpus of recorded ice- landic speech. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 237-240.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Risam\u00e1lheild: A very large icelandic text corpus", "authors": [ { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "Sigr\u00fan", "middle": [], "last": "Helgad\u00f3ttir", "suffix": "" }, { "first": "Eir\u00edkur", "middle": [], "last": "R\u00f6gnvaldsson", "suffix": "" }, { "first": "Starka\u00f0ur", "middle": [], "last": "Barkarson", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, Sigr\u00fan Helgad\u00f3ttir, Eir\u00edkur R\u00f6gnvaldsson, Starka\u00f0ur Barkarson, and J\u00f3n Gu\u00f0- nason. 2018. Risam\u00e1lheild: A very large icelandic text corpus. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Augmenting a bilstm tagger with a morphological lexicon and a lexical category identification step", "authors": [ { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "\u00d6rvar", "middle": [], "last": "K\u00e1rason", "suffix": "" }, { "first": "Hrafn", "middle": [], "last": "Loftsson", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.09038" ] }, "num": null, "urls": [], "raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, \u00d6rvar K\u00e1rason, and Hrafn Loftsson. 2019. Augmenting a bilstm tagger with a morphological lexicon and a lexical category iden- tification step. 
arXiv preprint arXiv:1907.09038.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural Models of Text Normalization for Speech Applications", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Axel", "middle": [ "H" ], "last": "Ng", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "" }, { "first": "Xiaochang", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Gorman", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2019, "venue": "Computational Linguistics", "volume": "45", "issue": "2", "pages": "293--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, Richard Sproat, Axel H Ng, Felix Stahlberg, Xiaochang Peng, Kyle Gorman, and Brian Roark. 2019. Neural Models of Text Nor- malization for Speech Applications. Computational Linguistics, 45(2):293-337.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A hybrid text normalization system using multi-head self-attention for mandarin", "authors": [ { "first": "Junhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shichao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zejun", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6694--6698", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhui Zhang, Junjie Pan, Xiang Yin, Chen Li, Shichao Liu, Yang Zhang, Yuxuan Wang, and Ze- jun Ma. 2020. A hybrid text normalization sys- tem using multi-head self-attention for mandarin. In ICASSP 2020-2020 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6694-6698. IEEE.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Results for general news", "html": null, "num": null, "content": "
SEMIOTIC CLASS ACCURACY [%] # examples
ALL 98.45 12,106
PLAIN 99.98 8,923
CARDINAL 96.84 538
ORDINAL 91.89 74
DIGIT 0.0 1
DECIMAL 0.67 3
FRACTION 0.0 1
LETTERS 99.06 106
ABBREVIATIONS 75.0 20
WLINK 100.0 1
MEASURE 60.0 5
TIME 100.0 2
DATE 88.4 43
SYMB 90.91 88
SPORT 84.55 893
PUNCT 1.0 1,408
", "type_str": "table" }, "TABREF2": { "text": "Results for sports news", "html": null, "num": null, "content": "", "type_str": "table" }, "TABREF4": { "text": "", "html": null, "num": null, "content": "
Incorrect results from Reg\u00edna
\u2022 NO ACTION - the token was not expanded
\u2022 INSUFFICIENT - the token was only partially expanded
\u2022 OTHER - the token was normalized incorrectly, not due to class or grammatical form. Examples include dates written in English, incorrectly expanded dashes, and reverse order of money, such as $5 incorrectly being expanded to dollarar fimm (dollars five).
", "type_str": "table" }, "TABREF6": { "text": "Error division translation of / to sk\u00e1strik (slash) and -/to til in general text, silence in sports. Reg\u00edna tried to catch all non-standard words, sometimes outside its scope. Parts of sentences in Icelandic text are sometimes written with spelling errors, in English, or as with the separators, with rules that do not apply to Icelandic. Both ends have rigid rules about weblinks and sports results, and the results are almost 100% accurate. The only incorrect examples are misclassified -like 24/7 (twenty-four-seven) is classified as a sports result.", "html": null, "num": null, "content": "", "type_str": "table" } } } }