{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:13.806792Z" }, "title": "An Unsupervised method for OCR Post-Correction and Spelling Normalisation for Finnish", "authors": [ { "first": "Quan", "middle": [], "last": "Duong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" }, { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" }, { "first": "Simon", "middle": [], "last": "Hengchen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Rootroo", "middle": [], "last": "Ltd", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Historical corpora are known to contain errors introduced by OCR (optical character recognition) methods used in the digitization process, often said to be degrading the performance of NLP systems. Correcting these errors manually is a timeconsuming process and a great part of the automatic approaches have been relying on rules or supervised machine learning. We build on previous work on fully automatic unsupervised extraction of parallel data to train a character-based sequenceto-sequence NMT (neural machine translation) model to conduct OCR error correction designed for English, and adapt it to Finnish by proposing solutions that take the rich morphology of the language into account. Our new method shows increased performance while remaining fully unsupervised, with the added benefit of spelling normalisation. The source code and models are available on GitHub 1 and Zenodo 2 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Historical corpora are known to contain errors introduced by OCR (optical character recognition) methods used in the digitization process, often said to be degrading the performance of NLP systems. Correcting these errors manually is a timeconsuming process and a great part of the automatic approaches have been relying on rules or supervised machine learning. We build on previous work on fully automatic unsupervised extraction of parallel data to train a character-based sequenceto-sequence NMT (neural machine translation) model to conduct OCR error correction designed for English, and adapt it to Finnish by proposing solutions that take the rich morphology of the language into account. Our new method shows increased performance while remaining fully unsupervised, with the added benefit of spelling normalisation. The source code and models are available on GitHub 1 and Zenodo 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Nature language processing (NLP) is arguably tremendously difficult to tackle in Finnish, due to an extremely rich morphology. This difficulty is reinforced by the limited availability of NLP tools for Finnish in general, and perhaps even more so for historical data by the fact that morphology has evolved through time -some older inflections either do not exist anymore, or are hardly used in modern Finnish. 
As historical data comes with its own challenges, the presence of OCR errors makes the data even more burdensome for modern NLP methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Obviously, this problematic situation is not unique to Finnish. There are several other languages in the world with rich morphologies and relatively poor support for both historical and modern NLP. Such is the case for most of the languages related to Finnish, such as Erzya, Sami and Komi: these Uralic languages are severely endangered but have valuable historical resources in books that are not yet available in a digital format. OCR remains a problem especially for endangered languages (Partanen, 2017) , although OCR quality for such languages can be improved by limiting the domain in which the OCR models are trained and used (Partanen and Rie\u00dfler, 2019) .", "cite_spans": [ { "start": 498, "end": 514, "text": "(Partanen, 2017)", "ref_id": "BIBREF22" }, { "start": 641, "end": 669, "text": "(Partanen and Rie\u00dfler, 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automated OCR post-correction is usually modelled as a supervised machine learning problem where a model is trained on parallel data consisting of OCR-erroneous text and manually corrected text. However, we want to develop a method that can be used even in contexts where no manually annotated data is available. The most viable recent method for such a task is the one presented by H\u00e4m\u00e4l\u00e4inen and Hengchen (2019) . However, their model works only on correcting individual words without considering their context in sentences, and as it focuses on English, it completely ignores the issues arising from a rich morphology. Extending their approach, we introduce a self-supervised model, learned from the real OCRed text, to automatically generate parallel data. We then train character-level sequence-to-sequence (seq2seq) NMT models with context information to correct OCR errors. The NMT models are based on the Transformer architecture (Vaswani et al., 2017) ; a detailed comparison of the models is presented in this article.", "cite_spans": [ { "start": 385, "end": 415, "text": "H\u00e4m\u00e4l\u00e4inen and Hengchen (2019)", "ref_id": "BIBREF7" }, { "start": 946, "end": 968, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As more and more digital humanities (DH) works start to use the large-scale, digitised and OCRed collections made available by national libraries and other digitisation projects, the quality of OCR is a central point for text-based humanities research. Can one trust the output of complex NLP systems if these are fed with bad OCR? Beyond the common pitfalls inherent to historical data (see Piotrowski (2012) for a very thorough overview), some works have tried to answer the question stated above: Hill and Hengchen (2019) use a subset of the 18th-century corpus ECCO (Eighteenth Century Collections Online, https://www.gale.com/primary-sources/eighteenth-century-collections-online) as well as its keyed-in counterpart ECCO-TCP to compare the output of common NLP tasks used in DH, and conclude that OCR noise does not seem to be a large factor in quantitative analyses. This conclusion is similar to previous work by Rodriquez et al. (2012) in the case of NER and to Franzini et al. (2018) for authorship attribution, but in opposition to Mutuvi et al.
(2018), who focus on topic modelling for historical newspapers and conclude that OCR does play a role. More recently, and still on historical newspapers, van Strien et al. (2020) conclude that while OCR noise does have an impact, its effect differs widely between downstream tasks.", "cite_spans": [ { "start": 392, "end": 409, "text": "Piotrowski (2012)", "ref_id": "BIBREF25" }, { "start": 797, "end": 820, "text": "Rodriquez et al. (2012)", "ref_id": "BIBREF28" }, { "start": 847, "end": 869, "text": "Franzini et al. (2018)", "ref_id": "BIBREF6" }, { "start": 919, "end": 939, "text": "Mutuvi et al. (2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "It has become apparent that OCR quality for historical texts is central for funding bodies and collection-holding institutions alike. Reports such as the one put forward by Smith and Cordell (2019) give rise to OCR initiatives, while the Library-of-Congress-commissioned report by Cordell (2020) underlines the importance of OCR for cultural heritage collections. These reports echo earlier work by, among others, Tanner et al. (2009), who tackle the digitisation of British newspapers, the EU-wide IMPACT project (http://www.impact-project.eu), which gathers 26 national libraries, or Adesam et al. (2019), who set out to analyse the quality of OCR made available by the Swedish language bank.", "cite_spans": [ { "start": 181, "end": 205, "text": "Smith and Cordell (2019)", "ref_id": "BIBREF33" }, { "start": 281, "end": 295, "text": "Cordell (2020)", "ref_id": "BIBREF2" }, { "start": 415, "end": 435, "text": "Tanner et al. (2009)", "ref_id": "BIBREF35" }, { "start": 555, "end": 575, "text": "Adesam et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "OCR post-correction has been tackled in previous work. Specifically for Finnish, Drobac et al. (2017) correct the OCR of newspapers using weighted finite-state methods; in the same vein, Silfverberg and Rueter (2015) do the same for Finnish (and Erzya). Most recent approaches rely on the machine translation (MT) of \"dirty\" text", "cite_spans": [ { "start": 81, "end": 101, "text": "Drobac et al. (2017)", "ref_id": "BIBREF5" }, { "start": 186, "end": 215, "text": "Silfverberg and Rueter (2015)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "into \"clean\" texts. These MT approaches are quickly moving from statistical MT (SMT) -as previously used for historical text normalisation, e.g. the work by Pettersson et al. (2013) -to NMT: Dong and Smith (2018) use a word-level seq2seq NMT approach for OCR post-correction, while H\u00e4m\u00e4l\u00e4inen and Hengchen (2019), on which we base our work, mobilised character-level NMT. Very recently, Nguyen et al. (2020) use BERT embeddings to improve an NMT-based OCR post-correction system for English.", "cite_spans": [ { "start": 157, "end": 181, "text": "Pettersson et al.
(2013)", "ref_id": "BIBREF24" }, { "start": 282, "end": 312, "text": "H\u00e4m\u00e4l\u00e4inen and Hengchen (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this section, we describe our methods for automatically generating parallel data that can be used in a character-level NMT model to conduct OCR post-correction. In short, our method requires only a corpus with OCRed text that we want to automatically correct, a word list, a morphological analyzer and any corpus of error free text. Since we focus on Finnish only, it is important to note that such resources exist for many endangered Uralic languages as well as they have extensive XML dictionaries and FSTs available (see (H\u00e4m\u00e4l\u00e4inen and Rueter, 2018) ) together with a growing number of Universal Dependencies (Nivre et al., 2016) treebanks such as Komi-Zyrian (Lim et al., 2018) , Erzya (Rueter and Tyers, 2018) , Komi-Permyak (Rueter et al., 2020) and North Sami (Sheyanova and Tyers, 2017) .", "cite_spans": [ { "start": 527, "end": 556, "text": "(H\u00e4m\u00e4l\u00e4inen and Rueter, 2018)", "ref_id": "BIBREF8" }, { "start": 616, "end": 636, "text": "(Nivre et al., 2016)", "ref_id": "BIBREF21" }, { "start": 667, "end": 685, "text": "(Lim et al., 2018)", "ref_id": "BIBREF16" }, { "start": 694, "end": 718, "text": "(Rueter and Tyers, 2018)", "ref_id": "BIBREF30" }, { "start": 734, "end": 755, "text": "(Rueter et al., 2020)", "ref_id": "BIBREF29" }, { "start": 771, "end": 798, "text": "(Sheyanova and Tyers, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "3" }, { "text": "We design the first experiment based on the previous work (H\u00e4m\u00e4l\u00e4inen and Hengchen, 2019) , who train a character-level NMT system. Their research indicates that there is a strong semantic relationship between the correct word to its erroneous forms and we can generate OCR error candidates using semantic similarity. To be able to train the NMT model, we need to extract the parallel data of correct words and their OCR errors. Accordingly, we trained the Word2Vec model (Mikolov et al., 2013) on the Historical Newspaper of Finland from 1771 to 1929 using the Gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010). After obtaining the Word2Vec model and its trained vocabulary, we extract the parallel data by using the Finnish morphological FST, Omorfi (Pirinen, 2015), provided in the UralicNLP library (H\u00e4m\u00e4l\u00e4inen, 2019) and -following H\u00e4m\u00e4l\u00e4inen and Hengchen (2019) -Levenshtein edit distance (Levenshtein, 1965) . The original approach used a lemma list for English for the data extraction, but we use an FST so that we can distinguish morphological forms from OCR errors. 
Without the FST, different inflectional forms would also be considered to be OCR errors, which is particularly counterproductive for a highly inflected language.", "cite_spans": [ { "start": 58, "end": 89, "text": "(H\u00e4m\u00e4l\u00e4inen and Hengchen, 2019)", "ref_id": "BIBREF7" }, { "start": 472, "end": 494, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF18" }, { "start": 794, "end": 812, "text": "(H\u00e4m\u00e4l\u00e4inen, 2019)", "ref_id": "BIBREF9" }, { "start": 886, "end": 905, "text": "(Levenshtein, 1965)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "We build a list of correct Finnish words by lemmatising all words in the Word2Vec model's vocabulary: if the lemma is present in the Finnish Wiktionary lemma list (https://fi.wiktionary.org/wiki/Wikisanakirja:Etusivu), it is considered correct and saved as such. Next, for each word in this \"correct\" list, we retrieve the most similar words from the Word2Vec model. Those similar words are checked to see whether they exist in the correct list or not, and are separated into two different groups: correct words and OCR errors. Notice that not all the words in the error list are actually erroneous OCR forms of the given correct word; such words need to be filtered out. Following H\u00e4m\u00e4l\u00e4inen and Hengchen (2019), we calculate the Levenshtein edit distance scores of the OCR errors to the correct word and empirically set a threshold of 4 as the maximum distance for accepting a candidate as a true error form of the given word. As a result, for each given correct word, we have a set of similar correct words (including the given one) and a set of error words. From the two extracted groups, we do a pairwise mapping to obtain one error word as training input and one correct word as the target output. Finally, the parallel data is converted into a character-level format before feeding it to the NMT model for training. For example, in the pair j o l e e n \u2192 j o k e e n (\"into a river\"), the first word is incorrect and the second one is the correct form. We follow H\u00e4m\u00e4l\u00e4inen and Hengchen (2019) and use OpenNMT (Klein et al., 2017) with default settings, i.e. a bi-directional LSTM with global attention (Luong et al., 2015). We train for 10,000 steps and keep the last checkpoint as a baseline, which will be referred to as \"NATAS\" in the remainder of this paper.", "cite_spans": [ { "start": 1384, "end": 1414, "text": "H\u00e4m\u00e4l\u00e4inen and Hengchen (2019)", "ref_id": "BIBREF7" }, { "start": 1431, "end": 1451, "text": "(Klein et al., 2017)", "ref_id": "BIBREF14" }, { "start": 1522, "end": 1542, "text": "(Luong et al., 2015)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "In the following subsections, we introduce a different method to create a parallel dataset and a new sequence-to-sequence model to train on it. The baseline approach presented above might introduce noise when we are unable to confidently know that an error word is mapped correctly to the given correct word, especially in the case of semantically similar words that have similar lengths. Another limitation of the baseline approach is that an NMT model usually requires more variants to achieve better performance, something limited by the vocabulary of the Word2Vec model, which is trained with a frequency threshold so as to provide semantically similar words.
To solve these problems, we artificially introduce OCR-like errors in a modern corpus, and thus obtain more variants of the training word pairs and less noise in the data. We further specialise our approach by applying the Transformer model, in experiments with and without context words, instead of the default OpenNMT algorithms for training. In the next section, we detail our implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.2" }, { "text": "For the artificial dataset, we use the Yle News corpus (http://urn.fi/urn:nbn:fi:lb-2019030701), which contains more than 700 thousand articles written in Finnish from 2011 to 2018. All the articles are stored in a text file. Punctuation and characters not present in the Finnish alphabet are removed before tokenisation. After cleaning, we generate an artificial dataset using two different methods: a random generator and a trained OCR error generator model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "3.2.1" }, { "text": "As previously stated, we use a random generator to sample an OCR error word. In OCRed text, an error normally happens when a character is misrecognized or ignored. This behavior causes some characters in the word to be missed, altered or introduced, and such wrong characters make up only a small proportion of the text. Thus, we design Algorithm 1 to produce similar errors in the modern corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Generator", "sec_num": null }, { "text": "For each word in the dataset, we introduce errors by randomly deleting, replacing and adding characters with a noise-rate threshold of 0.07. The valid characters to be changed, added or removed must be in the Finnish alphabet; we do not introduce special characters as errors. The idea is that we select a random character position in the string, and an edit is applied only with a probability given by the noise rate multiplied by the length of the string, which restricts the percentage of errors in the word. This means that for a long word (e.g., 15 characters) an error is always proposed. The process is repeated for each action (deleting, replacing, adding); thus a word could have every kind of error, or none, if the random draws exceed the threshold. A longer word is likely to have more errors than a shorter one (see the sketch at the end of this subsection).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Generator", "sec_num": null }, { "text": "Trained Generator Similarly to the random generator, we modify the correct word into an erroneous form, but with a different approach. Instead of pure randomness, we build a model to better simulate OCR erroneous forms. The hypothesis is that if the artificial errors introduced to words follow the same patterns as those found in the real OCRed text, the resulting model will be more effective when applied back to the real dataset. For example, the letters \"i\" and \"l\" are more likely to be confused by the OCR engine than \"i\" and \"g\". To build the error generation model, we use the extracted parallel dataset from the NATAS experiment. However, the source and target for the NMT model are reversed, so that correctly spelled words are the input and erroneous words are the output of the training. By trying to predict an OCR erroneous form for a given correct spelling, the model can learn an error pattern that mimics the real OCRed text.
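The sketch referenced above gives one minimal reading of the random generator; it is our own illustration under the stated assumptions (0.07 noise rate, Finnish alphabet only), and the exact sampling details of Algorithm 1 may differ:

import random

FINNISH_ALPHABET = 'abcdefghijklmnopqrstuvwxyz\u00e5\u00e4\u00f6'
NOISE_RATE = 0.07

def add_random_errors(word):
    # Each action (delete, replace, add) fires with probability
    # min(1, NOISE_RATE * len(word)), so longer words receive more edits.
    chars = list(word)
    for action in ('delete', 'replace', 'add'):
        if random.random() >= NOISE_RATE * len(chars):
            continue
        pos = random.randrange(len(chars))
        if action == 'delete' and len(chars) > 1:
            del chars[pos]
        elif action == 'replace':
            chars[pos] = random.choice(FINNISH_ALPHABET)
        elif action == 'add':
            chars.insert(pos, random.choice(FINNISH_ALPHABET))
    return ''.join(chars)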
OpenNMT uses a cross-entropy loss by default, which causes an issue for this task. In our experiments, the model eventually predicted an output identical to the source, because that is the optimal way to reduce the loss. Since we want to generate output that differs from the input, we need to penalize the model when its prediction is identical to the input. To solve the problem, we built a simple RNN translation model with GRU (gated recurrent unit) layers and a custom loss function as shown in Equation 2. The loss function is built from the cross-entropy cost function in Equation 1, where H = {h^(1), ..., h^(n)} is the set of predicted outcomes from the model and T = {t^(1), ..., t^(n)} is the set of targets. We calculate the normal cross entropy between the predicted output \u0176 and the labels Y to find an optimal way to mimic the target Y; the inverted cross entropy between \u0176 and the inputs X, on the other hand, punishes the model if the outcomes are identical to the inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Generator", "sec_num": null }, { "text": "The model's encoder and decoder each have one embedding layer with 128 dimensions and one GRU layer of 512 hidden units. The input sequence is encoded to obtain the source's context, and this context is then passed through the decoder. For each next character of the output, the decoder concatenates the source's context, the hidden context and the character's embedded vector. The merged vectors are then passed through a linear layer to give the prediction. The model is trained with the teacher-forcing technique at a rate of 0.5. This means that for the next input character, we either select the top one from the previous output or use the already known next one from the target label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Generator", "sec_num": null }, { "text": "Poor parallelisation and long-range memorisation are characteristic weaknesses of RNNs in NMT (Bai et al., 2018) . Fortunately, Transformers have proved to be much faster (mainly due to the absence of recurrence), and since they process sequences as a whole, they have been shown to \"remember\" information better through their multi-head attention mechanism and positional embeddings (Vaswani et al., 2017) . The Transformer has been shown to be extremely efficient in various tasks (see e.g. BERT (Devlin et al., 2018) ), which is why we apply this model to our problem.", "cite_spans": [ { "start": 81, "end": 99, "text": "(Bai et al., 2018)", "ref_id": "BIBREF1" }, { "start": 356, "end": 378, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF36" }, { "start": 466, "end": 487, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{cross\\_entropy}(H, T) = -\\frac{1}{n} \\sum_{i=1}^{n} \\left[ t^{(i)} \\ln h^{(i)} + (1 - t^{(i)}) \\ln(1 - h^{(i)}) \\right] \\quad (1) \\qquad \\mathrm{loss} = \\mathrm{cross\\_entropy}(\\hat{Y}, Y) + \\frac{1}{\\mathrm{cross\\_entropy}(\\hat{Y}, X)} \\quad (2)", "eq_num": "(1)-(2)" } ], "section": "Models", "sec_num": "3.2.2" }, { "text": "Our implementation of the Transformer model is based on (Vaswani et al., 2017) and uses the PyTorch framework (https://pytorch.org/). The model contains 3 encoder and decoder layers, each of which has 8 heads of self-attention.
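Before turning to the remaining training details, the following minimal PyTorch sketch illustrates the custom loss of Equation 2; it is our own reading, and the tensor shapes as well as the use of categorical cross entropy over the character vocabulary (rather than the binary form of Equation 1) are assumptions:

import torch
import torch.nn.functional as F

def anti_copy_loss(logits, target_ids, input_ids, eps=1e-8):
    # logits: (batch, seq_len, vocab) raw decoder outputs; target_ids and
    # input_ids: (batch, seq_len) character indices of the erroneous target
    # word and of the clean source word, padded to the same length.
    vocab = logits.size(-1)
    ce_target = F.cross_entropy(logits.reshape(-1, vocab), target_ids.reshape(-1))
    ce_input = F.cross_entropy(logits.reshape(-1, vocab), input_ids.reshape(-1))
    # Mimic the target while penalising predictions identical to the input:
    # the reciprocal term grows as the output collapses onto the source.
    return ce_target + 1.0 / (ce_input + eps)

The reciprocal term is large exactly when the model reproduces its unchanged input well, which pushes the generator away from simply copying the source.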
We also implement a learned positional encoding and use Adam (Kingma and Ba, 2014) as the optimizer with a static learning rate of 5 \u2022 10^-4, which gave better convergence than the default value of 0.001 in our experiments. Following prior work, cross entropy was again used as the loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "Our baseline NATAS only has fixed training samples extracted from the Word2Vec model. In this experiment, we design a dynamic data loader which generates new erroneous words for every mini-batch during training, allowing the model to learn from more variants at every iteration. As mentioned in the introduction, we train contextualized sequence-to-sequence character-based models. Instead of feeding a single error word to the model as the input, we combine it with the context words before and after it in the sequence. We only consider the correct form of that error word as the label, and do not predict the context words. The input consists of the error (target) word in the middle, with context words on both sides, making up a window with an odd number of words. Hence, a valid window sliding over the corpus must have an odd size, for instance 3, 5, etc. The way we construct the input and the gold label is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "\u2022 The window size of n words is selected. The middle word is considered the target word. \u2022 The words on the left and right of the target are context words. \u2022 The input sequence is converted into the proper format using dedicated special tokens; for example, with window n=5, the character sequence l e f t c o n t e x t f a r g e t r i g h t c o n t e x t is wrapped with the following tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "- one token indicates the start of a sequence; - one token is the separator for the context words;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "- one token separates the left and right context from the target; - one token indicates the end of a sequence; - one token indicates the padding if needed for mini-batch training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "Following the previous section, the \"target\" word is generated by creating artificial errors in two different ways: using the random generator or the trained generator. For instance, the word \"target\" in the example above is modified to \"farget\", and the model is trained to predict the output \"target\". The gold label is formatted in the same way, but without any context words; in this case, the label has the form t a r g e t . After having the pairs of input and label formatted properly, we feed them into the Transformer model with a batch size of 256, a balance between speed and accuracy in our case. In this experiment, we evaluate our model with 3 different window sizes: 1, 3, and 5, with the window size of 1 as a special case: there are no context words, and the input is f a r g e t . For every window size we train with the two different error generators (Random and Trained), and thus have six models in total.
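To make the input format concrete, the following minimal sketch shows one possible window construction; it is our own illustration, and the literal special-token names are placeholders rather than the tokens used in our implementation:

SOS, EOS, SEP, TGT = '<s>', '</s>', '<w>', '<t>'  # placeholder token names

def build_pair(tokens, i, corrupt, window=5):
    # tokens[i] is the target word; (window - 1) // 2 context words are
    # taken on each side, and only the target word is corrupted.
    k = (window - 1) // 2
    left, right = tokens[max(0, i - k):i], tokens[i + 1:i + 1 + k]
    src = [SOS]
    for w in left:
        src += list(w) + [SEP]
    src += [TGT] + list(corrupt(tokens[i])) + [TGT]
    for w in right:
        src += [SEP] + list(w)
    src += [EOS]
    tgt = [SOS] + list(tokens[i]) + [EOS]  # label: the correct word only
    return src, tgt

With window n=5, the source thus spells out two context words on each side, character by character, around the corrupted middle word, while the label is the character sequence of the correct middle word only.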
These models are named hereafter TFRandW1, TFRandW3, TFRandW5, TFTrainW1, TFTrainW3, and TFTrainW5, where TF stands for Transformer, Rand for the random generator, Train for the trained generator, and Wn for a window of n words. We proceeded with the training until the loss converged. All models converged after around 20 epochs. The losses for the Train models are \u223c0.064 and those for the Rand models are slightly lower, at \u223c0.059.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2.2" }, { "text": "We evaluate all proposed models and the NATAS baseline on the Ground Truth Finnish Fraktur dataset made available by the National Library of Finland (\"OCR Ground Truth Pages (Finnish Fraktur) [v1]\", 4.8 GB, available at https://digi.kansalliskirjasto.fi/opendata), a collection of 479 journal and newspaper pages from the period 1836-1918 (Kettunen et al., 2018). The data is constructed as a CSV table with 471,903 lines of words or characters and four columns: the ground truth (GT) aligned with the output coming from three different OCR methods, TESSERACT, OLD and FR11 (Kettunen et al., 2018).", "cite_spans": [ { "start": 232, "end": 254, "text": "(Kettunen et al., 2018)", "ref_id": "BIBREF11" }, { "start": 464, "end": 500, "text": "(Kettunen et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Despite the existence of character-level benchmarks for OCR post-correction (e.g. Drobac et al. (2017)), we elect to evaluate models in the more realistic setting of whole words. We would like to note that Finnish has very long words, and as a result this metric is actually tougher. As described in the previous section, our models are trained without non-alphabetic characters, so all tokens containing non-alphabetic characters are removed. We also removed the blank lines which have no result from the OCR. After cleaning the ground truth and the OCRed text, the numbers of tokens for the OCR methods (TESSERACT, OLD, FR11) are 458,799, 464,543 and 470,905, with accuracies of 88.29%, 75.34% and 79.79%, respectively. The OCRed words are used as input data for the evaluation of our post-correction systems. The translation process is applied to each OCR method separately, with the input tokens formatted according to each model's requirements. For NATAS, we used OpenNMT to translate with the default settings. For the Transformer models with context, we created a sliding window over the rows of the OCRed text. For the non-context model, we only need a single token as source input. These models translate with beam search (k = 3), and the highest-probability sequence is chosen as the output. The results are shown in Table 1. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "From the results in Table 1 , we can see that none of the models improve on the OCRed text. However, there is a clear advantage in using an artificial dataset and a Transformer model for training, which yields an accuracy 7 percentage points higher than NATAS. After analyzing the results, we found many interesting cases where the output words are counted as errors when compared directly to the ground truth, but are nevertheless correct.
The difference is that the ground truth has been corrected while maintaining the historical spelling, but as our models have been trained to correct words to a modern spelling, these forms appear incorrect when compared directly with the ground truth. However, our models did correct many of these words; they just happened to normalize the spelling to modern Finnish at the same time. For example, the word lukuwuoden (\"academic year\") is normalized to lukuvuoden, and the word kortt (\"card\") is normalized to kortti, which are the correct spellings in modern Finnish. So, the problem here is that many words have acquired a new spelling in modern Finnish but are counted as wrong results when compared to the ground truth, which obscures the real accuracy of our models. In 19th-century Finnish text, the most obvious difference from modern Finnish is the w/v variation: most of the words containing v are written with w in old text, whereas in modern Finnish w is not used in any regular word. Kettunen and P\u00e4\u00e4kk\u00f6nen (2016) showed in their experiments that tokens containing the letter w make up 3.3% of all tokens and that 97.5% of those tokens are unrecognized by FINTWOL, a morphological analyzer. They also tried replacing w with v, and the share of unrecognized tokens decreased to 30.6%. These numbers are significant, which gives us the idea of applying the same substitution to our results to obtain a fairer evaluation. Furthermore, there is another issue: our models sometimes make up new words that do not exist in the Finnish vocabulary. For example, the word samppaajaa is likely created from the word samppanjaa (\"of Champagne\"), which must be the correct one. To solve these issues, we suggest a fixing pipeline for our results (a code sketch is given below):", "cite_spans": [ { "start": 1483, "end": 1512, "text": "Kettunen and P\u00e4\u00e4kk\u00f6nen (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.1" }, { "text": "1. Check whether each output word exists in the Finnish vocabulary using Omorfi via UralicNLP; if not, keep the OCRed word as the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.1" }, { "text": "2. Find all words containing the letter v and replace it with the letter w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.1" }, { "text": "After processing with the strategy above, we get updated results, which can be found in Tables 2, 3 , and 4. Tables 2, 3 and 4 show a vast improvement for all models, with accuracy increased by 10-12 percentage points. For Tesseract, where the original OCR already has very high quality with an accuracy of about 88%, none of the models yields a gain. The best model in this case is TFTrainW5 with 84% accuracy. The reason for the models' worse performance is that they introduced more errors on words that the OCR already got right than they fixed actual error words. Although the ratio of fixed error words (18.02%) is much higher than the ratio of corrupted correct words (7.25%), correct words make up a much larger part of the corpus, so the overall accuracy decreases. In the OLD setting, with an accuracy of about 75%, 5 out of 7 models successfully improved the accuracy of the original text.
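The code sketch referenced above is one minimal reading of the two-step pipeline; it is our own illustration and assumes UralicNLP's analyze call for the vocabulary check:

from uralicNLP import uralicApi

def fix_output(model_word, ocr_word):
    # Step 1: if the model's output is not a recognised Finnish word,
    # fall back to the original OCRed word.
    if len(uralicApi.analyze(model_word, 'fin')) == 0:
        return ocr_word
    # Step 2: map the modern spelling back towards the historical one
    # by replacing v with w before comparing against the ground truth.
    return model_word.replace('v', 'w')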
The highest number comes from TFTrainW3, which outperforms OLD by 3.92 percentage points, followed closely by TFTrainW5 with an accuracy of 79.17%. In OLD, we see a higher share of error words corrected (36.03%) compared to Tesseract. The accuracy of the TFTrainW5 model on the already correct words is also slightly higher, at 93.5% versus 92.75% for Tesseract. The last OCR method for evaluation is FR11 (79%), where -just like in OLD -5 out of 7 models surpass the OCR result. Again, TFTrainW3 gives the highest number, with a 3.71 percentage point improvement on the OCRed text. While TFTrainW3 shows surprisingly good results in fixing the wrong words with 45.17% accuracy, TFTrainW5 performs slightly better at handling the right words. Common to all our proposed models, the window size of 1 somewhat unsurprisingly performs worse within both the Rand and Train variants.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 104, "text": "Tables 2, 3", "ref_id": "TABREF3" }, { "start": 114, "end": 131, "text": "Tables 2, 3 and 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.1" }, { "text": "In this paper, we have shown that creating and using an artificial error dataset clearly outperforms the NATAS baseline (H\u00e4m\u00e4l\u00e4inen and Hengchen, 2019) , with a clear advantage for the Train over the Rand configuration. Another clear conclusion is that a larger context window increases the accuracy of the models. Comparing the new results for all three OCR methods, we see the models are most effective with FR11, where the ratio of fixed wrong words (45.17%) is high enough to outweigh the issue of breaking right words (6.7%). Our methods also work very well on OLD, with the ability to fix 36.03% of wrong words and handle more than 93% of right words correctly. However, our models are not compelling enough to beat the accuracy achieved by Tesseract; we see closing this gap as further work.", "cite_spans": [ { "start": 120, "end": 151, "text": "(H\u00e4m\u00e4l\u00e4inen and Hengchen, 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "In spite of the effectiveness of the post-correction strategy, it does not guarantee that all the words with w/v replaced are correct, nor that UralicNLP manages to recognize all existing Finnish words. For example, the wrong OCR word mcntoistamuotiscn was fixed to metoistavuotisen, which is the correct form according to the gold standard, but UralicNLP filtered it out because it does not consider it a valid Finnish word. This judgment is in fact correct, as the first syllable kol was dropped due to a line break in the data; without the line break, the word would be kolmetoistavuotisen (\"13 years old\"). This means that in the future, we need to develop better strategies, more suitable to OCR contexts, for telling correct and incorrect words apart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "This implies that in reality the number of corrected cases could be higher if we did not revert the already normalized w/v words. In addition, if there were a better method to ensure that a word is valid Finnish, the results could be improved. Thus, our evaluation provides an overall view of how the Transformer and trained error generator models with context words can improve OCR post-correction notably.
Our methods also show that using an artificial dataset built from a modern corpus is very beneficial for normalizing historical text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "Importantly, we would like to underline that our method does not rely on huge amounts of hand-annotated gold data, but can rather be applied as long as one has access to OCRed text, a vocabulary list, a morphological FST and error-free data. There are several endangered languages related to Finnish that already have these aforementioned resources in place. In the future, we are interested in trying our method out in those contexts as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Exploring the quality of the digital historical newspaper archive KubHist", "authors": [ { "first": "Yvonne", "middle": [], "last": "Adesam", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Dann\u00e9lls", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yvonne Adesam, Dana Dann\u00e9lls, and Nina Tahmasebi. 2019. Exploring the quality of the digital historical newspaper archive KubHist. Proceedings of DHN.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "authors": [ { "first": "Shaojie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "J", "middle": [ "Zico" ], "last": "Kolter", "suffix": "" }, { "first": "Vladlen", "middle": [], "last": "Koltun", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Machine learning and libraries: a report on the state of the field", "authors": [ { "first": "Ryan", "middle": [], "last": "Cordell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cordell. 2020. Machine learning and libraries: a report on the state of the field.
Technical report, Library of Congress.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multi-input attention for unsupervised OCR correction", "authors": [ { "first": "Rui", "middle": [], "last": "Dong", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Dong and David Smith. 2018. Multi-input attention for unsupervised OCR correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "OCR and post-correction of historical Finnish texts", "authors": [ { "first": "Senka", "middle": [], "last": "Drobac", "suffix": "" }, { "first": "Pekka", "middle": [ "Sakari" ], "last": "Kauppinen", "suffix": "" }, { "first": "Bo Krister Johan", "middle": [], "last": "Linden", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Senka Drobac, Pekka Sakari Kauppinen, and Bo Krister Johan Linden. 2017. OCR and post-correction of historical Finnish texts. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017, Gothenburg, Sweden. Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Attributing authorship in the noisy digitized correspondence of Jacob and Wilhelm Grimm", "authors": [ { "first": "Greta", "middle": [], "last": "Franzini", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Kestemont", "suffix": "" }, { "first": "Gabriela", "middle": [], "last": "Rotari", "suffix": "" }, { "first": "Melina", "middle": [], "last": "Jander", "suffix": "" }, { "first": "Jeremi", "middle": [ "K" ], "last": "Ochab", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Franzini", "suffix": "" }, { "first": "Joanna", "middle": [], "last": "Byszuk", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Rybicki", "suffix": "" } ], "year": 2018, "venue": "Frontiers in Digital Humanities", "volume": "5", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greta Franzini, Mike Kestemont, Gabriela Rotari, Melina Jander, Jeremi K Ochab, Emily Franzini, Joanna Byszuk, and Jan Rybicki. 2018. Attributing authorship in the noisy digitized correspondence of Jacob and Wilhelm Grimm.
Frontiers in Digital Humanities, 5:4.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "From the paft to the fiiture: a fully automatic NMT and word embeddings method for OCR post-correction", "authors": [ { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Hengchen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "431--436", "other_ids": { "DOI": [ "10.26615/978-954-452-056-4_051" ] }, "num": null, "urls": [], "raw_text": "Mika H\u00e4m\u00e4l\u00e4inen and Simon Hengchen. 2019. From the paft to the fiiture: a fully automatic NMT and word embeddings method for OCR post-correction. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 431-436.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Advances in synchronized XML-MediaWiki dictionary development in the context of endangered Uralic languages", "authors": [ { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mika H\u00e4m\u00e4l\u00e4inen and Jack Rueter. 2018. Advances in synchronized XML-MediaWiki dictionary development in the context of endangered Uralic languages. In Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "UralicNLP: An NLP library for Uralic languages", "authors": [ { "first": "Mika", "middle": [], "last": "H\u00e4m\u00e4l\u00e4inen", "suffix": "" } ], "year": 2019, "venue": "Journal of Open Source Software", "volume": "4", "issue": "37", "pages": "", "other_ids": { "DOI": [ "10.21105/joss.01345" ] }, "num": null, "urls": [], "raw_text": "Mika H\u00e4m\u00e4l\u00e4inen. 2019. UralicNLP: An NLP library for Uralic languages. Journal of Open Source Software, 4(37):1345.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Quantifying the impact of dirty OCR on historical text analysis: Eighteenth century collections online as a case study", "authors": [ { "first": "Mark", "middle": [ "J" ], "last": "Hill", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Hengchen", "suffix": "" } ], "year": 2019, "venue": "Digital Scholarship in the Humanities", "volume": "34", "issue": "4", "pages": "825--843", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark J Hill and Simon Hengchen. 2019. Quantifying the impact of dirty OCR on historical text analysis: Eighteenth century collections online as a case study.
Digital Scholarship in the Humanities, 34(4):825-843.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Creating and using ground truth OCR sample data for Finnish historical newspapers and journals", "authors": [ { "first": "Kimmo", "middle": [ "Tapio" ], "last": "Kettunen", "suffix": "" }, { "first": "Jukka", "middle": [], "last": "Kervinen", "suffix": "" }, { "first": "Jani", "middle": [ "Mika", "Olavi" ], "last": "Koistinen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Digital Humanities in the Nordic Countries 3rd Conference, CEUR Workshop Proceedings", "volume": "", "issue": "", "pages": "162--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimmo Tapio Kettunen, Jukka Kervinen, and Jani Mika Olavi Koistinen. 2018. Creating and using ground truth OCR sample data for Finnish historical newspapers and journals. In Proceedings of the Digital Humanities in the Nordic Countries 3rd Conference, CEUR Workshop Proceedings, pages 162-169.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Measuring lexical quality of a historical Finnish newspaper collection -analysis of garbled OCR data with basic language technology tools and means", "authors": [ { "first": "Kimmo", "middle": [ "Tapio" ], "last": "Kettunen", "suffix": "" }, { "first": "Tuula", "middle": [ "Anneli" ], "last": "P\u00e4\u00e4kk\u00f6nen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimmo Tapio Kettunen and Tuula Anneli P\u00e4\u00e4kk\u00f6nen. 2016. Measuring lexical quality of a historical Finnish newspaper collection -analysis of garbled OCR data with basic language technology tools and means. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-4012" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. In Proc.
ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "\u0414\u0432\u043e\u0438\u0447\u043d\u044b\u0435 \u043a\u043e\u0434\u044b \u0441 \u0438\u0441\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u0435\u043c \u0432\u044b\u043f\u0430\u0434\u0435\u043d\u0438\u0439, \u0432\u0441\u0442\u0430\u0432\u043e\u043a \u0438 \u0437\u0430\u043c\u0435\u0449\u0435\u043d\u0438\u0439 \u0441\u0438\u043c\u0432\u043e\u043b\u043e\u0432", "authors": [ { "first": "Vladimir", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1965, "venue": "\u0414\u043e\u043a\u043b\u0430\u0434\u044b \u0410\u043a\u0430\u0434\u0435\u043c\u0438\u0439 \u041d\u0430\u0443\u043a \u0421\u0421\u0421\u0420", "volume": "63", "issue": "4", "pages": "845--848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir I. Levenshtein. 1965. \u0414\u0432\u043e\u0438\u0447\u043d\u044b\u0435 \u043a\u043e\u0434\u044b \u0441 \u0438\u0441- \u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u0435\u043c \u0432\u044b\u043f\u0430\u0434\u0435\u043d\u0438\u0439, \u0432\u0441\u0442\u0430\u0432\u043e\u043a \u0438 \u0437\u0430\u043c\u0435\u0449\u0435\u043d\u0438\u0439 \u0441\u0438\u043c\u0432\u043e\u043b\u043e\u0432. \u0414\u043e\u043a\u043b\u0430\u0434\u044b \u0410\u043a\u0430\u0434\u0435\u043c\u0438\u0439 \u041d\u0430\u0443\u043a \u0421\u0421\u0421\u0420, 63(4):845-848.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multilingual dependency parsing for lowresource languages: Case studies on North Saami and Komi-Zyrian", "authors": [ { "first": "Kyungtae", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Poibeau", "suffix": "" } ], "year": 2018, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "KyungTae Lim, Niko Partanen, and Thierry Poibeau. 2018. Multilingual dependency parsing for low- resource languages: Case studies on North Saami and Komi-Zyrian. In Proceedings of LREC 2018.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.04025" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Evaluating the impact of OCR errors on topic modeling", "authors": [ { "first": "Stephen", "middle": [], "last": "Mutuvi", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Doucet", "suffix": "" }, { "first": "Moses", "middle": [], "last": "Odeo", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Jatowt", "suffix": "" } ], "year": 2018, "venue": "International Conference on Asian Digital Libraries", "volume": "", "issue": "", "pages": "3--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mutuvi, Antoine Doucet, Moses Odeo, and Adam Jatowt. 2018. Evaluating the impact of OCR errors on topic modeling. In International Conference on Asian Digital Libraries, pages 3-14. Springer.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural machine translation with BERT for post-OCR error detection and correction", "authors": [], "year": 2020, "venue": "Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020", "volume": "", "issue": "", "pages": "333--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thi Tuyet Hai Nguyen, Adam Jatowt, Nhu-Van Nguyen, Mickael Coustaty, and Antoine Doucet. 2020. Neural machine translation with BERT for post-OCR error detection and correction. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, pages 333-336.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Universal dependencies v1: A multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection.
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Challenges in OCR today: Report on experiences from INEL", "authors": [ { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" } ], "year": 2017, "venue": "\u042d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u0430\u044f \u041f\u0438\u0441\u044c\u043c\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u041d\u0430\u0440\u043e\u0434\u043e\u0432 \u0420\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u0439 \u0424\u0435\u0434\u0435\u0440\u0430\u0446\u0438\u0438: \u041e\u043f\u044b\u0442, \u041f\u0440\u043e\u0431\u043b\u0435\u043c\u044b \u0418 \u041f\u0435\u0440\u0441\u043f\u0435\u043a\u0442\u0438\u0432\u044b", "volume": "", "issue": "", "pages": "263--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Partanen. 2017. Challenges in OCR today: Report on experiences from INEL. In \u042d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u0430\u044f \u041f\u0438\u0441\u044c\u043c\u0435\u043d\u043d\u043e\u0441\u0442\u044c \u041d\u0430\u0440\u043e\u0434\u043e\u0432 \u0420\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u0439 \u0424\u0435\u0434\u0435\u0440\u0430\u0446\u0438\u0438: \u041e\u043f\u044b\u0442, \u041f\u0440\u043e\u0431\u043b\u0435\u043c\u044b \u0418 \u041f\u0435\u0440\u0441\u043f\u0435\u043a\u0442\u0438\u0432\u044b [Electronic Writing of the Peoples of the Russian Federation: Experience, Problems and Prospects], pages 263-273.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An OCR system for the unified northern alphabet", "authors": [ { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rie\u00dfler", "suffix": "" } ], "year": 2019, "venue": "The fifth International Workshop on Computational Linguistics for Uralic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niko Partanen and Michael Rie\u00dfler. 2019. An OCR system for the unified northern alphabet. In The fifth International Workshop on Computational Linguistics for Uralic Languages.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An SMT approach to automatic annotation of historical text", "authors": [ { "first": "Eva", "middle": [], "last": "Pettersson", "suffix": "" }, { "first": "Be\u00e1ta", "middle": [], "last": "Megyesi", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the workshop on computational historical linguistics at NODALIDA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eva Pettersson, Be\u00e1ta Megyesi, and J\u00f6rg Tiedemann. 2013. An SMT approach to automatic annotation of historical text. In Proceedings of the workshop on computational historical linguistics at NODALIDA 2013.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Natural language processing for historical texts", "authors": [ { "first": "Michael", "middle": [], "last": "Piotrowski", "suffix": "" } ], "year": 2012, "venue": "", "volume": "5", "issue": "", "pages": "1--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Piotrowski. 2012. Natural language processing for historical texts.
Synthesis lectures on human language technologies, 5(2):1-157.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Development and use of computational morphology of Finnish in the open source and open science era: Notes on experiences with Omorfi development", "authors": [ { "first": "Tommi", "middle": [ "A" ], "last": "Pirinen", "suffix": "" } ], "year": 2015, "venue": "SKY Journal of Linguistics", "volume": "28", "issue": "", "pages": "381--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tommi A Pirinen. 2015. Development and use of computational morphology of Finnish in the open source and open science era: Notes on experiences with Omorfi development. SKY Journal of Linguistics, 28:381-393.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Radim", "middle": [], "last": "\u0158eh\u016f\u0159ek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Comparison of named entity recognition tools for raw OCR text", "authors": [ { "first": "Kepa Joseba", "middle": [], "last": "Rodriquez", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Blanke", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Luszczynska", "suffix": "" } ], "year": 2012, "venue": "KONVENS", "volume": "", "issue": "", "pages": "410--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kepa Joseba Rodriquez, Mike Bryant, Tobias Blanke, and Magdalena Luszczynska. 2012. Comparison of named entity recognition tools for raw OCR text. In KONVENS, pages 410-414.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "On the questions in developing computational infrastructure for Komi-Permyak", "authors": [ { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" }, { "first": "Niko", "middle": [], "last": "Partanen", "suffix": "" }, { "first": "Larisa", "middle": [], "last": "Ponomareva", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "15--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Rueter, Niko Partanen, and Larisa Ponomareva. 2020. On the questions in developing computational infrastructure for Komi-Permyak. In Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages, pages 15-25.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Towards an Open-Source Universal-Dependency Treebank for Erzya", "authors": [ { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Fourth International Workshop on Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Rueter and Francis Tyers. 2018.
Towards an Open-Source Universal-Dependency Treebank for Erzya. In Proceedings of the Fourth International Workshop on Computational Linguistics of Uralic Languages.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Annotation schemes in North S\u00e1mi dependency parsing", "authors": [ { "first": "Mariya", "middle": [], "last": "Sheyanova", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 3rd International Workshop for Computational Linguistics of Uralic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariya Sheyanova and Francis M. Tyers. 2017. Annotation schemes in North S\u00e1mi dependency parsing. In Proceedings of the 3rd International Workshop for Computational Linguistics of Uralic Languages.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Can morphological analyzers improve the quality of optical character recognition", "authors": [ { "first": "Miikka", "middle": [], "last": "Silfverberg", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Rueter", "suffix": "" } ], "year": 2015, "venue": "Septentrio Conference Series", "volume": "2", "issue": "", "pages": "45--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miikka Silfverberg and Jack Rueter. 2015. Can morphological analyzers improve the quality of optical character recognition? In Septentrio Conference Series, 2, pages 45-56.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A research agenda for historical and multilingual optical character recognition", "authors": [ { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cordell", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A. Smith and Ryan Cordell. 2019. A research agenda for historical and multilingual optical character recognition. Technical report, Northeastern University.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Assessing the impact of OCR quality on downstream NLP tasks", "authors": [ { "first": "Daniel", "middle": [], "last": "Van Strien", "suffix": "" }, { "first": "Kaspar", "middle": [], "last": "Beelen", "suffix": "" }, { "first": "Mariona", "middle": [], "last": "Coll Ardanuy", "suffix": "" }, { "first": "Kasra", "middle": [], "last": "Hosseini", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "McGillivray", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Colavizza", "suffix": "" } ], "year": 2020, "venue": "ICAART (1)", "volume": "", "issue": "", "pages": "484--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel van Strien, Kaspar Beelen, Mariona Coll Ardanuy, Kasra Hosseini, Barbara McGillivray, and Giovanni Colavizza. 2020. Assessing the impact of OCR quality on downstream NLP tasks. In ICAART (1), pages 484-496.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Measuring mass text digitization quality and usefulness",
"authors": [ { "first": "Simon", "middle": [], "last": "Tanner", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Mu\u00f1oz", "suffix": "" }, { "first": "Pich", "middle": [ "Hemy" ], "last": "Ros", "suffix": "" } ], "year": 2009, "venue": "D-lib Magazine", "volume": "15", "issue": "7/8", "pages": "1082--9873", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Tanner, Trevor Mu\u00f1oz, and Pich Hemy Ros. 2009. Measuring mass text digitization quality and usefulness. D-lib Magazine, 15(7/8):1082-9873.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Algorithm 1 Random errors generator 1: procedure RANDOMERROR(Word, NoiseRate)", "type_str": "table", "html": null, "num": null, "content": "
2: Alphas = "abcdefghijklmnopqrstuvwxyz\u00e4\u00e5\u00f6"
3: for Action in [delete, add, replace] do
4: Generate a random number Rand between 0 and 1
5: if Rand < NoiseRate \u00d7 WordLength then
6: Select a random character position P in Word
7: if the character at P is in Alphas then
8: Apply Action at position P using Alphas
9: end if
10: end if
11: end for
12: end procedure
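For readability, a minimal Python sketch of Algorithm 1 follows. It is our interpretation of the listing above, not the authors' released code: we assume each of the three actions is tried once per word, that the test on line 5 gates each action independently, and that add and replace draw a uniformly random character from Alphas.

```python
import random

# Alphabet from line 2 of Algorithm 1, including Finnish ä, å, ö.
ALPHAS = "abcdefghijklmnopqrstuvwxyz\u00e4\u00e5\u00f6"

def random_error(word: str, noise_rate: float) -> str:
    """Inject random OCR-like noise into a word (sketch of Algorithm 1)."""
    for action in ("delete", "add", "replace"):  # line 3
        if not word:  # guard: an earlier deletion may have emptied the word
            break
        if random.random() < noise_rate * len(word):  # lines 4-5
            p = random.randrange(len(word))  # line 6: random position
            if word[p] in ALPHAS:  # line 7: only corrupt alphabetic characters
                if action == "delete":
                    word = word[:p] + word[p + 1:]
                elif action == "add":
                    word = word[:p] + random.choice(ALPHAS) + word[p:]
                else:  # replace
                    word = word[:p] + random.choice(ALPHAS) + word[p + 1:]
    return word
```

For example, random_error("paperi", 0.05) may return the word unchanged or a corrupted form such as "papri" or "paqeri"; pairing the corrupted form with the original yields the kind of noisy/clean parallel example the seq2seq models are trained on.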
" }, "TABREF3": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
: Model accuracy after post-processing for
Tesseract (88.29%)
Models | Post-processed accuracy | Error words accuracy | Correct words accuracy
NATAS | 71.19 | 30.66 | 84.45
TFRandW1 | 75.10 | 28.14 | 90.47
TFRandW3 | 75.40 | 28.26 | 90.83
TFRandW5 | 76.26 | 28.63 | 91.85
TFTrainW1 | 78.19 | 35.07 | 92.30
TFTrainW3 | 79.26 | 36.03 | 93.41
TFTrainW5 | 79.17 | 35.41 | 93.50
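To make the three columns concrete, one plausible reading (our assumption; the paper's exact evaluation script may differ) is: post-processed accuracy is measured over all words, error words accuracy over only the words the OCR got wrong, and correct words accuracy over only the words the OCR already had right. A sketch under that assumption, with hypothetical token lists that are already aligned one-to-one:

```python
def word_accuracies(ocr_words, corrected_words, gold_words):
    """Word-level accuracies under the column reading described above."""
    total = err = ok = 0
    post_hit = err_hit = ok_hit = 0
    for o, c, g in zip(ocr_words, corrected_words, gold_words):
        total += 1
        post_hit += (c == g)     # post-processed accuracy: every word counts
        if o != g:               # the OCR word was erroneous
            err += 1
            err_hit += (c == g)  # did the model fix it?
        else:                    # the OCR word was already correct
            ok += 1
            ok_hit += (c == g)   # did the model leave it intact?
    return (100.0 * post_hit / total,
            100.0 * err_hit / err if err else 0.0,
            100.0 * ok_hit / ok if ok else 0.0)
```

If TFRand and TFTrain do denote randomly generated versus learned error models, the error-words column suggests that errors extracted from the data make more realistic training noise than purely random edits.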
" }, "TABREF4": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
: Model accuracy after post-processing for
OLD (75.34%)
Models | Post-processed accuracy | Error words accuracy | Correct words accuracy
NATAS | 75.06 | 36.52 | 84.81
TFRandW1 | 79.66 | 36.04 | 90.71
TFRandW3 | 80.06 | 37.00 | 90.96
TFRandW5 | 81.09 | 38.04 | 91.99
TFTrainW1 | 82.39 | 43.39 | 92.26
TFTrainW3 | 83.50 | 45.17 | 93.21
TFTrainW5 | 83.34 | 44.01 | 93.30
" }, "TABREF5": { "text": "", "type_str": "table", "html": null, "num": null, "content": "
: Model accuracy after post-processing for
FR11 (79.79%)
" } } } }