{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:33:36.524535Z"
},
"title": "Towards a Speech Recognizer for Komi, an Endangered and Low-Resource Uralic Language",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Hjortnaes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {
"settlement": "Bloomington",
"region": "IN"
}
},
"email": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Helsinki Helsinki",
"location": {
"country": "Finland"
}
},
"email": "niko.partanen@helsinki.fi"
},
{
"first": "Michael",
"middle": [],
"last": "Rie\u00dfler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Eastern Finland Joensuu",
"location": {
"country": "Finland"
}
},
"email": "michael.riessler@uef.fi"
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {
"settlement": "Bloomington",
"region": "IN"
}
},
"email": "ftyers@iu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present and evaluate a first pass speech recognition model for Komi, an endangered and low-resource Uralic language spoken in Russia. We compare a transfer learning approach from English with a baseline model trained from scratch using DeepSpeech (an end-to-end ASR model) and evaluate the impact of fine tuning a language model for correcting the output of the network. We also provides an overview of previous research and perform an error analysis with a focus on the language model and the challenges introduced by a fieldwork based corpus. Though we only achieve a 70.9% Character Error Rate, there is a great deal to be learned from the circumstances presented by our data's structure and origins.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present and evaluate a first pass speech recognition model for Komi, an endangered and low-resource Uralic language spoken in Russia. We compare a transfer learning approach from English with a baseline model trained from scratch using DeepSpeech (an end-to-end ASR model) and evaluate the impact of fine tuning a language model for correcting the output of the network. We also provides an overview of previous research and perform an error analysis with a focus on the language model and the challenges introduced by a fieldwork based corpus. Though we only achieve a 70.9% Character Error Rate, there is a great deal to be learned from the circumstances presented by our data's structure and origins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the creation of any corpus of spoken text, the transcription work can be identified as the major bottleneck that limits how much recorded speech data can be annotated and included in the corpus. The situation is particularly dire with endangered languages for which language technology does not exist (Foley et al., 2018, 206) . But typically, even corpus building projects working with spoken data from majority languages manage to transcribe and analyze only a fraction of the materials for which they have recorded audio data. The need for speech-to-text tools is not restricted to fieldwork-based language documentation producing new speech recordings, but rather a continuum of projects and languages with various levels of resources. There is also an immense build-up of nontranscribed legacy audio recordings of endangered languages stored at various private or institutional archives, in which case even a small and endangered language may have a significant amount of currently unused materials. At the same time, speech recognition technologies have been fully functional for a variety of languages for some time already. Although the use of such tools would potentially offer large improvements for language documentation and corpus building, it is still unclear how to integrate this technology into work with endangered languages in the most successful manner.",
"cite_spans": [
{
"start": 304,
"end": 329,
"text": "(Foley et al., 2018, 206)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Spoken corpora of endangered languages for the study of endangered languages are often relatively small, especially when compared to the resources available for larger languages. This is not necessarily due to lack of relevant audio recordings. There are no statistics about the typical sizes of endangered language corpora, but it can be assumed that transcribed portions are somewhere from a few hours to tens of hours, with magnitudes of hundreds of hours becoming rare. This is much lower than the threshold usually estimated that is needed for major speech recognition systems. From this point of view, the initial goal of using speech recognition in this context could be attempting to improve the transcription speed. This would result in larger transcribed corpora which could continuously improve the speech recognition system. The accuracy needed to reach that point would be such that it is faster to correct than do transcription manually, as before then speech recognition doesn't help the tran-scription task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been several earlier attempts to build pipelines that integrate speech recognition into language documentation context, most importantly Elpis (Foley et al., 2019) and Persephone . These systems are still maturing, with desire to make them more easily available for an ordinary linguist with no technical background in speech recognition. There are only individual reports of project having yet adapted these tools, with exceptions such as work described in on the Na language, where an error rate of 17% was reported. Also report that it seems possible to achieve phoneme error rates below 30% with only half an hour of recordings. Both of these experiments were done in a single speaker setting.",
"cite_spans": [
{
"start": 154,
"end": 174,
"text": "(Foley et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Instead of using tools specifically designed in a language documentation context, in this paper we train and evaluate a speech recognition system for Zyrian Komi using DeepSpeech (Hannun et al., 2014) .",
"cite_spans": [
{
"start": 179,
"end": 200,
"text": "(Hannun et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "DeepSpeech has been used previously with a variety of languages. It is most commonly used with large languages when the resources available vastly outnumber what we have. We found several other cases where DeepSpeech was used, for example, with Russian (Iakushkin et al., 2018), Romanian (Panaite et al., 2019) , Tujian (Yu et al., 2019) and Bangla (Saurav et al., 2018) . All of these experiments report higher scores than we do, with the exception of Russian, with smaller data, but there are important differences as well. Romanian recordings were done in studio environment, Tujian sentences were specifically translated to Chinese to take advantage of the Chinese model, and Bangla experiment had a limited vocabulary. The Russian corpus has well over 1000 hours, which brings it, in a way, out of the low-resource scenario where the other mentioned works took place.",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Panaite et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 320,
"end": 337,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 342,
"end": 370,
"text": "Bangla (Saurav et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "One experiment with DeepSpeech that seems particularly relevant to us is the work done recently on Seneca (Jimerson et al., 2018) because the word error rate was very high and difficult to reduce.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Jimerson et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The overview of related work leads us to the conclusion that speech recognition has reached significant results in conditions where very large transcribed datasets are available, or there are other constraints present, such as a small number of speakers and/or studio recording quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Komi is a Uralic language spoken primarily in the North-Eastern corner of European Russia, bordering the Ural mountains in the East. There are, however, numerous settlements where Komi is spoken outside the main speaking areas, and these communities span from the Kola Peninsula to Western Siberia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Komi language",
"sec_num": "3"
},
{
"text": "Zyrian Komi is closely related to Permian and Jazva Komi. All Komi varieties are mutually intelligible and form a complex dialect continuum. Komi is more distantly related to Udmurt, which is spoken south from main Komi areas. Together Komi and Udmurt form the Permic branch of Uralic languages. Other languages in this family are significantly more distantly related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Komi language",
"sec_num": "3"
},
{
"text": "The Komi language currently has approximately 160,000 speakers, and it is spoken in a large number of individual settlements in Northern Russia. The language is taught, although to a limited degree, in schools as a subject in some municipalities. There are several weekly publications and the written language is stable and generally well known. There is also continuous online presence. The largest Komi corpus contains over 50 million words (Fu-Lab, 2019). For a more thorough description see, i.e. Hausenberg; \u0426\u044b\u043f\u0430\u043d\u043e\u0432 (2009) .",
"cite_spans": [
{
"start": 513,
"end": 527,
"text": "\u0426\u044b\u043f\u0430\u043d\u043e\u0432 (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Komi language",
"sec_num": "3"
},
{
"text": "Komi is spoken in intensive contact with Russian, a dominant Slavic language of the region. A large portion of the Komi lexicon is borrowed from Russian, and virtually all speakers are currently bilingual. Bilingual phenomena present in contemporary Komi have been studied in detail (Leinonen, 2002 (Leinonen, , 2006 , and with particularly importance for our study, the northern dialect that is predominantly present in our corpus is known for its extensive Russian contact (Leinonen, 2009) .",
"cite_spans": [
{
"start": 283,
"end": 298,
"text": "(Leinonen, 2002",
"ref_id": "BIBREF12"
},
{
"start": 299,
"end": 316,
"text": "(Leinonen, , 2006",
"ref_id": "BIBREF13"
},
{
"start": 475,
"end": 491,
"text": "(Leinonen, 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Komi language",
"sec_num": "3"
},
{
"text": "Komi is written with Cyrillic orthography. The script is essentially phonemic, although different character combinations are used to represent similar sounds in different contexts, as is typical for Cyrillic scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Komi language",
"sec_num": "3"
},
{
"text": "The majority of Komi resources used in this study originate from the Kone Foundation funded I\u017ava Komi Documentation Project, the results of which are currently available in the Language Bank of Finland (Blokland et al., 2019) . However, there are numerous Komi materials that are in various stages of being turned into corpora, and these include recordings stored in the Institute for the Languages of Finland. Eventually all these materials should be combined into the Spoken Komi Corpus, and developing speech recognition technologies that can operate on various recording types is an important part in advancing the work on these resources.",
"cite_spans": [
{
"start": 202,
"end": 225,
"text": "(Blokland et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Komi Corpus",
"sec_num": "4.1"
},
{
"text": "The corpus is relatively large, containing around 35 hours of transcribed utterances. The number of total recorded hours is much higher, as this count includes only the transcribed segments without silences. Also the number of individual speakers is very high, at over 200. This has been made possible by systematic inclusion of archival data, as the goal has been to build a corpus that is representative from different periods from which we have recordings, and also so that different geographical areas would be evenly covered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Komi Corpus",
"sec_num": "4.1"
},
{
"text": "Specific features of the corpus are that the majority of content consists of conversations between two or more native speakers. These conversations have been arranged in an interview-like setting, so one of the participant is leading the conversation with questions on various topics. The transcriptions are done by native Komi speakers, and have been systematically revised by one additional native speaking project participant besides the person who did the transcription. The recordings are very accurate in that small primary interjections such as 'mm' and 'aha' are transcribed. There is also a large amount of overlapping speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Komi Corpus",
"sec_num": "4.1"
},
{
"text": "The transcriptions are in a Cyrillic writing system that follows the rules of Komi orthography. A similar system has been used in a recent Komi dialect dictionary (\u0411\u0435\u0437\u043d\u043e\u0441\u0438\u043a\u043e\u0432\u0430 et al., 2012). This convention was selected for various reasons, both practical and methodological. Having the results of language documentation work in written standard, when it exists, makes the work accessible for the community and allows better integration of language technology (Gerstenberger et al., 2017a,b) . This is also obvious with the current study, as the speech recognition system that operates with the orthography is arguably more useful for the community than one which outputs a transcription system that only specialists in the field can easily understand. That being said, the use of orthography also makes some tasks such as speech recog- nition harder, as the phoneme-to-grapheme correspondence is less transparent. The texts in the corpus have been manually segmented into utterances and transcribed in ELAN. These segments have been transformed into pairs of audio and plain text files. For loading into Deep-Speech, the audio samples have been normalized for length such that clips over 10 seconds, Deep-Speech's default cutoff, are excluded.",
"cite_spans": [
{
"start": 460,
"end": 491,
"text": "(Gerstenberger et al., 2017a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Komi Corpus",
"sec_num": "4.1"
},
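{
"text": "As a minimal sketch of this length cutoff (assuming the utterances have already been exported as WAV files; the helper names here are hypothetical), clips longer than DeepSpeech's 10-second default can be filtered out as follows:\n\nimport wave\n\nMAX_SECONDS = 10.0  # DeepSpeech's default cutoff\n\ndef clip_duration(path):\n    # Duration of a WAV clip in seconds.\n    with wave.open(path, 'rb') as w:\n        return w.getnframes() / float(w.getframerate())\n\ndef filter_clips(pairs):\n    # pairs: list of (wav_path, transcript); keep only short-enough clips.\n    return [(wav, text) for wav, text in pairs\n            if clip_duration(wav) <= MAX_SECONDS]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Spoken Komi Corpus",
"sec_num": "4.1"
},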
{
"text": "DeepSpeech (Hannun et al., 2014 ) is a relatively simple Recurrent Neural Network designed specifically for the task of Speech Recognition. It has since been updated and made available\u00b9. The biggest change between the current 0.5.1 release of DeepSpeech and the original is the switch to an LSTM instead of an RNN. In addition, some hyperparameters have been updated. Unless otherwise noted, we use the default parameters in the 0.5.1 release. Figure 1 outlines the structure of the DeepSpeech Neural Network. The feature extraction is a mapping of characters to the nominal values 1-N where N is the length of the set of characters appearing in the data. This is followed by three fully connected ReLU layers, the LSTM layer, and a final ReLU layer. All layers have a width of 2048. The sixth layer is a softmax layer with a width determined by the length of the alphabet.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Hannun et al., 2014",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 444,
"end": 452,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "DeepSpeech",
"sec_num": "4.2"
},
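{
"text": "For illustration, the topology just described can be re-created schematically; the following PyTorch sketch mirrors the layer structure (three fully connected ReLU layers, an LSTM, a further ReLU layer, and a softmax output), but it is not Mozilla's actual TensorFlow implementation, and the feature dimension is left as a parameter:\n\nimport torch.nn as nn\n\nclass DeepSpeechLike(nn.Module):\n    # Schematic re-creation of the DeepSpeech 0.5.1 topology described above.\n    def __init__(self, n_features, alphabet_size, width=2048):\n        super().__init__()\n        self.dense = nn.Sequential(\n            nn.Linear(n_features, width), nn.ReLU(),\n            nn.Linear(width, width), nn.ReLU(),\n            nn.Linear(width, width), nn.ReLU())\n        self.lstm = nn.LSTM(width, width, batch_first=True)\n        self.post = nn.Sequential(nn.Linear(width, width), nn.ReLU())\n        self.out = nn.Linear(width, alphabet_size + 1)  # +1 for the CTC blank\n\n    def forward(self, x):  # x: (batch, time, n_features)\n        h = self.dense(x)\n        h, _ = self.lstm(h)\n        return self.out(self.post(h)).log_softmax(-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepSpeech",
"sec_num": "4.2"
},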
{
"text": "The final step of DeepSpeech is correction using a language model (lm), which allows us to calculate the probability of a given character sequence. It is integrated into DeepSpeech by balancing the probability of the neural network's output with the probability of a character sequence in the lm (Hannun et al., 2014) . The hyper-parameter alpha controls the degree to which the language model edits the neural network's output and the hyper-parameter beta controls the cost of inserting word breaks. A \u00b9https://github.com/mozilla/DeepSpeech/releases/tag/v0.5.1 Figure 1 : The architecture of Mozilla's DeepSpeech (Meyer, 2019) higher alpha favors language model editing and a higher beta favors inserting word breaks.",
"cite_spans": [
{
"start": 296,
"end": 317,
"text": "(Hannun et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 614,
"end": 627,
"text": "(Meyer, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 562,
"end": 570,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "DeepSpeech",
"sec_num": "4.2"
},
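{
"text": "The combined decoding score can be written compactly; the sketch below mirrors the formulation of Hannun et al. (2014), with hypothetical argument names. Note that setting both alpha and beta to 0 reduces the score to the raw network output, which is how we disable the language model in the experiments below:\n\ndef combined_score(acoustic_logprob, lm_logprob, word_count, alpha, beta):\n    # Score(y) = log P_network(y|x) + alpha * log P_lm(y) + beta * word_count(y).\n    # A higher alpha trusts the language model more; a higher beta lowers the\n    # effective cost of inserting word breaks.\n    return acoustic_logprob + alpha * lm_logprob + beta * word_count",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DeepSpeech",
"sec_num": "4.2"
},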
{
"text": "To pre-process the data, we shuffled it and split it into an 8-1-1 ration of training, testing, and development. We then created an alphabet of characters and symbols which appear in the text, the length of which determines the width of the output layer of the DeepSpeech neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
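{
"text": "A minimal sketch of this preprocessing step (the sample representation and helper name are assumptions):\n\nimport random\n\ndef split_and_alphabet(samples, seed=0):\n    # samples: list of (wav_path, transcript) pairs.\n    random.Random(seed).shuffle(samples)\n    n = len(samples)\n    train = samples[:8 * n // 10]\n    dev = samples[8 * n // 10:9 * n // 10]\n    test = samples[9 * n // 10:]\n    # The alphabet is every character occurring in the transcripts; its size\n    # fixes the width of the network's output layer.\n    alphabet = sorted({ch for _, text in samples for ch in text})\n    return train, dev, test, alphabet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},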
{
"text": "As a baseline, we trained DeepSpeech using the default parameters, except for batch sizes, on the Komi corpus from scratch. We then trained a transfer learning model on DeepSpeech, again with the default parameters except batch sizes, for comparison. Rather than using the default batch size of 1 for train, test, and dev, we used 128, 32, and 32 respectively for all experiments. Finally, we tuned the learning rate at factors of 10 from 0.001 to 0.000001 and dropouts of 5, 10, and 15%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
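{
"text": "This search space amounts to a small grid, sketched below with a hypothetical train_and_evaluate wrapper standing in for a full DeepSpeech training run:\n\nimport itertools\n\nLEARNING_RATES = [10 ** -k for k in range(3, 7)]  # 0.001 down to 0.000001\nDROPOUTS = [0.05, 0.10, 0.15]\nBATCH_SIZES = {'train': 128, 'dev': 32, 'test': 32}\n\ndef grid_search(train_and_evaluate):\n    # One full training run per (learning rate, dropout) combination.\n    results = {(lr, d): train_and_evaluate(lr, d, BATCH_SIZES)\n               for lr, d in itertools.product(LEARNING_RATES, DROPOUTS)}\n    return min(results, key=results.get)  # configuration with the lowest CER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},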
{
"text": "We trained the transfer learning model using the transfer_learning2\u00b2 branch of DeepSpeech. This branch allows you to cut off the last N layers of the network and reinitialize them from scratch. This is necessary for the final layer because the alphabet, and therefore the width of the final output layer, will almost certainly change. Meyer (2019) found that cutting off two layers and transferring four when using DeepSpeech, as well as allowing fine-tuning of \u00b2https://github.com/mozilla/DeepSpeech/tree/transfer-learning2 the transferred layers, provides the best boost in performance. We therefore follow suit, and cut off two layers and allow fine-tuning for our transfer models. For convenience, we used English as the source language because it ships with DeepSpeech and is known to have good results. Languages with comparable performance which are historically related to Komi, such as the main contact language Russian, provide potential avenues of research worth further experimentation.",
"cite_spans": [
{
"start": 335,
"end": 347,
"text": "Meyer (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
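{
"text": "Reusing the schematic DeepSpeechLike module sketched in Section 4.2, the idea behind the branch can be illustrated as follows; the function name is an assumption, and the real branch implements this inside DeepSpeech's TensorFlow checkpoint loading rather than in PyTorch:\n\ndef transfer_from_source(source_model, komi_alphabet_size, n_features=26):\n    # Keep the weights of the lower layers from the English source model and\n    # leave the top two layers (the post-LSTM ReLU layer and the output layer,\n    # whose width must match the new alphabet) freshly initialized; all layers\n    # remain trainable for fine-tuning.\n    target = DeepSpeechLike(n_features, komi_alphabet_size)\n    target.dense.load_state_dict(source_model.dense.state_dict())\n    target.lstm.load_state_dict(source_model.lstm.state_dict())\n    return target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},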
{
"text": "A language model is a critical piece of Deep-Speech because it corrects for the fact that every character in the orthography is not pronounced in natural speech. We generated out n-gram trie language model, as in Hannun et al. (2014) , using kenlm (Heafield, 2011) with the default parameters. Because a language model is trained on unlabeled text, we can train it on a much larger corpus than the speech dataset. Our corpus is composed of several books, newspaper articles, an old Wikipedia dump, and the Komi Republic website. These are all in the standard, modern Zyrian orthography. We found that the quantity of data provided by these various sources was more effective than using the transcriptions from our data.",
"cite_spans": [
{
"start": 213,
"end": 233,
"text": "Hannun et al. (2014)",
"ref_id": "BIBREF7"
},
{
"start": 248,
"end": 264,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
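{
"text": "A minimal sketch of querying such a model from Python, assuming an n-gram model has already been built from the plain-text corpus with KenLM's standard lmplz tool (the file name is hypothetical):\n\nimport kenlm  # Python bindings for KenLM (Heafield, 2011)\n\n# Built beforehand, e.g.: lmplz -o 5 < komi_corpus.txt > komi.arpa\nmodel = kenlm.Model('komi.arpa')\n\n# Log10 probability of a candidate sequence; DeepSpeech balances this\n# against the probability assigned by the neural network during decoding.\nprint(model.score('\u043d\u043e \u043f\u0435\u0447\u0435\u0440\u0430 \u044e \u0432\u044b\u043b\u044b\u043d', bos=True, eos=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},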
{
"text": "Because the language model is applied to the output of the neural network, it can be tuned separately. Therefore, in the interest of time, we trained the network with the default language model hyperparameters of 0.75 and 1.85 for alpha and beta respectively. We then tuned the language model on the output from the best neural network for the baseline, transfer learning baseline, and tuned models. We tuned the lm for alphas of 0.25, 0.50, and 0.75 and betas of 1, 3, 5, 7, and 9, as can be seen in Tables 3 and 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
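{
"text": "Because this tuning only rescores the already-computed network output, it can be run as a cheap grid search; a sketch with a hypothetical decode_and_score function:\n\nimport itertools\n\nALPHAS = [0.25, 0.50, 0.75]\nBETAS = [1, 3, 5, 7, 9]\n\ndef tune_lm(decode_and_score, dev_set):\n    # decode_and_score(alpha, beta, dev_set) -> CER; no retraining is needed.\n    scores = {(a, b): decode_and_score(a, b, dev_set)\n              for a, b in itertools.product(ALPHAS, BETAS)}\n    return min(scores, key=scores.get), scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},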
{
"text": "In order to see whether the language model was helping or hindering our performance, we set both alpha and beta to 0, effectively disabling the influence of the language model entirely. This also allowed us to check the output of the neural network directly, as this also disabled the insertion of word breaks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "The best results were achieved using the transfer learning model with a learning rate of 0.00001 and dropout of 10%. Early stopping was disabled as it is very aggressive, and all other parameters were the default or the batch sizes stated above as of release 0.5.1. Table 2 compares the best scores achieved for the baseline and transfer models under different conditions. The transfer models perform better under all respective conditions, but the baseline model outperforms the baseline transfer model when tuned. While tuning clearly has an effect on the Character Error Rate, the tuned models were unable to accurately recognize any full words. An error analysis showed that the words the baseline models were capturing are short filler words rather than content or even common function words. This is further discussed below. Table 3 and Table 4 show the impact of the language model on the accuracy of the speech recognition system. A higher alpha favors correcting the output of the neural network with the language model, and a higher beta favors inserting word breaks. We see in both tables that a lower alpha achieves better results, corroborating Table 2 , where disabling the language model achieved the best results. As alpha increases, the best results are achieved with increasing beta values as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 831,
"end": 838,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 843,
"end": 850,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1158,
"end": 1165,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
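{
"text": "For reference, the Character Error Rate used throughout is the Levenshtein distance between the reference and the hypothesis divided by the reference length; a self-contained sketch:\n\ndef edit_distance(ref, hyp):\n    # Standard Levenshtein distance via dynamic programming.\n    prev = list(range(len(hyp) + 1))\n    for i, r in enumerate(ref, 1):\n        cur = [i]\n        for j, h in enumerate(hyp, 1):\n            cur.append(min(prev[j] + 1, cur[j - 1] + 1,\n                           prev[j - 1] + (r != h)))\n        prev = cur\n    return prev[-1]\n\ndef cer(ref, hyp):\n    # Character Error Rate: edits needed per reference character.\n    return edit_distance(ref, hyp) / len(ref)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},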
{
"text": "These preliminary results show that transfer learning is a promising avenue for developing a speech recognition system for documentary audio data. While the gain is small as compared to the baseline, any improvement in the network will help the language model better predict the true orthography. In addition, we found that the transfer model predicts slightly more sensible guesses than the baseline, even if it is not reflected in the error rates. For example, (1) and (3) are produced by the baseline and (2) and (4) are produced by the transfer model. Despite the overall error rate being high, both of these pairs of examples indicate that the potential for improvement is there, and that transfer learning is slightly more accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "(1) \u043d\u043e \u043f\u0435\u0447\u0435\u0440\u0430 \u044e \u0432\u044b\u043b\u044b\u043d \u043d \u043f \u0437\u0438\u043d\u043e \u044e\u043d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "(2) \u043d\u043e \u043f\u0435\u0447\u0435\u0440\u0430 \u044e \u0432\u044b\u043b\u044b\u043d \u0438\u043d\u043e \u0435\u0447\u0435 \u044e \u043d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "(3) \u043d\u043e \u043c\u0435 \u0436\u0435 \u0442\u043e\u043c \u043d\u0430 \u043d \u043c \u0436\u0435 \u043c (4) \u043d\u043e \u043c\u0435 \u0436\u0435 \u0442\u043e\u043c \u043d\u0430 \u043d \u043d\u0435 \u0436\u0435 \u0442 A negative indication of potential, however, is that several of the examples which are boosting the CER in particular are filler words such as \u043d\u043e, \u043c\u043c, or \u0438. That DeepSpeech is only good at identifying these exceptionally simple examples with a high degree of accuracy could be an indication of a class imbalance problem where the simple, small examples become too ingrained in the network and prevent more complex, more desirable behavior from emerging. For example, in (5), \u043d\u043e and \u0438 appear in the output despite having no correlate in the source text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "(5) \u043f\u0435\u0440\u0435\u0434\u043e\u0432\u0438\u043a \u0432\u04e7\u043b\u044d\u043c\u0430 \u0438 \u0435\u0446 \u0442\u0435\u0444 \u0438 \u0442\u0435\u0445\u043e \u043f\u043d\u044f\u0435 \u043d\u043e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "DeepSpeech has built-in mechanisms for validating data before it is used, including skipping samples deemed too long or too short. For short audio clips, however, the threshold is fairly lenient. For this experiment, only two samples out of the 47232 were excluded for being too short. By increasing the minimum length of the audio clip for it to be valid, we can ignore these confounding data points and potentially improve the quality of the speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
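{
"text": "A sketch of such a stricter filter; the 0.5-second threshold is an assumed example value, not one we have evaluated:\n\nimport wave\n\nMIN_SECONDS = 0.5  # assumed example threshold, stricter than the default check\n\ndef clip_duration(path):\n    with wave.open(path, 'rb') as w:\n        return w.getnframes() / float(w.getframerate())\n\ndef drop_too_short(pairs):\n    # Exclude ultra-short clips, which tend to contain only fillers\n    # like 'mm' or 'aha'.\n    return [(wav, text) for wav, text in pairs\n            if clip_duration(wav) >= MIN_SECONDS]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},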
{
"text": "Another way to refine the dataset would be to selectively choose data generated by certain speakers, such as those who contributed most to the corpus. As previously mentioned, there are over 200 speakers who have contributed to this corpus, but most of them are only a small portion. While this does decrease the potential for robustness when developing a generalized speech recognition system, it is less of an issue when considering the integration of speech recognition into field work and documentation, as there tend to be few consultants providing large quantities of data each. This would also decrease our total quantity of data, but others have been successful using methods similar to those beta CER/WER 1 3 5 7 9 0.25 77.3/100.0 74.9/100.0 73.8/100.0 74.5/100.0 78.2/100.0 alpha 0.5 80.7/100.0 77.7/100.0 75.2/100.0 74.3/100.0 75.0/100.0 0.75 84.1/98.6 80.7/100.0 77.8/100.0 75.7/100.0 74.8/100.0 we outline above on smaller datasets (Meyer, 2019; Jimerson et al., 2018; Panaite et al., 2019; Yu et al., 2019) . The results in Table 2 show that the language model needs refinement, as it currently hinders rather than helps the performance of the system. The initial lm was trained on the training data from our corpus, and performed even worse than the current one. The current lm is assembled from a mix of domains from several time periods, which may be one explanation for its poor performance. However, tables 3 and 4 show that tuning the language model parameters is still important, and also indicate good parameters for training the neural network, as the language model is used for validation on the dev set.",
"cite_spans": [
{
"start": 945,
"end": 958,
"text": "(Meyer, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 959,
"end": 981,
"text": "Jimerson et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 982,
"end": 1003,
"text": "Panaite et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 1004,
"end": 1020,
"text": "Yu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1038,
"end": 1045,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Although the accuracy is at the moment rather low, it's worth considering how speech recognition technology could in principle be integrated into language documentation work. Previous work of (Gerstenberger et al., 2017a) presents a very effective approach to integrate a morphological analyser into ELAN through an external Python script, and there is no reason why speech recognition could not be implemented in similar fashion. The task may be computationally more complex, but if the speech recognition system is trained on individual utterances, it should always be possible to send such utterances as input to the system, and to predict their transcriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible ELAN integration",
"sec_num": "8"
},
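{
"text": "As a sketch of that workflow, using pympi (a common Python library for reading and writing ELAN files); the recognize function is a hypothetical wrapper around the trained model that decodes one slice of the audio:\n\nfrom pympi import Eaf\n\ndef transcribe_segments(eaf_path, tier, recognize):\n    # For each manually created segment on the given tier, predict a\n    # transcription with the ASR system and write it to a new tier.\n    eaf = Eaf(eaf_path)\n    eaf.add_tier('asr')\n    for start, end, _ in eaf.get_annotation_data_for_tier(tier):\n        eaf.add_annotation('asr', start, end, recognize(start, end))\n    eaf.to_file(eaf_path)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible ELAN integration",
"sec_num": "8"
},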
{
"text": "From this point of view the most straightforward way to use speech recognition in this context could be to manually segment the ELAN file, as one normally does in manual workflows, and predict the transcription on each of those segments individu-ally. In this paper we have only focused on the problem of speech recognition itself, but actually executing speech recognition on a new audio file involves segmentation and speaker diarization, both of which are complex and, to some degree, unsolved problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possible ELAN integration",
"sec_num": "8"
},
{
"text": "The most central upcoming task is to repeat the experiment with other speech recognition systems that are currently available. Other potential lines of research would be to repeat this experiment with comparable datasets on other languages, in order to see whether the challenges reported in this paper are more connected to features of Komi dataset, or if they relate more to DeepSpeech infrastructure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Further Work",
"sec_num": "9"
},
{
"text": "Meanwhile, there are also several things we can do towards improving the results on Komi. As several projects did report successful experiments when training on data that contains only an individual speaker, it seems logical to select only those speakers who contribute most to our corpus in the future, and retrain the system individually on that data. Similarly, simplifying the set of speakers such as male or female speakers only may have a similar effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Further Work",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "Niko Partanen and Michael Rie\u00dfler collaborate within the project Language Documentation meets Language Technology: The Next Step in the Description of Komi, funded by the Kone Foundation, Finland.This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating phonemic transcription of low-resource tonal languages for language documentation",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Hilaria",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Michaud",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Adams, Trevor Cohn, Graham Neubig, Hilaria Cruz, Steven Bird, and Alexis Michaud. 2018. Eval- uating phonemic transcription of low-resource tonal languages for language documentation.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building speech recognition systems for language documentation: The coedl endangered language pipeline and inference system (elpis)",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Foley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"T"
],
"last": "Arnold",
"suffix": ""
},
{
"first": "Rolando",
"middle": [],
"last": "Coto-Solano",
"suffix": ""
},
{
"first": "Gautier",
"middle": [],
"last": "Durantin",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Mark Ellison",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Van Esch",
"suffix": ""
},
{
"first": "Frantisek",
"middle": [],
"last": "Heath",
"suffix": ""
},
{
"first": "Zara",
"middle": [],
"last": "Kratochvil",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Maxwell-Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nash",
"suffix": ""
}
],
"year": 2018,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "205--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Foley, Joshua T Arnold, Rolando Coto-Solano, Gau- tier Durantin, T Mark Ellison, Daan van Esch, Scott Heath, Frantisek Kratochvil, Zara Maxwell-Smith, David Nash, et al. 2018. Building speech recogni- tion systems for language documentation: The coedl endangered language pipeline and inference system (elpis). In SLTU, pages 205-209.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Elpis, an accessible speech-to-text tool",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Foley",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Rakhi",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lambourne",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Buckeridge",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Wiles",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "4624--4625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Foley, Alina Rakhi, Nicholas Lambourne, Nicholas Buckeridge, and Janet Wiles. 2019. Elpis, an accessi- ble speech-to-text tool. Proc. Interspeech 2019, pages 4624-4625.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "\u041a\u043e\u0440\u043f\u0443\u0441 \u043a\u043e\u043c\u0438 \u044f\u0437\u044b\u043a\u0430",
"authors": [
{
"first": "",
"middle": [],
"last": "Fu-Lab",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu-Lab. 2019. \u041a\u043e\u0440\u043f\u0443\u0441 \u043a\u043e\u043c\u0438 \u044f\u0437\u044b\u043a\u0430.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Instant annotations in elan corpora of spoken and written komi, an endangered language of the barents sea region",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Gerstenberger",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rie\u00dfler",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Gerstenberger, Niko Partanen, and Michael Rie\u00dfler. 2017a. Instant annotations in elan corpora of spoken and written komi, an endangered language of the barents sea region. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 57-66.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Instant annotations in ELAN corpora of spoken and written Komi, an endangered language of the Barents Sea region",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Gerstenberger",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rie\u00dfler",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {
"DOI": [
"10.18653/v1/W17-0109"
]
},
"num": null,
"urls": [],
"raw_text": "Ciprian Gerstenberger, Niko Partanen, and Michael Rie\u00dfler. 2017b. Instant annotations in ELAN corpora of spoken and written Komi, an endangered language of the Barents Sea region. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 57-66, Hon- olulu. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep speech: Scaling up endto-end speech recognition",
"authors": [
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Erich",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Shubho",
"middle": [],
"last": "Sengupta",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catan- zaro, Greg Diamos, Erich Elsen, Ryan Prenger, San- jeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Y. Ng. 2014. Deep speech: Scaling up end- to-end speech recognition.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Daniel Abondolo, editor, The Uralic languages",
"authors": [
{
"first": "",
"middle": [],
"last": "Anu-Reet Hausenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Komi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "305--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anu-Reet Hausenberg. Komi. In Daniel Abondolo, edi- tor, The Uralic languages, pages 305-326. Routledge.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Kenlm: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the sixth workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. Kenlm: Faster and smaller lan- guage model queries. In Proceedings of the sixth work- shop on statistical machine translation, pages 187- 197. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Russian-language speech recognition system based on deepspeech",
"authors": [
{
"first": "O",
"middle": [
"O"
],
"last": "Iakushkin",
"suffix": ""
},
{
"first": "G",
"middle": [
"A"
],
"last": "Fedoseev",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Shaleva",
"suffix": ""
},
{
"first": "A",
"middle": [
"B"
],
"last": "Degtyarev",
"suffix": ""
},
{
"first": "O",
"middle": [
"S"
],
"last": "Sedova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "OO Iakushkin, GA Fedoseev, AS Shaleva, AB Degt- yarev, and OS Sedova. 2018. Russian-language speech recognition system based on deepspeech.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving ASR output for endangered language documentation",
"authors": [
{
"first": "Robbie",
"middle": [],
"last": "Jimerson",
"suffix": ""
},
{
"first": "Kruthika",
"middle": [],
"last": "Simha",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"W"
],
"last": "Ptucha",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Prudhommeaux",
"suffix": ""
}
],
"year": 2018,
"venue": "SLTU",
"volume": "",
"issue": "",
"pages": "187--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robbie Jimerson, Kruthika Simha, Raymond W Ptucha, and Emily Prudhommeaux. 2018. Improving ASR output for endangered language documentation. In SLTU, pages 187-191.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Influence of Russian on the syntax of Komi",
"authors": [
{
"first": "Marja",
"middle": [],
"last": "Leinonen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "57",
"issue": "",
"pages": "195--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marja Leinonen. 2002. Influence of Russian on the syn- tax of Komi. 57:195-358.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The russification of Komi",
"authors": [
{
"first": "Marja",
"middle": [],
"last": "Leinonen",
"suffix": ""
}
],
"year": 2006,
"venue": "Number 27 in Slavica Helsingiensia",
"volume": "",
"issue": "",
"pages": "234--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marja Leinonen. 2006. The russification of Komi. Number 27 in Slavica Helsingiensia, pages 234-245. Helsinki University Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Russian influence on the I\u017ema Komi dialect",
"authors": [
{
"first": "Marja",
"middle": [],
"last": "Leinonen",
"suffix": ""
}
],
"year": 2009,
"venue": "International Journal of Bilingualism",
"volume": "13",
"issue": "2",
"pages": "309--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marja Leinonen. 2009. Russian influence on the I\u017ema Komi dialect. International Journal of Bilingualism, 13(2):309-329.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-task and transfer learning in low-resource speech recognition",
"authors": [
{
"first": "Josh",
"middle": [],
"last": "Meyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josh Meyer. 2019. Multi-task and transfer learning in low-resource speech recognition.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Michaud",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"Anthony"
],
"last": "Cohn",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "S\u00e9verine",
"middle": [],
"last": "Guillaume",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Michaud, Oliver Adams, Trevor Anthony Cohn, Graham Neubig, and S\u00e9verine Guillaume. 2018. Inte- grating automatic transcription into the language doc- umentation workflow: Experiments with Na data and the Persephone toolkit.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards a Deep Speech model for Romanian language",
"authors": [
{
"first": "Marilena",
"middle": [],
"last": "Panaite",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Dascalu",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Trausan-Matu",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 22nd International Conference on Control Systems and Computer Science (CSCS)",
"volume": "",
"issue": "",
"pages": "416--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilena Panaite, Stefan Ruseti, Mihai Dascalu, and Ste- fan Trausan-Matu. 2019. Towards a Deep Speech model for Romanian language. In 2019 22nd Inter- national Conference on Control Systems and Computer Science (CSCS), pages 416-419. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bangla speech recognition for voice search",
"authors": [
{
"first": "Jillur Rahman",
"middle": [],
"last": "Saurav",
"suffix": ""
},
{
"first": "Shakhawat",
"middle": [],
"last": "Amin",
"suffix": ""
},
{
"first": "Shafkat",
"middle": [],
"last": "Kibria",
"suffix": ""
},
{
"first": "M Shahidur",
"middle": [],
"last": "Rahman",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 International Conference on Bangla Speech and Language Processing (ICBSLP)",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jillur Rahman Saurav, Shakhawat Amin, Shafkat Kib- ria, and M Shahidur Rahman. 2018. Bangla speech recognition for voice search. In 2018 International Conference on Bangla Speech and Language Process- ing (ICBSLP), pages 1-4. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Crosslanguage end-to-end speech recognition research based on transfer learning for the low-resource Tujia language",
"authors": [
{
"first": "Chongchong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yunbing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yueqiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Shixuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueer",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Symmetry",
"volume": "11",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chongchong Yu, Yunbing Chen, Yueqiao Li, Meng Kang, Shixuan Xu, and Xueer Liu. 2019. Cross- language end-to-end speech recognition research based on transfer learning for the low-resource Tujia language. Symmetry, 11(2):179.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "\u041a\u043e\u043c\u0438 \u0441\u0451\u0440\u043d\u0438\u0441\u0438\u043a\u0430\u0441 \u043a\u044b\u0432\u0447\u0443\u043a\u04e7\u0440. \u0421\u043b\u043e\u0432\u0430\u0440\u044c \u0434\u0438\u0430\u043b\u0435\u043a\u0442\u043e\u0432 \u043a\u043e\u043c\u0438 \u044f\u0437\u044b\u043a\u0430: \u0432 2-\u0445 \u0442\u043e\u043c\u0430\u0445/\u0418\u042f\u041b\u0418 \u041a\u043e\u043c\u0438 \u041d\u0426 \u0423\u0440\u041e \u0420\u0410\u041d; \u043f\u043e\u0434 \u0440\u0435\u0434. \u041b\u041c \u0411\u0435\u0437\u043d\u043e\u0441\u0438\u043a\u043e\u0432\u043e\u0439",
"authors": [
{
"first": "\u041b\u041c",
"middle": [],
"last": "\u0411\u0435\u0437\u043d\u043e\u0441\u0438\u043a\u043e\u0432\u0430",
"suffix": ""
},
{
"first": "\u0415\u0410",
"middle": [],
"last": "\u0410\u0439\u0431\u0430\u0431\u0438\u043d\u0430",
"suffix": ""
},
{
"first": "\u041d\u041a",
"middle": [],
"last": "\u0417\u0430\u0431\u043e\u0435\u0432\u0430",
"suffix": ""
},
{
"first": "\u0420\u0418",
"middle": [],
"last": "\u041a\u043e\u0441\u043d\u044b\u0440\u0435\u0432\u0430",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u041b\u041c \u0411\u0435\u0437\u043d\u043e\u0441\u0438\u043a\u043e\u0432\u0430, \u0415\u0410 \u0410\u0439\u0431\u0430\u0431\u0438\u043d\u0430, \u041d\u041a \u0417\u0430\u0431\u043e\u0435\u0432\u0430, and \u0420\u0418 \u041a\u043e\u0441\u043d\u044b\u0440\u0435\u0432\u0430. 2012. \u041a\u043e\u043c\u0438 \u0441\u0451\u0440\u043d\u0438\u0441\u0438\u043a\u0430\u0441 \u043a\u044b\u0432\u0447\u0443\u043a\u04e7\u0440. \u0421\u043b\u043e\u0432\u0430\u0440\u044c \u0434\u0438\u0430\u043b\u0435\u043a\u0442\u043e\u0432 \u043a\u043e\u043c\u0438 \u044f\u0437\u044b\u043a\u0430: \u0432 2-\u0445 \u0442\u043e\u043c\u0430\u0445/\u0418\u042f\u041b\u0418 \u041a\u043e\u043c\u0438 \u041d\u0426 \u0423\u0440\u041e \u0420\u0410\u041d; \u043f\u043e\u0434 \u0440\u0435\u0434. \u041b\u041c \u0411\u0435\u0437\u043d\u043e\u0441\u0438\u043a\u043e\u0432\u043e\u0439.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "\u041f\u0435\u0440\u044b\u043c \u043a\u044b\u0432\u044a\u044f\u0441\u043b\u00f6\u043d \u0442\u0430\u043b\u0443\u043d\u044a\u044f \u0441\u0435\u0440\u043f\u0430\u0441. Suomalais-Ugrilaisen Seuran Toimituksia = M\u00e9moires de la Soci\u00e9t\u00e9 Finno-Ougrienne",
"authors": [
{
"first": "\u0419\u04e7\u043b\u0433\u0438\u043d\u044c",
"middle": [],
"last": "\u0426\u044b\u043f\u0430\u043d\u043e\u0432",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "258",
"issue": "",
"pages": "191--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u0419\u04e7\u043b\u0433\u0438\u043d\u044c \u0426\u044b\u043f\u0430\u043d\u043e\u0432. 2009. \u041f\u0435\u0440\u044b\u043c \u043a\u044b\u0432\u044a\u044f\u0441\u043b\u00f6\u043d \u0442\u0430\u043b\u0443\u043d\u044a\u044f \u0441\u0435\u0440\u043f\u0430\u0441. Suomalais-Ugrilaisen Seuran Toimituksia = M\u00e9moires de la Soci\u00e9t\u00e9 Finno-Ougrienne, 258:191- 206.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Statistics on the training data",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"text": "The best results for our baseline and transfer learning models without tuning the language model, with tuning, and without a language model",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "The impact of tuning the language model parameters on Character and Word Error Rates for the baseline model.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"text": "The impact of tuning the language model parameters on Character and Word Error Rates for the transfer learning model.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}