{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:44.310828Z" }, "title": "Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?", "authors": [ { "first": "Tim", "middle": [], "last": "Isbister", "suffix": "", "affiliation": {}, "email": "tim.isbister@peltarion.com" }, { "first": "Fredrik", "middle": [], "last": "Carlsson", "suffix": "", "affiliation": {}, "email": "fredrik.carlsson@ri.se" }, { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "", "affiliation": {}, "email": "magnus.sahlgren@ri.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most work in NLP makes the assumption that it is desirable to develop solutions in the native language in question. There is consequently a strong trend towards building native language models even for lowresource languages. This paper questions this development, and explores the idea of simply translating the data into English, thereby enabling the use of pretrained, and large-scale, English language models. We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models in most Scandinavian languages. The exception to this is Finnish, which we assume is due to inferior translation quality. Our results suggest that machine translation is a mature technology, which raises a serious counter-argument for training native language models for lowresource languages. This paper therefore strives to make a provocative but important point. As English language models are improving at an unprecedented pace, which in turn improves machine translation, it is from an empirical and environmental standpoint more effective to translate data from low-resource languages into English, than to build language models for such languages.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Most work in NLP makes the assumption that it is desirable to develop solutions in the native language in question. There is consequently a strong trend towards building native language models even for lowresource languages. This paper questions this development, and explores the idea of simply translating the data into English, thereby enabling the use of pretrained, and large-scale, English language models. We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models in most Scandinavian languages. The exception to this is Finnish, which we assume is due to inferior translation quality. Our results suggest that machine translation is a mature technology, which raises a serious counter-argument for training native language models for lowresource languages. This paper therefore strives to make a provocative but important point. As English language models are improving at an unprecedented pace, which in turn improves machine translation, it is from an empirical and environmental standpoint more effective to translate data from low-resource languages into English, than to build language models for such languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Although the Transformer architecture for deep learning was only recently introduced (Vaswani et al., 2017) , it has had a profound impact on the development in Natural Language Processing (NLP) during the last couple of years. 
Starting with the seminal BERT model (Devlin et al., 2019) , we have witnessed an unprecedented development of new model variations (Yang et al., 2019; Clark et al., 2020; Radford et al., 2019; Brown et al., 2020) with new State Of The Art (SOTA) results being produced in all types of NLP benchmarks (Wang et al., 2018 (Wang et al., , 2019 Nie et al., 2020) .", "cite_spans": [ { "start": 85, "end": 107, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF19" }, { "start": 265, "end": 286, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 360, "end": 379, "text": "(Yang et al., 2019;", "ref_id": "BIBREF23" }, { "start": 380, "end": 399, "text": "Clark et al., 2020;", "ref_id": "BIBREF2" }, { "start": 400, "end": 421, "text": "Radford et al., 2019;", "ref_id": "BIBREF16" }, { "start": 422, "end": 441, "text": "Brown et al., 2020)", "ref_id": null }, { "start": 529, "end": 547, "text": "(Wang et al., 2018", "ref_id": "BIBREF21" }, { "start": 548, "end": 568, "text": "(Wang et al., , 2019", "ref_id": "BIBREF20" }, { "start": 569, "end": 586, "text": "Nie et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The leading models are large both with respect to the number of parameters and the size of the training data used to build the model; this correlation between size and performance has been demonstrated by Kaplan et al. (2020) . The ongoing scale race has culminated in the 175-billion parameter model GPT-3, which was trained on some 45TB of data summing to around 500 billion tokens (Brown et al., 2020) . 1 Turning to the Scandinavian languages, there are no such truly large-scale models available. At the time of writing, there are around 300 Scandinavian models available in the Hugging Face Transformers model repository. 2 Most of these are translation models, but there is already a significant number of monolingual models available in the Scandinavian languages. 3 However, none of these Scandinavian language models are even close to the currently leading English models in parameter size or training data used. As such, we can expect that their relative performance in comparison with the leading English models is significantly worse. Furthermore, we can expect that the number of monolingual Scandinavian models will continue to grow at an exponential pace during the near future. The question is: do we need all these models? Or even: do we need any of these models? Can't we simply translate our data and tasks to English and use some suitable English SOTA model to solve the problem? This paper provides an empirical study of this idea. ", "cite_spans": [ { "start": 205, "end": 225, "text": "Kaplan et al. (2020)", "ref_id": null }, { "start": 384, "end": 404, "text": "(Brown et al., 2020)", "ref_id": null }, { "start": 407, "end": 408, "text": "1", "ref_id": null }, { "start": 773, "end": 774, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is already a large, and rapidly growing, literature on the use of multilingual models (Conneau et al., 2020a; Xue et al., 2020) , and on the possibility to achieve cross-lingual transfer in multilingual language models (Ruder et al., 2019; Artetxe et al., 2020; Lauscher et al., 2020; Conneau et al., 2020b; Karthikeyan et al., 2020; Nooralahzadeh et al., 2020) . 
From this literature, we know, among other things, that multilingual models tend to be competitive with monolingual ones, and that languages with smaller amounts of available training data in particular can benefit significantly from transfer effects from related languages with more training data available. This line of study focuses on the possibility of transferring models to a new language, thereby facilitating the application of the model to data in the original language. By contrast, our interest is to transfer the data to another language, thereby enabling the use of SOTA models to solve whatever task we are interested in. We are only aware of one previous study in this direction: Duh et al. (2011) perform cross-lingual sentiment classification using machine translation methods that are now outdated, and claim that even if perfect translation were possible, we would still see a degradation of performance. In this paper, we use modern machine translation methods, and demonstrate empirically that no degradation of performance is observable when using large SOTA models.", "cite_spans": [ { "start": 92, "end": 115, "text": "(Conneau et al., 2020a;", "ref_id": null }, { "start": 116, "end": 133, "text": "Xue et al., 2020)", "ref_id": "BIBREF22" }, { "start": 225, "end": 245, "text": "(Ruder et al., 2019;", "ref_id": "BIBREF18" }, { "start": 246, "end": 267, "text": "Artetxe et al., 2020;", "ref_id": "BIBREF0" }, { "start": 268, "end": 290, "text": "Lauscher et al., 2020;", "ref_id": "BIBREF13" }, { "start": 291, "end": 313, "text": "Conneau et al., 2020b;", "ref_id": "BIBREF5" }, { "start": 314, "end": 339, "text": "Karthikeyan et al., 2020;", "ref_id": "BIBREF12" }, { "start": 340, "end": 367, "text": "Nooralahzadeh et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In order to use comparable data in the languages under consideration (Swedish, Danish, Norwegian, and Finnish), we contribute a Scandinavian sentiment corpus, ScandiSent (https://github.com/timpal0l/ScandiSent), consisting of data downloaded from trustpilot.com. For each language, the corresponding subdomain was used to gather reviews with an associated text. This data covers a wide range of topics and is divided into 22 different categories, such as electronics, sports, travel, food, and health. The reviews are evenly distributed among all categories for each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "All reviews have a corresponding rating in the range 1 \u2212 5. The review ratings were polarised into binary labels, and reviews with a neutral rating were discarded. Ratings of 4 or 5 thus correspond to a positive label, and ratings of 1 or 2 correspond to a negative label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "To further improve the quality of the data, we apply fastText's language identification model (Joulin et al., 2016) to filter out any reviews written in the wrong language. This results in a balanced set of 10,000 texts for each language, with 7,500 samples for training and 2,500 for testing. 
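To make the data preparation described above concrete, the following is a minimal sketch of how such a split could be built. It is not the authors' released pipeline: the `reviews` iterable of (text, rating) pairs, the per-language ISO codes, and the use of the publicly released `lid.176.bin` fastText language-identification model are illustrative assumptions.

```python
import random
import fasttext  # pip install fasttext

# Pretrained fastText language-identification model (lid.176.bin from fasttext.cc).
lang_id = fasttext.load_model("lid.176.bin")

def build_split(reviews, lang, n_train=7500, n_test=2500, seed=0):
    """reviews: iterable of (text, rating) pairs with ratings in 1-5 (hypothetical input);
    lang: ISO 639-1 code such as "sv", "no", "da", "fi", or "en"."""
    pos, neg = [], []
    for text, rating in reviews:
        if rating == 3:                      # neutral ratings are discarded
            continue
        label = 1 if rating >= 4 else 0      # 4-5 -> positive, 1-2 -> negative
        # keep only reviews whose predicted language matches the expected one
        labels, _ = lang_id.predict(text.replace("\n", " "))
        if labels[0] != f"__label__{lang}":
            continue
        (pos if label == 1 else neg).append((text, label))
    random.seed(seed)
    random.shuffle(pos)
    random.shuffle(neg)
    half = (n_train + n_test) // 2
    data = pos[:half] + neg[:half]           # balanced positive/negative set of 10,000 texts
    random.shuffle(data)
    return data[:n_train], data[n_train:n_train + n_test]
```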
Table 1 summarizes statistics for the various datasets of each respective language.", "cite_spans": [ { "start": 94, "end": 115, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 294, "end": 301, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "For all the Nordic languages we generate a corresponding English dataset by direct Machine Translation, using the Neural Machine Translation (NMT) model provided by Google. 5 To justifiably isolate the effects of modern day machine translation, we restrict the translation to be executed in prior to all experiments. This means that all translation is executed prior to any fine-tuning, and that the translation model is not updated during training.", "cite_spans": [ { "start": 173, "end": 174, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation", "sec_num": "3.1" }, { "text": "In order to fairly select a representative pre-trained model for each considered Scandinavian language, we opt for the most popular native model according to Hugging Face. For each considered language, this corresponds to a BERT-Base model, hence each language is represented by a Language Model Model name in Hugging Face", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4" }, { "text": "Data size KB/bert-base-swedish-cased sv 3B tokens TurkuNLP/bert-base-finnish-cased-v1 fi 3B tokens ltgoslo/norbert no 2B tokens DJSammy/bert-base-danish-uncased BotXO,ai da 1.6B tokens bert-base-cased en 3.3B tokens bert-base-cased-large en 3.3B tokens xlm-roberta-large multi 295B tokens Table 4 : Accuracy on the various sentiment datasets using XLM-R-Large of identical architecture. The difference between these models is therefore mainly in the quantity and type of texts used during training, in addition to potential differences in training hyperparameters.", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "We compare these Scandinavian models against the English BERT-Base and BERT-Large models by Google. English BERT-Base is thus identical in architecture to the Scandinavian models, while BERT-Large is twice as deep and contains more than three times the amount of parameters as BERT-Base. Finally, we include XLM-R-Large, in order to compare with a model trained on significantly larger (and multilingual) training corpora. Table 2 lists both the Scandinavian and English models, together with the size of each models corresponding training corpus.", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 430, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Language", "sec_num": null }, { "text": "We fine-tune and evaluate each model towards each of the different sentiment datasets, using the hyperparameters listed in Appendix 5. From this we report the binary accuracy, with the results for the BERT models available in Table 3 , and the XLM-R results in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 261, "end": 268, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "The upper part of Table 3 shows the results using the original monolingual data. 
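As a concrete companion to the setup in Sections 3.1 and 5.1, the following is a minimal sketch of the translate-then-fine-tune procedure that produces numbers like those in Table 3. It is not the authors' exact configuration: the paper translates with Google's NMT API, for which an open MarianMT model (Helsinki-NLP/opus-mt-sv-en) is substituted here as a stand-in, and the variables holding the ScandiSent texts and labels, as well as the hyperparameters, are illustrative assumptions.

```python
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, pipeline)

# --- 1. Offline translation into English (done once, before any fine-tuning). ---
# The paper uses Google's NMT API; an open MarianMT model is used here as a stand-in.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-en")
train_texts_en = [out["translation_text"] for out in translator(train_texts_sv)]
test_texts_en = [out["translation_text"] for out in translator(test_texts_sv)]

# --- 2. Fine-tune an English model on the translated reviews. ---
model_name = "bert-base-cased"  # or "bert-large-cased" / "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = Dataset.from_dict({"text": train_texts_en, "label": train_labels}).map(tokenize, batched=True)
test_ds = Dataset.from_dict({"text": test_texts_en, "label": test_labels}).map(tokenize, batched=True)

def binary_accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-en", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=binary_accuracy,
)
trainer.train()
print(trainer.evaluate())  # binary accuracy on the 2,500 held-out reviews
```

The native-language baselines are fine-tuned in the same way, but directly on the untranslated reviews.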
From these results, we note a clear diagonal (marked by underline), where the native models perform best in their own respective language. BERT-Large significantly outperforms BERT-Base for all non-English datasets, and it also performs slightly better on the original English data.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Monolingual Results", "sec_num": "5.2" }, { "text": "Comparing these results with the amount of training data for each model (Table 2), we see a correlation between performance and the amount of pretraining data. The Swedish, Finnish and English models have been trained on the largest amounts of data, leading to slightly higher performance in their native languages. The Danish model, which has been trained on the least data, performs the worst on its own native language.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 81, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Monolingual Results", "sec_num": "5.2" }, { "text": "For the cross-lingual evaluation, BERT-Large clearly outperforms all other non-native models. The Swedish model reaches higher performance on Norwegian and Finnish compared to the other non-native Scandinavian models. However, the Norwegian model performs best of the non-native models on the Danish data. Finally, we observe an interesting anomaly in the results on the English data, where the Norwegian model performs considerably worse than the other Scandinavian models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Results", "sec_num": "5.2" }, { "text": "The results for the machine translated data, available as the lower part of Table 3, show that BERT-Large outperforms all native models on their native data, with the exception of Finnish. The English BERT-Base reaches higher performance on the machine translated data than the Norwegian and Danish models on their respective native data. The difference between English BERT-Base using the machine translated data and the Swedish BERT using native data is about one percentage point.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Translation Results", "sec_num": "5.3" }, { "text": "As expected, all Scandinavian models perform significantly worse on their respective machine translated data. We find no clear trend among the Scandinavian models when evaluated on translated data from other languages. We note, however, that the Danish model performs better on the machine translated Swedish data than on the original Swedish data, and that the Finnish model also improves its performance on the other translated datasets (except for Swedish). All models (except, of course, the Finnish model) perform better on the machine translated Finnish data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Results", "sec_num": "5.3" }, { "text": "Finally, Table 4 shows the results from XLM-R-Large, which has been trained on several orders of magnitude more data than the other models. XLM-R-Large achieves top scores on the sentiment data for all languages except for Finnish. 
We note that XLM-R produces slightly better results on the native data for Swedish, Norwegian and Finnish, while the best result for Danish is produced on the machine translated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Results", "sec_num": "5.3" }, { "text": "Our experiments demonstrate that it is possible to reach better performance in a sentiment analysis task by translating the data into English and using a large pre-trained English language model, compared to using data in the original language and a smaller native language model. Whether this result holds for other tasks as well remains to be shown, but we see no theoretical reason why it would not hold. We also find a strong correlation between the quantity of pre-training data and downstream performance. We note that XLM-R in particular performs well, which may be due to data size, and potentially the ability of the model to take advantage of transfer effects between languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "An interesting exception in our results is the Finnish data, which is the only task for which the native model performs best, despite XLM-R reportedly having been trained on more Finnish data than the native Finnish BERT model (Conneau et al., 2020a). One hypothesis for this behavior is that the alleged transfer effects in XLM-R hold primarily for typologically similar languages, and that the performance on typologically unique languages, such as Finnish, may actually be negatively affected by the transfer. The relatively poor performance of BERT-Large on the translated Finnish data is likely due to insufficient quality of the machine translation.", "cite_spans": [ { "start": 227, "end": 250, "text": "(Conneau et al., 2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "The proposed approach is thus obviously dependent on the existence of a high-quality machine translation solution. The Scandinavian languages are typologically very similar both to each other and to English, which probably explains the good performance of the proposed approach even when using a generic translation API. For other languages, such as Finnish in our case, one would probably need to be more careful in selecting a suitable translation model. Whether the suggested methodology will be applicable to other language pairs thus depends on the quality of the translations and on the availability of large-scale language models in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "Our results can be seen as evidence for the maturity of machine translation. Even using a generic translation API, we can leverage the existence of large-scale English language models to improve performance compared with building a solution in the native language. This raises a serious counter-argument against the habitual practice in applied NLP of developing native solutions to practical problems. Hence, we conclude with the somewhat provocative claim that it might be unnecessary from an empirical standpoint to train models in languages where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "1. there exist high-quality machine translation models to English, 2. 
there does not exist enough training data to build a competitive native language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "In such cases, we may be better off relying on existing large-scale English models. This is especially clear for practical applications, where it would be beneficial to host only one large English model and translate incoming requests from the various languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Conclusion", "sec_num": "6" }, { "text": "The currently largest English model contains 1.6 trillion parameters (Fedus et al., 2021). 2 huggingface.co/models 3 At the time of submission, there are 17 monolingual Swedish models available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://cloud.google.com/translate/docs/advanced/translating-text-v3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4623--4637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Language models are few-shot learners. 
2020", "authors": [ { "first": "Tom", "middle": [ "B" ], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. http://arxiv.org/abs/2005.14165 Lan- guage models are few-shot learners.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre- training text encoders as discriminators rather than generators. 
In International Conference on Learn- ing Representations.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [], "year": null, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Unsupervised cross-lingual representation learn- ing at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Emerging cross-lingual structure in pretrained language models", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6022--6034", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Is machine translation ripe for cross-lingual sentiment classification?", "authors": [ { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Akinori", "middle": [], "last": "Fujino", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", "volume": "2", "issue": "", "pages": "429--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Duh, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual senti- ment classification? 
In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, HLT '11, page 429-433, USA. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "authors": [ { "first": "William", "middle": [], "last": "Fedus", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.03961" ] }, "num": null, "urls": [], "raw_text": "William Fedus, Barret Zoph, and Noam Shazeer. 2021. http://arxiv.org/abs/arXiv:2101.03961 Switch trans- formers: Scaling to trillion parameter models with simple and efficient sparsity.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Scaling laws for neural language models", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scaling laws for neural language models.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cross-lingual ability of multilingual bert: An empirical study", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual bert: An empirical study. In International Con- ference on Learning Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers", "authors": [ { "first": "Anne", "middle": [], "last": "Lauscher", "suffix": "" }, { "first": "Vinit", "middle": [], "last": "Ravishankar", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4483--4499", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Lauscher, Vinit Ravishankar, Ivan Vuli\u0107, and Goran Glava\u0161. 2020. 
From zero to hero: On the limitations of zero-shot language transfer with mul- tilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4483-4499, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Zero-Shot Cross-Lingual Transfer with Meta Learning", "authors": [ { "first": "Farhad", "middle": [], "last": "Nooralahzadeh", "suffix": "" }, { "first": "Giannis", "middle": [], "last": "Bekoulis", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Bjerva", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2020, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-Shot Cross-Lingual Transfer with Meta Learning. In Pro- ceedings of EMNLP. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
Techni- cal report, Open AI.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A survey of cross-lingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "", "pages": "569--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30, pages 5998-6008. 
Cur- ran Associates, Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3266--3280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, pages 3266-3280. Cur- ran Associates, Inc.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "authors": [ { "first": "Linting", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Barua", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Sid- dhant, Aditya Barua, and Colin Raffel. 2020. http://arxiv.org/abs/2010.11934 mT5: A massively multilingual pre-trained text-to-text transformer. 
ArXiv:2010.11934.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Swedish 31,478 0.07 4.39 14.75
Norwegian 26,168 0.06 4.21 14.10
Danish 42,358 0.06 4.17 19.55
Finnish 34,729 0.14 5.84 10.69
English 27,610 0.04 3.99 16.87
Table 1: The vocabulary size, lexical richness, average word length, and average sentence length for the Trustpilot sentiment data of each language.
", "type_str": "table", "num": null, "text": "Language Vocab size Lexical richness Avg. word length Avg. sentence length", "html": null }, "TABREF1": { "content": "
Model sv no da fi en
BERT-sv 96.76 89.32 90.68 83.40 86.76
BERT-no 90.40 95.00 92.52 83.16 78.52
BERT-da 86.24 89.16 94.72 80.16 85.28
BERT-fi 90.24 86.36 87.72 95.72 84.32
BERT-en 85.72 87.60 87.72 84.16 96.08
BERT-en-Large 91.16 91.88 92.40 89.56 97.00
Translated Into English
BERT-sv 88.24 87.80 89.68 83.60 -
BERT-no 88.40 86.80 88.44 80.72 -
BERT-da 88.24 84.20 89.12 83.32 -
BERT-fi 90.04 90.08 89.36 86.04 -
BERT-en 95.76 95.48 95.96 92.96 -
BERT-en-Large 97.16 96.56 97.48 94.84 -
", "type_str": "table", "num": null, "text": "Models used in the experiments and the size of their corresponding training data. 'B' is short for billion.", "html": null }, "TABREF2": { "content": "
Model sv no da fi en
XLM-R-large 97.48 97.16 97.68 95.60 97.76
Translated Into English
XLM-R-large 97.04 96.84 98.24 95.48-
", "type_str": "table", "num": null, "text": "Accuracy for monolingual models for the native sentiment data (upper part) and machine translated data (lower part). Underlined results are the best results per language in using the native data, while boldface marks the best results considering both native and machine translated data.", "html": null } } } }