# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
- The Icelandic Gigaword Corpus (IGC)
- The Icelandic Common Crawl Corpus (IC3)
- The Icelandic Crawled Corpus (ICC)
- The Multilingual Colossal Clean Crawled Corpus (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
After document-level deduplication and filtering, the total corpus size was 7.41B tokens, split evenly between the two languages. The model was trained for 1.1 million steps using a WordPiece tokenizer with a vocabulary size of 64,105; otherwise, default settings were used.
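Below is a minimal sketch of loading the model with the Hugging Face `transformers` library. The Hub identifier `jonfd/electra-small-is-no` is an assumption for illustration; substitute the actual repository name of this model.

```python
# A minimal usage sketch, assuming the model is published on the
# Hugging Face Hub. The identifier below is hypothetical.
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "jonfd/electra-small-is-no"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

# ELECTRA is a discriminator: its pretraining head scores each token
# as original vs. replaced, rather than predicting masked tokens.
text = "Reykjavík er höfuðborg Íslands."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # per-token replaced-token-detection scores
```

For downstream tasks (e.g. classification or NER), the same checkpoint can instead be loaded with a task-specific head such as `ElectraForSequenceClassification` and fine-tuned as usual.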
## Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.