---
language: no
license: cc-by-4.0
pipeline_tag: fill-mask
tags:
- norwegian
- bert
thumbnail: https://raw.githubusercontent.com/ltgoslo/NorBERT/main/Norbert.png
---

## Quickstart

**Release 2.0** (February 7, 2022)

Trained on a very large corpus of Norwegian (C4 + NCC, about 15 billion word tokens). NorBERT 2 features a 50 000-word vocabulary and was trained using Whole Word Masking.
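With the `transformers` library, the model can be queried for masked-word predictions in a few lines. This is a minimal sketch, assuming the model is published on the Hugging Face Hub as `ltg/norbert2`; the example sentence is illustrative.

```python
# Minimal fill-mask sketch for NorBERT 2. The Hub id "ltg/norbert2" is an
# assumption about where this model is published; a local path to the
# extracted checkpoint can be passed as model= instead.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ltg/norbert2")

# Predict the masked word in a Norwegian sentence ("Oslo is [MASK] in Norway").
for prediction in fill_mask("Oslo er [MASK] i Norge."):
    print(f"{prediction['token_str']:<15} {prediction['score']:.3f}")
```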

Download the model here:

* Cased Norwegian BERT Base 2.0 (NorBERT 2): [221.zip](http://vectors.nlpl.eu/repository/20/221.zip) (a scripted download is sketched below)
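To fetch and unpack that archive programmatically, here is a minimal sketch using only the Python standard library; the target directory `norbert2` is an illustrative choice.

```python
# Download and extract the NorBERT 2 archive from the NLPL repository.
# The target directory "norbert2" is an illustrative choice.
import urllib.request
import zipfile

URL = "http://vectors.nlpl.eu/repository/20/221.zip"

urllib.request.urlretrieve(URL, "221.zip")  # fetch the archive
with zipfile.ZipFile("221.zip") as archive:
    archive.extractall("norbert2")          # unpack the model files
```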

More about the NorBERT training corpora, training procedure, and evaluation benchmarks: http://norlm.nlpl.eu/

Associated code: https://github.com/ltgoslo/NorBERT

For more details, see this paper:

_Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. [Large-Scale Contextualised Language Modelling for Norwegian](https://arxiv.org/abs/2104.06546), NoDaLiDa'21 (2021)_

NorBERT was trained as part of NorLM, a joint initiative of the [EOSC-Nordic](https://www.eosc-nordic.eu/) project (European Open Science Cloud), coordinated by the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo.

The computations were performed on resources provided by UNINETT Sigma2, the National Infrastructure for High Performance Computing and Data Storage in Norway.