---
tags:
- token-classification
- sequence-tagger-model
language: sv
datasets:
- suc3_1
widget:
- text: "Emil bor i Lönneberga"
---

# KB-BERT for NER

## Mixed cased and uncased data

This model is based on [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) and was fine-tuned on the [SUC 3.1](https://huggingface.co/datasets/KBLab/suc3_1) corpus, using the _simple_ tags and partially lowercased data. For this model we used a variant of the data that does **not** use BIO encoding to differentiate between the beginnings (B) and insides (I) of named entity tags.

The model was trained on the training data only, with the best model chosen by its performance on the validation data. You can find more information about the model and its performance on our blog: https://kb-labb.github.io/posts/2022--02-07-suc31
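
To illustrate the difference between the two tag schemes mentioned above: with BIO encoding, each entity label carries a `B-` (beginning) or `I-` (inside) prefix, while the simple variant used here keeps only the entity type itself. A minimal sketch of that mapping (the helper function and example tags are illustrative, not part of the dataset tooling):

```python
def bio_to_simple(tags):
    """Strip B-/I- prefixes from BIO tags, keeping only the entity type.

    "O" (outside any entity) is left unchanged. This mirrors the
    "simple" tag variant described above, where beginnings and
    insides of an entity share a single label.
    """
    return [t if t == "O" else t.split("-", 1)[1] for t in tags]

# BIO-encoded tags for "Emil bor i Lönneberga"
bio_tags = ["B-person", "O", "O", "B-place"]
print(bio_to_simple(bio_tags))  # ['person', 'O', 'O', 'place']
```

With simple tags the model predicts fewer classes, at the cost of not being able to mark the boundary between two adjacent entities of the same type.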