Update README.md

README.md CHANGED
@@ -57,21 +57,11 @@ This is a classification dataset created from a subset of the [Talk of Norway](h
 
 ### Supported Tasks and Leaderboards
 
-This dataset is meant for classification.
-
-The dataset was also
-
-
-|:-----------|:------------|
-|mT5-base|73.2 |
-|mBERT-base|78.4 |
-|NorBERT-base|78.2 |
-|North-T5-small|80.5 |
-|nb-bert-base|81.8 |
-|North-T5-base|85.3 |
-|North-T5-large|86.7 |
-|North-T5-xl|88.7 |
-|North-T5-xxl|91.8|
+This dataset is meant for classification. The first model tested on this dataset was [NB-BERT-base](https://huggingface.co/NbAiLab/nb-bert-base), which claims an F1-score of [81.9](https://arxiv.org/abs/2104.09617).
+
+The dataset was also used for testing the North-T5 models, where the [XXL model](https://huggingface.co/north/t5_xxl_NCC) claims an F1-score of 91.8.
+
+Please note that some of the text fields are quite long. Truncating them, for instance when switching to a tokenizer with a shorter maximum length, will make the task harder.
 
 
 ### Languages
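The truncation caveat added above is easy to check empirically. Below is a minimal sketch using the `datasets` and `transformers` libraries; the dataset id `NbAiLab/norwegian_parliament` and the `text` column name are assumptions (not stated in this diff) and may need to be replaced with the actual values for this dataset.

```python
# Minimal sketch of the truncation caveat: count how many examples
# exceed a given token limit, then tokenize with truncation.
# ASSUMPTIONS: dataset id "NbAiLab/norwegian_parliament" and the
# "text" column are guesses; substitute the real id and field name.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("NbAiLab/norwegian_parliament", split="train")
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base")

max_length = 512  # typical BERT-style context limit

# How many speeches would lose tokens at this limit?
lengths = [len(tokenizer.encode(text)) for text in dataset["text"]]
truncated = sum(length > max_length for length in lengths)
print(f"{truncated}/{len(lengths)} examples exceed {max_length} tokens")

# Tokenizing with truncation silently drops the overflow, which is
# why a shorter maximum length makes the task harder.
encoded = dataset.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, max_length=max_length
    ),
    batched=True,
)
```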