Update README.md

In all cases, this model was finetuned for specific downstream tasks.

## NER

Mean F1 scores were used to evaluate performance.
| system                                                                  | dataset | F1 score |
|:------------------------------------------------------------------------|:--------|---------:|
| **XLM-R-BERTić** (this model)                                            | hr500k  |    0.927 |
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | hr500k  |    0.925 |
| XLM-R-SloBERTić                                                          | hr500k  |    0.923 |
| XLM-Roberta-Large                                                        | hr500k  |    0.919 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | hr500k  |    0.918 |
| XLM-Roberta-Base                                                         | hr500k  |    0.903 |

| system                                                                  | dataset  | F1 score |
|:------------------------------------------------------------------------|:---------|---------:|
| XLM-R-SloBERTić                                                          | ReLDI-hr |    0.812 |
| **XLM-R-BERTić** (this model)                                            | ReLDI-hr |    0.809 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | ReLDI-hr |    0.794 |
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | ReLDI-hr |    0.792 |
| XLM-Roberta-Large                                                        | ReLDI-hr |    0.791 |
| XLM-Roberta-Base                                                         | ReLDI-hr |    0.763 |

| system                                                                  | dataset    | F1 score |
|:------------------------------------------------------------------------|:-----------|---------:|
| XLM-R-SloBERTić                                                          | SETimes.SR |    0.949 |
| **XLM-R-BERTić** (this model)                                            | SETimes.SR |    0.940 |
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | SETimes.SR |    0.936 |
| XLM-Roberta-Large                                                        | SETimes.SR |    0.933 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | SETimes.SR |    0.922 |
| XLM-Roberta-Base                                                         | SETimes.SR |    0.914 |

| system                                                                  | dataset  | F1 score |
|:------------------------------------------------------------------------|:---------|---------:|
| **XLM-R-BERTić** (this model)                                            | ReLDI-sr |    0.841 |
| XLM-R-SloBERTić                                                          | ReLDI-sr |    0.824 |
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | ReLDI-sr |    0.798 |
| XLM-Roberta-Large                                                        | ReLDI-sr |    0.774 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | ReLDI-sr |    0.751 |
| XLM-Roberta-Base                                                         | ReLDI-sr |    0.734 |
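
The exact fine-tuning and evaluation protocol is not reproduced here. As a rough, hedged illustration only, a mean entity-level F1 of the kind reported above can be computed with `seqeval` over the predictions of several fine-tuning runs; all label sequences and the number of runs below are placeholders, not the actual benchmark data:

```python
# Illustrative sketch only: mean entity-level F1 over several fine-tuning runs,
# computed with seqeval on IOB2-labelled sequences. Placeholder data throughout.
from statistics import mean
from seqeval.metrics import f1_score

# one (gold, predicted) pair of labelled corpora per fine-tuning run
runs = [
    ([["B-PER", "I-PER", "O"], ["B-LOC", "O"]],
     [["B-PER", "I-PER", "O"], ["B-LOC", "O"]]),
    ([["B-PER", "I-PER", "O"], ["B-LOC", "O"]],
     [["B-PER", "O", "O"], ["B-LOC", "O"]]),
]

mean_f1 = mean(f1_score(gold, pred) for gold, pred in runs)
print(f"mean F1 over {len(runs)} runs: {mean_f1:.3f}")
```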

## Sentiment regression

The procedure is explained in greater detail in the dedicated benchmarking repository.

| system                                                                   | train dataset       | test dataset              | score |
|:--------------------------------------------------------------------------|:--------------------|:--------------------------|------:|
| [xlm-r-parlasent](https://huggingface.co/classla/xlm-r-parlasent)         | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.615 |
| [BERTić](https://huggingface.co/classla/bcms-bertic)                       | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.612 |
| XLM-R-SloBERTić                                                            | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.607 |
| XLM-Roberta-Large                                                          | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.605 |
| **XLM-R-BERTić** (this model)                                              | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.601 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)     | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.537 |
| XLM-Roberta-Base                                                           | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  0.500 |
| dummy (mean)                                                               | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl |  -0.12 |
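
As a hedged illustration of how such a regression head is typically queried (the checkpoint path below is hypothetical; the actual training and evaluation follow the benchmarking repository):

```python
# Minimal sketch, not the benchmarking pipeline: a sequence-classification head
# with num_labels=1 behaves as a regressor and emits one score per input.
# "path/to/finetuned-sentiment-regressor" is a hypothetical local checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "path/to/finetuned-sentiment-regressor"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=1)

inputs = tokenizer(["Ovo je odlična odluka."], return_tensors="pt", truncation=True)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one regression score per sentence
print(scores)
```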

## COPA

| system                                                                  | dataset | Accuracy score |
|:------------------------------------------------------------------------|:--------|---------------:|
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | Copa-SR |          0.689 |
| XLM-R-SloBERTić                                                          | Copa-SR |          0.665 |
| **XLM-R-BERTić** (this model)                                            | Copa-SR |          0.637 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | Copa-SR |          0.607 |
| XLM-Roberta-Base                                                         | Copa-SR |          0.573 |
| XLM-Roberta-Large                                                        | Copa-SR |          0.570 |

| system                                                                  | dataset | Accuracy score |
|:------------------------------------------------------------------------|:--------|---------------:|
| [BERTić](https://huggingface.co/classla/bcms-bertic)                     | Copa-HR |          0.669 |
| [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert)   | Copa-HR |          0.669 |
| **XLM-R-BERTić** (this model)                                            | Copa-HR |          0.635 |
| XLM-R-SloBERTić                                                          | Copa-HR |          0.628 |
| XLM-Roberta-Base                                                         | Copa-HR |          0.585 |
| XLM-Roberta-Large                                                        | Copa-HR |          0.571 |
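
COPA items pair a premise with two alternatives, and accuracy is the share of items where the preferred alternative matches the gold one. A hedged sketch of how a multiple-choice head is usually scored, with a hypothetical checkpoint path and a placeholder item (not the actual Copa-HR/Copa-SR data):

```python
# Illustrative sketch only: scoring one COPA-style item with a multiple-choice
# head. Checkpoint path, premise, and alternatives are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

checkpoint = "path/to/finetuned-copa-checkpoint"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMultipleChoice.from_pretrained(checkpoint)

premise = "Čovjek je otvorio prozor."                          # placeholder premise
choices = ["U sobi je bilo vruće.", "U sobi je bio mrak."]      # two alternatives
gold = 0

enc = tokenizer([premise, premise], choices, return_tensors="pt",
                padding=True, truncation=True)
# reshape to (batch=1, num_choices, seq_len) as expected by the multiple-choice head
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

correct = int(pred == gold)  # averaged over a full test set this becomes accuracy
print(pred, correct)
```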