5roop committed
Commit 57a5384
1 Parent(s): 18231e4

Update README.md

Files changed (1)
  1. README.md +57 -55
README.md CHANGED
@@ -20,41 +20,41 @@ In all cases, this model was finetuned for specific downstream tasks.
  ## NER
  Mean F1 scores were used to evaluate performance.

- | system | dataset | F1 score |
- |:-----------------------------------------------------------------------|:----|------:|
- | **XLM-R-BERTić** | hr500k | 0.927 |
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | hr500k | 0.925 |
- | XLM-R-SloBERTić | hr500k | 0.923 |
- | XLM-Roberta-Large |hr500k | 0.919 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | hr500k | 0.918 |
- | XLM-Roberta-Base | hr500k | 0.903 |
-
- | system | dataset | F1 score |
- |:-----------------------------------------------------------------------|:----|------:|
- | XLM-R-SloBERTić | ReLDI-hr | 0.812 |
- | **XLM-R-BERTić** | ReLDI-hr | 0.809 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-hr | 0.794 |
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-hr | 0.792 |
- | XLM-Roberta-Large |ReLDI-hr | 0.791 |
- | XLM-Roberta-Base | ReLDI-hr | 0.763 |
-
- | system | dataset | F1 score |
- |:-----------------------------------------------------------------------|:----|------:|
- | XLM-R-SloBERTić | SETimes.SR | 0.949 |
- | **XLM-R-BERTić** | SETimes.SR | 0.940 |
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | SETimes.SR | 0.936 |
- | XLM-Roberta-Large |SETimes.SR | 0.933 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | SETimes.SR | 0.922 |
- | XLM-Roberta-Base | SETimes.SR | 0.914 |
-
- | system | dataset | F1 score |
- |:-----------------------------------------------------------------------|:----|------:|
- | **XLM-R-BERTić** | ReLDI-sr | 0.841 |
- | XLM-R-SloBERTić | ReLDI-sr | 0.824 |
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-sr | 0.798 |
- | XLM-Roberta-Large |ReLDI-sr | 0.774 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-sr | 0.751 |
- | XLM-Roberta-Base | ReLDI-sr | 0.734 |
+ | system | dataset | F1 score |
+ |:-----------------------------------------------------------------------|:--------|---------:|
+ | **XLM-R-BERTić** (this model) | hr500k | 0.927 |
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | hr500k | 0.925 |
+ | XLM-R-SloBERTić | hr500k | 0.923 |
+ | XLM-Roberta-Large | hr500k | 0.919 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | hr500k | 0.918 |
+ | XLM-Roberta-Base | hr500k | 0.903 |
+
+ | system | dataset | F1 score |
+ |:-----------------------------------------------------------------------|:---------|---------:|
+ | XLM-R-SloBERTić | ReLDI-hr | 0.812 |
+ | **XLM-R-BERTić** (this model) | ReLDI-hr | 0.809 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-hr | 0.794 |
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-hr | 0.792 |
+ | XLM-Roberta-Large | ReLDI-hr | 0.791 |
+ | XLM-Roberta-Base | ReLDI-hr | 0.763 |
+
+ | system | dataset | F1 score |
+ |:-----------------------------------------------------------------------|:-----------|---------:|
+ | XLM-R-SloBERTić | SETimes.SR | 0.949 |
+ | **XLM-R-BERTić** (this model) | SETimes.SR | 0.940 |
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | SETimes.SR | 0.936 |
+ | XLM-Roberta-Large | SETimes.SR | 0.933 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | SETimes.SR | 0.922 |
+ | XLM-Roberta-Base | SETimes.SR | 0.914 |
+
+ | system | dataset | F1 score |
+ |:-----------------------------------------------------------------------|:---------|---------:|
+ | **XLM-R-BERTić** (this model) | ReLDI-sr | 0.841 |
+ | XLM-R-SloBERTić | ReLDI-sr | 0.824 |
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | ReLDI-sr | 0.798 |
+ | XLM-Roberta-Large | ReLDI-sr | 0.774 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ReLDI-sr | 0.751 |
+ | XLM-Roberta-Base | ReLDI-sr | 0.734 |

  ## Sentiment regression

@@ -65,33 +65,35 @@ The procedure is explained in greater detail in the dedicated [benchmarking repo
  |:-----------------------------------------------------------------------|:--------------------|:-------------------------|------:|
  | [xlm-r-parlasent](https://huggingface.co/classla/xlm-r-parlasent) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.615 |
  | [BERTić](https://huggingface.co/classla/bcms-bertic) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.612 |
- | XLM-R-SloBERTić | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.607 |
+ | XLM-R-SloBERTić | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.607 |
  | XLM-Roberta-Large | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.605 |
- | **XLM-R-BERTić** | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.601 |
+ | **XLM-R-BERTić** (this model) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.601 |
  | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.537 |
  | XLM-Roberta-Base | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | 0.500 |
  | dummy (mean) | ParlaSent_BCS.jsonl | ParlaSent_BCS_test.jsonl | -0.12 |
+
+
  ## COPA


- | system | dataset | Accuracy score |
- |:-----------------------------------------------------------------------|:----|------:|
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-SR | 0.689 |
- | XLM-R-SloBERTić | Copa-SR | 0.665 |
- | **XLM-R-BERTić** | Copa-SR | 0.637 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-SR | 0.607 |
- | XLM-Roberta-Base | Copa-SR | 0.573 |
- | XLM-Roberta-Large |Copa-SR | 0.570 |
-
-
- | system | dataset | Accuracy score |
- |:-----------------------------------------------------------------------|:----|------:|
- | [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-HR | 0.669 |
- | XLM-R-SloBERTić | Copa-HR | 0.628 |
- | **XLM-R-BERTić** | Copa-HR | 0.635 |
- | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-HR | 0.669 |
- | XLM-Roberta-Base | Copa-HR | 0.585 |
- | XLM-Roberta-Large |Copa-HR | 0.571 |
+ | system | dataset | Accuracy score |
+ |:-----------------------------------------------------------------------|:--------|---------------:|
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-SR | 0.689 |
+ | XLM-R-SloBERTić | Copa-SR | 0.665 |
+ | **XLM-R-BERTić** (this model) | Copa-SR | 0.637 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-SR | 0.607 |
+ | XLM-Roberta-Base | Copa-SR | 0.573 |
+ | XLM-Roberta-Large | Copa-SR | 0.570 |
+
+
+ | system | dataset | Accuracy score |
+ |:-----------------------------------------------------------------------|:--------|---------------:|
+ | [BERTić](https://huggingface.co/classla/bcms-bertic) | Copa-HR | 0.669 |
+ | XLM-R-SloBERTić | Copa-HR | 0.628 |
+ | **XLM-R-BERTić** (this model) | Copa-HR | 0.635 |
+ | [crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) | Copa-HR | 0.669 |
+ | XLM-Roberta-Base | Copa-HR | 0.585 |
+ | XLM-Roberta-Large | Copa-HR | 0.571 |


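The NER tables in the diff report mean F1, which in this kind of benchmarking setup presumably means span-level F1 averaged over repeated fine-tuning runs. A minimal, hypothetical sketch of that computation using seqeval (the tag sequences and run count below are invented for illustration and are not the actual benchmarking code):

```python
# Illustrative only: mean span-level F1 over several hypothetical NER runs,
# computed with seqeval; not the code behind the tables in this commit.
from statistics import mean

from seqeval.metrics import f1_score

# Invented (gold, predicted) BIO tag sequences for three independent runs.
runs = [
    ([["B-PER", "I-PER", "O", "B-LOC"]], [["B-PER", "I-PER", "O", "B-LOC"]]),
    ([["B-PER", "I-PER", "O", "B-LOC"]], [["B-PER", "O", "O", "B-LOC"]]),
    ([["B-PER", "I-PER", "O", "B-LOC"]], [["B-PER", "I-PER", "O", "O"]]),
]

# Average the per-run span-level F1 scores, as a "mean F1" figure would be.
mean_f1 = mean(f1_score(gold, pred) for gold, pred in runs)
print(f"mean F1 over {len(runs)} runs: {mean_f1:.3f}")
```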