Added fasttext comparison
README.md
CHANGED
@@ -24,13 +24,14 @@ model_args = {

## Performance

-The same pipeline was run with two other models and
+The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 scores were recorded for each of the 6 fine-tuning sessions and analyzed post festum.

| model | average accuracy | average macro F1 |
|---|---|---|
|sloberta-frenk-hate|0.7785|0.7764|
|EMBEDDIA/crosloengual-bert|0.7616|0.7585|
|xlm-roberta-base|0.686|0.6827|
+|fasttext|0.669|0.659|

From the recorded accuracies and macro F1 scores, p-values were also calculated:
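For readers who want to reproduce this kind of comparison, below is a minimal sketch of how per-session metrics and a p-value between two models could be computed. Everything in it is an assumption for illustration: the README does not show the evaluation code, the score lists are made up, and the two-sample t-test is just one plausible choice of significance test.

```python
# Hedged sketch: per-session accuracy / macro F1 and a p-value
# between two models' scores. All data below is hypothetical;
# the actual evaluation code and significance test used for the
# numbers in the table above are not shown in this README.
from scipy.stats import ttest_ind
from sklearn.metrics import accuracy_score, f1_score


def score_session(y_true, y_pred):
    """Accuracy and macro F1 for one fine-tuning session."""
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred, average="macro")


# Hypothetical macro F1 scores over the 6 fine-tuning sessions:
sloberta_f1 = [0.779, 0.775, 0.781, 0.777, 0.776, 0.780]
fasttext_f1 = [0.660, 0.655, 0.662, 0.658, 0.661, 0.657]

# Two-sample t-test between the two score distributions,
# one plausible way to obtain the p-values mentioned above.
stat, p_value = ttest_ind(sloberta_f1, fasttext_f1)
print(f"p-value: {p_value:.3e}")
```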