bsc-temu committed • 046dfb6 • Parent(s): 00d1b7c
Update README.md

README.md CHANGED
@@ -59,11 +59,15 @@ The **roberta-base-ca-cased-te** is a Textual Entailment (TE) model for the Cata
 We used the TE dataset in Catalan called [TECA](https://huggingface.co/datasets/projecte-aina/viquiquad) for training and evaluation.
 
 ## Evaluation and results
-
+We evaluated the roberta-base-ca-cased-te on the TECA test set against standard multilingual and monolingual baselines:
 
-
+| Model | TECA (accuracy) |
 | ------------|:----|
-| BERTa |
+| BERTa | 79.12 |
+| mBERT | x |
+| XLM-RoBERTa | x |
+| WikiBERT-ca | x |
+
 For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/berta).
 
 ## Citing