---

language:

- ca

license: ???

tags:

- "catalan"

- "textual entailment"

- "teca"

- "CaText"

- "Catalan Textual Corpus"

datasets:

- "projecte-aina/teca"  

metrics:

- "accuracy"


model-index:
- name: roberta-base-ca-cased-te
  results:
  - task:
      type: text-classification
    dataset:
      type: projecte-aina/teca
      name: teca
    metrics:
      - type: accuracy
        value: 0.7912139892578125
        
widget:

- text: "<s> M'agrades.</s></s> T'estimo.</s>" 

- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."

- text: "El llibre va caure per la finestra. El llibre va sortir volant."

- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."

---

# Catalan BERTa (RoBERTa-base) fine-tuned for Textual Entailment

**roberta-base-ca-cased-te** is a Textual Entailment (TE) model for Catalan, fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the BERTa model card for more details).
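
As a quick usage sketch (not part of the official release scripts), the snippet below scores a premise-hypothesis pair with the Hugging Face Transformers sequence-classification API. The Hub identifier used here is an assumption; adjust it to the actual repository namespace where the checkpoint is published.

```python
# Minimal usage sketch. The model identifier below is assumed; change it to the
# namespace under which the checkpoint is actually hosted on the Hub.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "projecte-aina/roberta-base-ca-cased-te"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "El llibre va caure per la finestra."
hypothesis = "El llibre va sortir volant."

# Encoding the pair adds the </s></s> separator shown in the widget examples above.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, probs.tolist())
```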

## Datasets
We used the Catalan Textual Entailment dataset [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.

## Evaluation and results
We evaluated the **roberta-base-ca-cased-te** model on the TECA test set against standard multilingual and monolingual baselines:

| Model        | TECA (accuracy) |
| ------------ | --------------- |
| BERTa        | 79.12           |
| mBERT        | x               |
| XLM-RoBERTa  | x               |
| WikiBERT-ca  | x               |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/berta).
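
As a rough illustration of how such an evaluation can be reproduced, the sketch below computes accuracy over the TECA test split with the `datasets` and `transformers` libraries. The Hub identifier, split name, and column names (`premise`, `hypothesis`, `label`) are assumptions; check the dataset card and the repository scripts for the exact setup.

```python
# Evaluation sketch: accuracy on the TECA test split.
# Assumes the dataset exposes "premise", "hypothesis" and "label" columns and a
# public "test" split; verify against the dataset card before running.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "projecte-aina/roberta-base-ca-cased-te"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

test = load_dataset("projecte-aina/teca", split="test")

correct = 0
for example in test:
    inputs = tokenizer(
        example["premise"], example["hypothesis"],
        return_tensors="pt", truncation=True,
    )
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == example["label"])

print(f"accuracy = {correct / len(test):.4f}")
```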

## Citing 
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
## Funding
TODO
## Disclaimer
TODO