---
language: ti
widget:
- text: "ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"
---

# TiRoBERTa: RoBERTa Pretrained for the Tigrinya Language

We pretrain a RoBERTa base model for Tigrinya on a dataset of 40 million tokens, training for 40 epochs.

This repository contains the original pretrained Flax model, trained on a TPU v3-8, together with its corresponding PyTorch version.
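
Below is a minimal sketch of querying the model with the `transformers` fill-mask pipeline, using the same masked sentence as the widget above. The model identifier is a placeholder for this repository's Hub id, not a name given in this card.

```python
from transformers import pipeline

# Placeholder: replace with this repository's id on the Hugging Face Hub.
fill_mask = pipeline("fill-mask", model="<this-repo-id>")

# The masked Tigrinya sentence from the widget above.
for prediction in fill_mask("ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"):
    print(prediction["token_str"], prediction["score"])
```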


## Hyperparameters

The hyperparameters corresponding to the model size mentioned above are as follows:

| Model Size | L  | AH | HS  | FFN  | P    | Seq  |
|------------|----|----|-----|------|------|------|
| BASE       | 12 | 12 | 768 | 3072 | 125M | 512  |

(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
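
For reference, a hedged sketch of how the BASE row maps onto a `transformers` `RobertaConfig`. The vocabulary size is not stated in this card, and the 514 position embeddings reflect RoBERTa's usual two-position offset over the 512-token sequence length; both are assumptions here.

```python
from transformers import RobertaConfig

config = RobertaConfig(
    num_hidden_layers=12,         # L
    num_attention_heads=12,       # AH
    hidden_size=768,              # HS
    intermediate_size=3072,       # FFN
    max_position_embeddings=514,  # Seq 512 plus RoBERTa's customary 2-position offset (assumption)
    # vocab_size is not given in this card; the library default is used here.
)
```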

### Framework versions

- Transformers 4.12.0.dev0
- PyTorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3


## Citation

If you use this model in your product or research, please cite as follows:

```
@article{Fitsum2021TiPLMs,
  author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
  title={Monolingual Pre-trained Language Models for Tigrinya},
  year={2021},
  publisher={WiNLP 2021 at EMNLP 2021}
}
```