Tetun BERT model

A fine-tuned version of xlm-roberta-large, trained on Tetun data with a masked language modelling objective.
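
A minimal usage sketch with the `transformers` fill-mask pipeline. The repository id and the example Tetun sentence are placeholders, not taken from this card; XLM-RoBERTa tokenizers use `<mask>` as the mask token:

```python
from transformers import pipeline

# Hypothetical repository id; substitute the actual model path on the Hub.
fill_mask = pipeline("fill-mask", model="your-username/tetun-xlm-roberta-large")

# Example Tetun sentence: "Ha'u hadomi Timor-Leste." ("I love Timor-Leste."),
# with the last word masked. XLM-RoBERTa models use "<mask>" as the mask token.
for prediction in fill_mask("Ha'u hadomi <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```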

Tetun data used: the clean split of the MADLAD-400 corpus for Tetun (tet), roughly 40k documents.
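
For reference, a sketch of loading that split with the `datasets` library; the exact dataset id, config name, and split name are assumptions based on the allenai/MADLAD-400 release on the Hub, so check its dataset card before relying on them:

```python
from datasets import load_dataset

# Assumed id/config: the MADLAD-400 Hub release exposes per-language
# configs (here "tet") with "clean" and "noisy" splits.
tet = load_dataset("allenai/MADLAD-400", "tet", split="clean")
print(tet)  # roughly 40k Tetun documents in the clean split
```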

Trained for 10 epochs with hyperparameters from the MasakhaNER paper (learning rate 5e-5, etc.).
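
A sketch of a masked language modelling fine-tuning run under those settings. Only the base model, epoch count, and learning rate come from this card; the batch size, sequence length, and masking probability are standard defaults assumed for illustration:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")

# Assumed dataset id/config, as above; the text column name is also assumed.
tet = load_dataset("allenai/MADLAD-400", "tet", split="clean")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = tet.map(tokenize, batched=True, remove_columns=tet.column_names)

# 15% random masking, the standard MLM setting (an assumption, not stated here).
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="tetun-xlm-roberta-large",
    learning_rate=5e-5,                # from the MasakhaNER recipe
    num_train_epochs=10,               # as stated in this card
    per_device_train_batch_size=8,     # assumption; not stated in the card
    save_strategy="epoch",
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```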
