# dv-wave

This is my third version of a Dhivehi language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
Tokenization and pre-training Colab notebook: https://colab.research.google.com/drive/1ZJ3tU9MwyWj6UtQ-8G7QJKTn-hG1uQ9v?usp=sharing

Using SimpleTransformers to classify news: https://colab.research.google.com/drive/1KnyQxRNWG_yVwms_x9MUAqFQVeMecTV7?usp=sharing
V1: similar performance to mBERT on a news classification task after fine-tuning for 3 epochs (52%)

V2: set the tokenizer's `do_lower_case=False` and `strip_accents=False` to preserve Dhivehi's vowel signs

> 8-topic news classification accuracy: 88.6%, vs. 51.8% for mBERT
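
The `strip_accents=False` setting matters because BERT-style tokenizers strip accents by NFD-normalizing text and then dropping Unicode combining marks, and Thaana vowel signs (fili) are combining marks. A minimal sketch of that stripping step, showing what it would do to Dhivehi text:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # Same logic as BERT-style basic tokenization with strip_accents=True:
    # NFD-normalize, then drop combining marks (Unicode category Mn).
    text = unicodedata.normalize("NFD", text)
    return "".join(c for c in text if unicodedata.category(c) != "Mn")

word = "ދިވެހި"  # "Dhivehi" in Thaana script: three consonants, each carrying a vowel sign
print(strip_accents(word))  # the vowel signs are gone, leaving only the consonant skeleton
```

With accent stripping left on, every Dhivehi word would collapse to its consonants, which likely contributed to V1's weaker downstream scores.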

V3: trained longer on a larger corpus (added OSCAR and Dhivehi Wikipedia)

> news classification accuracy: 91.7%

## Corpus

Trained on @Sofwath's 307 MB corpus of Dhivehi text: https://github.com/Sofwath/DhivehiDatasets

Plus [OSCAR](https://oscar-corpus.com/) and the 2020-10-01 dump of dv.wikipedia.org

## Vocabulary

Included as vocab.txt in the upload.
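
vocab.txt is a standard WordPiece vocabulary file: one token per line, with a token's id given by its line number. A minimal sketch of that mapping, using a toy stand-in vocabulary rather than the real (much larger) file:

```python
# Toy stand-in for the uploaded vocab.txt; special tokens first, as in BERT-style vocabs.
toy_vocab_lines = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]", "ދ", "##ި"]
with open("toy_vocab.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(toy_vocab_lines) + "\n")

# BERT-style tokenizers map each token to its line number in the file.
with open("toy_vocab.txt", encoding="utf-8") as f:
    token_to_id = {line.rstrip("\n"): i for i, line in enumerate(f)}

print(token_to_id["[CLS]"])  # → 2
```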
|