# dv-wave

This is a second attempt at a Dhivehi language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).

Tokenization and pre-training Colab notebook: https://colab.research.google.com/drive/1ZJ3tU9MwyWj6UtQ-8G7QJKTn-hG1uQ9v?usp=sharing

Colab notebook using SimpleTransformers to classify news: https://colab.research.google.com/drive/1KnyQxRNWG_yVwms_x9MUAqFQVeMecTV7?usp=sharing
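SimpleTransformers' `ClassificationModel` expects training rows of text plus an integer label, so string category names from a CSV need to be encoded first. A minimal sketch of that step (the rows and category names below are placeholders, not taken from the notebook or dataset):

```python
# Encode string category labels as integer ids, the format expected by
# simpletransformers' ClassificationModel. Rows here are placeholders.
rows = [
    ("headline one", "sports"),
    ("headline two", "politics"),
    ("headline three", "sports"),
]
# Assign ids in sorted label order so the mapping is deterministic.
label2id = {lab: i for i, lab in enumerate(sorted({lab for _, lab in rows}))}
train = [(text, label2id[lab]) for text, lab in rows]
print(train)  # [('headline one', 1), ('headline two', 0), ('headline three', 1)]
```

The encoded rows would then go into a two-column DataFrame and be passed to something like `ClassificationModel("electra", "<model-path>", num_labels=len(label2id))` followed by `train_model(...)`; see the notebook for the actual setup.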

V1: similar performance to mBERT on the news classification task after fine-tuning for 3 epochs (52%)

V2: set the tokenizer's `do_lower_case=False` and `strip_accents=False` to preserve the vowel signs of Dhivehi
  dv-wave: 89% vs. mBERT: 52%
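The `strip_accents` setting matters because Thaana vowel signs (fili) are Unicode nonspacing marks, and BERT-style basic tokenization with `strip_accents=True` deletes exactly that character class. A stdlib sketch mimicking (not importing) the tokenizer's accent-stripping step:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # Mimics the accent-stripping step in BERT-style tokenizers:
    # NFD-normalize, then drop nonspacing marks (Unicode category Mn).
    text = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

word = "\u078b\u07a8\u0788\u07ac\u0780\u07a8"  # "Dhivehi" written in Thaana
stripped = strip_accents(word)
# Thaana vowel signs (U+07A6-U+07B0) are category Mn, so stripping removes
# them and leaves only the consonants -- hence strip_accents=False is required.
print(len(word), len(stripped))  # 6 3
```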

## Corpus

Trained on @Sofwath's 307MB corpus of Dhivehi text: https://github.com/Sofwath/DhivehiDatasets

That repo also contains the CSV for the news classification task.

[OSCAR](https://oscar-corpus.com/) was considered but has not been added to the pretraining data; as of
this writing, their web crawl contains 126MB of Dhivehi text (79MB deduplicated).

## Vocabulary

Included as `vocab.txt` in the upload; `vocab_size` is 29,874.
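The file follows the standard BERT/ELECTRA WordPiece format: one token per line, with the line index serving as the token id. A minimal reader (the sample tokens below are illustrative, not drawn from the actual file):

```python
import io

# One WordPiece token per line; the line index is the token id.
sample = "[PAD]\n[UNK]\n[CLS]\n[SEP]\n[MASK]\n\u078b\n##\u07a8\n"
vocab = {tok: i for i, tok in enumerate(io.StringIO(sample).read().splitlines())}
print(vocab["[CLS]"], len(vocab))  # 2 7
```

For the real file, replace the `io.StringIO(sample)` buffer with `open("vocab.txt", encoding="utf-8")`; the resulting dict should have 29,874 entries.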