# Bangla-Electra

This is a second attempt at a Bangla/Bengali language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).

Tokenization and pre-training Colab: https://colab.research.google.com/drive/1gpwHvXAnNQaqcu-YNx1kafEVxz07g2jL
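
The Colab covers vocabulary building as well as pre-training. As a rough sketch of the tokenization step, a BERT-style WordPiece vocabulary can be trained with the Hugging Face `tokenizers` library; the file names below are placeholders, and the vocab size matches the figure in the Vocabulary section:

```python
# Sketch: training a WordPiece vocab for ELECTRA pre-training.
# File names are placeholders; the Colab linked above has the real pipeline.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(
    files=["oscar_bn.txt", "bnwiki.txt"],  # hypothetical corpus text files
    vocab_size=29898,                      # matches vocab.txt (see below)
)
tokenizer.save_model(".")  # writes vocab.txt in the format ELECTRA expects
```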

- V1 - 120,000 steps
- V2 - 190,000 steps

## Benchmarks

Classification with SimpleTransformers: https://colab.research.google.com/drive/1vltPI81atzRvlALv4eCvEB0KdFoEaCOb
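
A minimal fine-tuning sketch with SimpleTransformers follows; the Hugging Face model ID and the six-label setup are assumptions for illustration, and the linked Colab has the full configuration:

```python
# Sketch: fine-tuning Bangla-Electra for text classification.
# The model ID and label count are assumptions; adapt them to your dataset.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Placeholder training data: (text, label) pairs.
train_df = pd.DataFrame(
    [["sample Bangla news text", 0]],
    columns=["text", "labels"],
)

model = ClassificationModel(
    "electra",
    "monsoon-nlp/bangla-electra",  # assumed model ID on Hugging Face
    num_labels=6,
    use_cuda=False,
)
model.train_model(train_df)
predictions, raw_outputs = model.predict(["another Bangla news text"])
```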

On Soham Chatterjee's [news classification task](https://github.com/soham96/Bangla2Vec):

| Model | Accuracy |
| --- | --- |
| Random | 16.7% |
| mBERT | 72.3% |
| Bangla-Electra | 82.3% |

Performance is similar to mBERT on some of the tasks and configurations described in https://arxiv.org/abs/2004.07807.

## Corpus

Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.8GB) and the 1 July 2020 dump of bn.wikipedia.org (414MB).
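
A comparable Bangla slice of OSCAR can be pulled through the Hugging Face `datasets` library; this is one way to obtain similar data, not necessarily how the training corpus was exported:

```python
# Sketch: loading the deduplicated Bangla portion of OSCAR.
from datasets import load_dataset

oscar_bn = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train")
print(oscar_bn[0]["text"][:200])  # preview the first document
```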

## Vocabulary

Included as vocab.txt in the upload; vocab_size is 29898.
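
For downstream use, the released tokenizer (built from vocab.txt) and encoder weights load through Hugging Face Transformers; the model ID below is an assumption, so substitute the actual repo name:

```python
# Sketch: loading the tokenizer and model with Transformers.
# The model ID is an assumption; replace it with the actual repo name.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/bangla-electra")
model = AutoModel.from_pretrained("monsoon-nlp/bangla-electra")

# "আমি বাংলায় গান গাই" = "I sing in Bangla"
inputs = tokenizer("আমি বাংলায় গান গাই", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```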