---
license: apache-2.0
---

# Mixed-Distil-BERT

The model is pretrained on the OSCAR dataset for Bangla, English, and Hindi, and further pretrained on 560k code-mixed (Bangla-English-Hindi) examples. The base model is DistilBERT, and the intended use is for datasets that contain code-mixing of these languages.
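A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model ID is assumed from this repository's path (`md-nishat-008/Mixed-Distil-BERT`), and the code-mixed example sentence is illustrative only; adjust both to your setup.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed Hub model ID based on this repository; change if the hosted name differs.
model_id = "md-nishat-008/Mixed-Distil-BERT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Tokenize a code-mixed (Bangla-English-Hindi) sentence and run a forward pass.
inputs = tokenizer("ami tomake bollam that yeh kaam important hai", return_tensors="pt")
outputs = model(**inputs)

# Masked-LM logits: (batch_size, sequence_length, vocab_size)
print(outputs.logits.shape)
```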

To cite:

```bibtex
@article{raihan2023mixed,
  title={Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara},
  journal={arXiv preprint arXiv:2309.10272},
  year={2023}
}
```