---
license: apache-2.0
---

Mixed-Distil-BERT is pretrained on the OSCAR dataset for Bangla, English, and Hindi. The base model is DistilBERT, and the intended use is for datasets that contain code-mixed text in these three languages.

To cite:

Raihan, M. N., Goswami, D., & Mahmud, A. (2023). Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi. arXiv:2309.10272. https://arxiv.org/abs/2309.10272
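
Below is a minimal usage sketch with the Hugging Face `transformers` library. Since the base model is DistilBERT, which is pretrained with a masked language modeling objective, the checkpoint can be queried through the fill-mask pipeline. The repository ID and the example sentence are placeholder assumptions; substitute this model's actual path on the Hub.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Placeholder repository ID (an assumption) -- replace with this
# model's actual path on the Hugging Face Hub.
model_id = "your-username/Mixed-Distil-BERT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Query the masked-language-modeling head on code-mixed text.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Illustrative code-mixed (Bangla/English) sentence; the mask token is
# taken from the tokenizer so the example does not hard-code it.
sentence = f"Ami kal {tokenizer.mask_token} jabo."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```

For downstream classification on code-mixed datasets, the same checkpoint can instead be loaded with `AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=...)` and fine-tuned as usual.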