
# MeBERT

MeBERT is a Marathi-English code-mixed BERT model trained on Roman-script text. It is a base BERT model fine-tuned on the [L3Cube-MeCorpus](https://github.com/l3cube-pune/MarathiNLP) dataset.

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.14030).
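A minimal sketch of loading the model with the Hugging Face `transformers` library. The hub identifier `l3cube-pune/me-bert` is an assumption here; check the model page for the exact name.

```python
# Minimal usage sketch with Hugging Face transformers.
# NOTE: the model id "l3cube-pune/me-bert" is an assumption; verify it on the Hub.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "l3cube-pune/me-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Fill a masked token in Roman-script Marathi-English code-mixed text.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
predictions = fill(f"mala he movie khup {tokenizer.mask_token} vatli")
for p in predictions:
    print(p["token_str"], p["score"])
```

The same checkpoint can also be loaded with `AutoModel` to obtain contextual embeddings for downstream fine-tuning.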

Other models in the MeBERT family:

- MeBERT
- MeRoBERTa

- MeBERT-Mixed
- MeBERT-Mixed-v2
- MeRoBERTa-Mixed

- MeLID-RoBERTa
- MeHate-RoBERTa
- MeSent-RoBERTa
- MeHate-BERT
- MeLID-BERT

Citation:

```bibtex
@article{chavan2023my,
  title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks},
  author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2306.14030},
  year={2023}
}
```