
MeBERT-Mixed

MeBERT-Mixed is a Marathi-English code-mixed BERT model trained on Roman and Devanagari text. It is an mBERT model fine-tuned on the full L3Cube-MeCorpus.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.14030).
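
A minimal usage sketch with the Hugging Face `transformers` library is shown below. The Hub model ID `l3cube-pune/me-bert-mixed` and the example sentence are assumptions for illustration, not taken from the paper; substitute the actual repository ID if it differs.

```python
# Minimal sketch: load MeBERT-Mixed and run masked-token prediction.
# Assumption: the model is hosted on the Hub as "l3cube-pune/me-bert-mixed".
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "l3cube-pune/me-bert-mixed"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill-mask on a Roman-script Marathi-English code-mixed sentence (illustrative).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"mala he pustak khup {tokenizer.mask_token} vatla"))
```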

Other models from the MeBERT family:
MeBERT
MeRoBERTa

MeBERT-Mixed
MeBERT-Mixed-v2
MeRoBERTa-Mixed

MeLID-RoBERTa
MeHate-RoBERTa
MeSent-RoBERTa
MeHate-BERT
MeLID-BERT

Citing:

@article{chavan2023my,
  title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks},
  author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2306.14030},
  year={2023}
}