---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---

A MahaBERT (l3cube-pune/marathi-bert-v2) model fine-tuned on 1 million randomly sampled Marathi tweets.

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).

Released as part of the MarathiNLP project: https://github.com/l3cube-pune/MarathiNLP

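Since this is a standard BERT-style masked language model, it can be loaded with the Hugging Face `transformers` library. A minimal fill-mask sketch, shown with the base model id `l3cube-pune/marathi-bert-v2` named in this card (substitute this fine-tuned model's own Hub id; the example sentence is illustrative):

```python
# Minimal fill-mask sketch using the Hugging Face `transformers` pipeline.
# NOTE: the repository id below is the base model named in this card;
# swap in this fine-tuned model's own Hub id when using it.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-bert-v2")

# BERT-style models predict the [MASK] token from surrounding context;
# each prediction carries the filled token and its probability score.
for prediction in fill_mask("मी शाळेत [MASK] आहे."):
    print(prediction["token_str"], round(prediction["score"], 3))
```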
```bibtex
@article{gokhale2022spread,
  title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
  author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2210.04267},
  year={2022}
}
```