---
inference: false
language: pt
datasets:
- ruanchaves/hatebr
---

# BERTimbau base for Offensive Language Detection

This is the [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model finetuned for
Offensive Language Detection with the [HateBR](https://huggingface.co/ruanchaves/hatebr) dataset.
This model is suitable for Portuguese.

- Git Repo: [Evaluation of Portuguese Language Models](https://github.com/ruanchaves/eplm).
- Demo: [Hugging Face Space: Offensive Language Detection](https://ruanchaves-portuguese-offensive-language-de-d4d0507.hf.space)

### **Labels**:

* 0 : The text is not offensive.
* 1 : The text is offensive.
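
For a quick check of the label mapping, the model can also be loaded into the `transformers` text-classification pipeline. The snippet below is a minimal sketch, not part of the original evaluation code; the exact label string printed depends on the checkpoint's `id2label` configuration.

```python
from transformers import pipeline

# Minimal sketch (not from the original repository): wrap the checkpoint in a
# text-classification pipeline and score one Portuguese sentence.
classifier = pipeline(
    "text-classification",
    model="ruanchaves/bert-base-portuguese-cased-hatebr",
)

# The label name in the output follows the checkpoint's id2label mapping.
print(classifier("Quem não deve não teme!!"))
```
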
## Full classification example

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import torch
from scipy.special import softmax

model_name = "ruanchaves/bert-base-portuguese-cased-hatebr"
s1 = "Quem não deve não teme!!"

# Load the fine-tuned model, its tokenizer, and its configuration (for id2label).
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

# Tokenize the sentence and run a forward pass without gradient tracking.
model_input = tokenizer([s1], padding=True, return_tensors="pt")
with torch.no_grad():
    output = model(**model_input)
    scores = output[0][0].detach().numpy()
    # Convert the logits into probabilities.
    scores = softmax(scores)

# Print the labels ranked from the most to the least likely.
ranking = np.argsort(scores)[::-1]
for i in range(scores.shape[0]):
    label = config.id2label[ranking[i]]
    score = scores[ranking[i]]
    print(f"{i+1}) Label: {label} Score: {np.round(float(score), 4)}")
```
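
The script prints both labels ranked by their softmax score; the top-ranked label is the model's prediction for the input sentence.
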
## Licensing Information

The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).

## Citation

Our research is ongoing, and we are currently working on describing our experiments in a paper, which will be published soon.
In the meantime, if you would like to cite our work or models before the publication of the paper, please cite our [GitHub repository](https://github.com/ruanchaves/eplm):

```
@software{Chaves_Rodrigues_eplm_2023,
  author = {Chaves Rodrigues, Ruan and Tanti, Marc and Agerri, Rodrigo},
  doi = {10.5281/zenodo.7781848},
  month = {3},
  title = {{Evaluation of Portuguese Language Models}},
  url = {https://github.com/ruanchaves/eplm},
  version = {1.0.0},
  year = {2023}
}
```