---
model-index:
- name: twitter-roberta-base-hate-latest
  results: []
pipeline_tag: text-classification
---
# cardiffnlp/twitter-roberta-base-hate-latest
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m for binary hate-speech classification. A combination of 13 different English-language hate-speech datasets was used to fine-tune the model.
The following metrics are achieved:
Dataset | Accuracy | Macro-F1 | Weighted-F1 |
---|---|---|---|
hatEval, SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter | 0.5848 | 0.5657 | 0.5514 |
ucberkeley-dlab/measuring-hate-speech | 0.8706 | 0.8531 | 0.8701 |
Detecting East Asian Prejudice on Social Media | 0.9276 | 0.8935 | 0.9273 |
Call me sexist, but | 0.9033 | 0.6288 | 0.8852 |
Predicting the Type and Target of Offensive Posts in Social Media | 0.9075 | 0.5984 | 0.8935 |
HateXplain | 0.9594 | 0.8024 | 0.9600 |
Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior | 0.6817 | 0.5939 | 0.6233 |
Twitter Sentiment Analysis | 0.9808 | 0.9258 | 0.9807 |
Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages | 0.8665 | 0.5562 | 0.8343 |
Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter | 0.9465 | 0.8557 | 0.9440 |
Automated Hate Speech Detection and the Problem of Offensive Language | 0.9116 | 0.8797 | 0.9100 |
Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter | 0.8378 | 0.8338 | 0.8385 |
Multilingual and Multi-Aspect Hate Speech Analysis | 0.9655 | 0.4912 | 0.9824 |
Overall | 0.8827 | 0.8383 | 0.8842 |
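
The accuracy, macro-F1, and weighted-F1 figures above can be computed with standard scikit-learn metric functions. The snippet below is a minimal sketch with made-up labels to illustrate the metric definitions; it is not the original evaluation script.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions (1 = HATE, 0 = NOT-HATE);
# the reported numbers come from each dataset's own evaluation data.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Macro-F1:   ", f1_score(y_true, y_pred, average="macro"))
print("Weighted-F1:", f1_score(y_true, y_pred, average="weighted"))
```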
## Usage
Install tweetnlp via pip.

```bash
pip install tweetnlp
```
Load the model in Python.

```python
import tweetnlp

model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-hate-latest")
model.predict('I love everybody :)')
# >> {'label': 'NOT-HATE'}
```
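
The checkpoint can also be loaded without tweetnlp via the Hugging Face transformers text-classification pipeline. This is a minimal sketch; the exact label names and scores returned depend on the id2label mapping in the model config.

```python
from transformers import pipeline

# Text-classification pipeline backed by the same checkpoint
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-hate-latest",
)

print(classifier("I love everybody :)"))
# Illustrative output, e.g. [{'label': 'NOT-HATE', 'score': 0.98}]
```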