
Danish ELECTRA for hate speech (offensive language) detection

The ELECTRA Offensive model classifies Danish text as offensive or not offensive. It is based on the pretrained Danish Ælæctra model.

See the DaNLP documentation for more details.

Here is how to use the model:

from transformers import ElectraTokenizer, ElectraForSequenceClassification

# Load the fine-tuned classification model and its tokenizer from the Hugging Face Hub
model = ElectraForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-small")
tokenizer = ElectraTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-small")
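
A minimal classification sketch building on the snippet above; the example sentence is purely illustrative, and the human-readable label names are whatever the model's own id2label configuration defines:

import torch

# Tokenize an example Danish sentence (illustrative only)
text = "Dette er et eksempel på en dansk sætning."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Run the model and pick the highest-scoring class
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])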

Training data

The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
