Upload 5 files
Added the model files and model card
- README.md +30 -0
- config.json +30 -0
- pytorch_model.bin +3 -0
- sentencepiece.bpe.model +3 -0
- tokenizer.json +0 -0
README.md
ADDED
@@ -0,0 +1,30 @@
+---
+widget:
+
+- text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."
+language:
+- en
+license: mit
+---
+
+# Multilingual Hate Speech Classifier for Social Media Content
+
+A multilingual model for hate speech classification of social media content. The model is based on pre-trained multilingual representations from the XLM-T model (https://arxiv.org/abs/2104.12250) and was jointly fine-tuned on five languages, namely Arabic, Croatian, English, German, and Slovenian. The test results on these five languages in terms of F1 score are as follows:
+
+| Language  |   F1   |
+|-----------|:------:|
+| Arabic    | 0.8704 |
+| Croatian  | 0.7226 |
+| English   | 0.7851 |
+| German    | 0.7826 |
+| Slovenian | 0.7596 |
+
+## Tokenizer
+
+During training, the text was preprocessed with the original XLM-T tokenizer. The pretrained tokenizer files are included in this repository. We suggest using the same tokenizer for inference.
+
+## Model output
+
+The model classifies each input into one of two distinct classes:
+* 0 - not-offensive
+* 1 - offensive
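The tokenizer and two-class output described in the model card can be exercised with a short `transformers` snippet. This is a minimal sketch, not part of the commit: `./model` is a placeholder for a local clone of this repository (the Hub repository id is not shown in this diff), and the label mapping follows the model card above.

```python
# Minimal inference sketch; "./model" is a hypothetical local clone of this repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "./model"  # assumption: path to the files added in this commit
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
model.eval()

text = "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Class ids per the model card: 0 = not-offensive, 1 = offensive.
label = logits.argmax(dim=-1).item()
print("offensive" if label == 1 else "not-offensive")
```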
config.json
ADDED
@@ -0,0 +1,30 @@
+{
+  "_name_or_path": "/home/andrazp/cs_hs_robacofi/src/twitter-xlm-roberta-base/",
+  "architectures": [
+    "XLMRobertaForSequenceClassification"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "bos_token_id": 0,
+  "classifier_dropout": null,
+  "eos_token_id": 2,
+  "gradient_checkpointing": false,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 514,
+  "model_type": "xlm-roberta",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "output_past": true,
+  "pad_token_id": 1,
+  "position_embedding_type": "absolute",
+  "problem_type": "single_label_classification",
+  "torch_dtype": "float32",
+  "transformers_version": "4.18.0",
+  "type_vocab_size": 1,
+  "use_cache": true,
+  "vocab_size": 250002
+}
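The config above describes a base-size XLM-RoBERTa encoder (12 layers, hidden size 768) with a single-label classification head. As a sketch (not part of the commit), it can be loaded and checked with `transformers`; `./model` is again a hypothetical local clone.

```python
# Sketch: read config.json above and check the fields that define the architecture.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./model")  # assumption: local clone path
assert config.model_type == "xlm-roberta"
assert config.num_hidden_layers == 12 and config.hidden_size == 768  # base-size encoder
# No id2label is listed, so transformers defaults to 2 labels, matching the
# binary not-offensive/offensive scheme in the model card.
assert config.num_labels == 2
print(config.architectures)  # ['XLMRobertaForSequenceClassification']
```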
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ae9b018c2fedcad4c5df5776701275dc996d5b7fea7b7e09c915cb8a75d5099
+size 1112267117
sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+size 5069051
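The two binary files above are stored as Git LFS pointers: the repository tracks only the `oid` (a SHA-256 digest) and `size`, and `git lfs pull` fetches the actual content. A downloaded file can be checked against its pointer, as in this sketch (the path is an assumption; the digest and size come from the pytorch_model.bin pointer, and the same check applies to sentencepiece.bpe.model with its own values):

```python
# Sketch: verify a file fetched via `git lfs pull` against its LFS pointer.
import hashlib
import os

path = "pytorch_model.bin"  # assumption: file fetched into the working tree
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == 1112267117  # "size" line of the pointer
assert digest.hexdigest() == "5ae9b018c2fedcad4c5df5776701275dc996d5b7fea7b7e09c915cb8a75d5099"
print("pytorch_model.bin matches its LFS pointer")
```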
tokenizer.json
ADDED
The diff for this file is too large to render.