---
language:
- en
- ha
- yo
- ig
- pcm
pipeline_tag: text-classification
datasets:
- worldbank/NaijaHate
---
# NaijaXLM-T-base Hate
This is a [NaijaXLM-T base](https://huggingface.co/manueltonneau/naija-xlm-twitter-base) model finetuned on Nigerian tweets annotated for hate speech detection. The model is described and evaluated in the [reference paper](https://aclanthology.org/2024.acl-long.488/) and was developed by [@pvcastro](https://huggingface.co/pvcastro) and [@manueltonneau](https://huggingface.co/manueltonneau).
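The model can be loaded with the Transformers `pipeline` API. The sketch below assumes the Hub repository id is `manueltonneau/naijaxlm-t-base-hate`; adjust it if this repository is named differently.

```python
from transformers import pipeline

# Assumed Hub repository id for this model card; replace with the actual id if different.
MODEL_ID = "manueltonneau/naijaxlm-t-base-hate"

def classify(texts):
    """Score a list of tweets for hate speech with this model."""
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(texts)

if __name__ == "__main__":
    print(classify(["How body? I dey fine."]))
```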
## Model Details
### Model Description
- **Model type:** xlm-roberta
- **Language(s) (NLP):** (Nigerian) English, Nigerian Pidgin, Hausa, Yoruba, Igbo
- **Finetuned from model:** `worldbank/naija-xlm-twitter-base`
### Model Sources
- **Repository:** https://github.com/worldbank/NaijaHate
- **Paper:** https://aclanthology.org/2024.acl-long.488/
## Training Details
### Training Data
This model was finetuned on the stratified (`dataset=='stratified'`) and active learning (`dataset=='al'`) subsets of [NaijaHate](https://huggingface.co/datasets/manueltonneau/NaijaHate).
### Training Procedure and Evaluation
We perform a 90-10 train-test split and conduct 5-fold cross-validation with 5 learning rates ranging from 1e-5 to 5e-5. Each fold is trained with 3 different seeds. The train-test split is repeated for 10 different seeds, and the evaluation metrics are averaged across these 10 seeds.
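The split-and-sweep procedure above can be sketched as follows. This is a minimal stand-alone illustration with placeholder ids, not the authors' training code; the finetuning step itself is elided.

```python
import random

n = 1000                       # placeholder number of annotated tweets
ids = list(range(n))
learning_rates = [1e-5, 2e-5, 3e-5, 4e-5, 5e-5]  # 5 learning rates from 1e-5 to 5e-5

for split_seed in range(10):   # the 90-10 split is repeated for 10 seeds
    rng = random.Random(split_seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    holdout, train = shuffled[: n // 10], shuffled[n // 10 :]  # 10% holdout, 90% train
    fold_size = len(train) // 5
    folds = [train[i * fold_size : (i + 1) * fold_size] for i in range(5)]  # 5-fold CV
    for val_fold in folds:
        for lr in learning_rates:
            for train_seed in (0, 1, 2):  # each fold is trained with 3 seeds
                pass  # finetune here and score on val_fold to select the best lr
    # the selected configuration is evaluated on `holdout`;
    # metrics are then averaged over the 10 split seeds
```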
We evaluate model performance on three datasets: the holdout sample from the train-test splits, as well as the top-scored sample (`dataset=='eval'`) and the random sample (`dataset=='random'`) from [NaijaHate](https://huggingface.co/datasets/manueltonneau/NaijaHate).
| Model | Holdout | Top-scored | Random |
|---------------|--------------------|--------------------|-------------------|
| GPT-3.5, ZSL | - | 60.3 ± 2.7 | 3.1 ± 1.2 |
| Perspective API | - | 60.2 ± 3.5 | 4.3 ± 2.6 |
| XLM-T | *84.2 ± 0.6* | 51.8 ± 0.7 | 0.6 ± 0.1 |
| XLM-T | *62.0 ± 2.3* | 68.9 ± 0.8 | 3.3 ± 0.6 |
| XLM-T | *70.5 ± 3.7* | 63.7 ± 1.1 | 1.9 ± 0.5 |
| DeBERTaV3 | **82.3 ± 2.3** | 85.3 ± 0.8 | **29.7 ± 4.1** |
| XLM-R | 76.7 ± 2.5 | 83.6 ± 0.8 | 22.1 ± 3.7 |
| mDeBERTaV3 | 29.2 ± 2.0 | 49.6 ± 1.0 | 0.2 ± 0.0 |
| Conv. BERT | 79.2 ± 2.3 | 86.2 ± 0.8 | 22.6 ± 3.6 |
| BERTweet | **83.6 ± 2.0** | **88.5 ± 0.6** | **34.0 ± 4.4** |
| XLM-T | 79.0 ± 2.4 | 84.5 ± 0.9 | 22.5 ± 3.7 |
| AfriBERTa | 70.1 ± 2.7 | 80.1 ± 0.9 | 12.5 ± 2.8 |
| AfroXLM-R | 79.7 ± 2.3 | 86.1 ± 0.8 | 24.7 ± 4.0 |
| XLM-R Naija | 77.0 ± 2.5 | 83.5 ± 0.8 | 19.1 ± 3.4 |
| NaijaXLM-T | **83.4 ± 2.1** | **89.3 ± 0.7** | **33.7 ± 4.5** |
For more information on the evaluation, please read the [reference paper](https://aclanthology.org/2024.acl-long.488/).
## BibTeX entry and citation information
Please cite the [reference paper](https://aclanthology.org/2024.acl-long.488/) if you use this model.
```bibtex
@inproceedings{tonneau-etal-2024-naijahate,
title = "{N}aija{H}ate: Evaluating Hate Speech Detection on {N}igerian {T}witter Using Representative Data",
author = "Tonneau, Manuel and
Quinta De Castro, Pedro and
Lasri, Karim and
Farouq, Ibrahim and
Subramanian, Lakshmi and
Orozco-Olvera, Victor and
Fraiberger, Samuel",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.488",
pages = "9020--9040",
abstract = "To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD which contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and a better protection of social media users from hateful content in low-resource settings.",
}
```