The model classifies a text as Abusive (Hatespeech and Offensive) or Normal. It is trained on data from Gab and Twitter, with human rationales included in the training data to boost performance. The model also has a rationale predictor head that predicts the rationales for a given abusive sentence.
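Once the classification head produces logits for the three classes, prediction reduces to a softmax and an argmax over the class probabilities. The sketch below shows that post-processing step in plain Python; the logit values and the label order are made-up placeholders for illustration (in practice the logits would come from the model loaded via the repository above, e.g. with Hugging Face `transformers`).

```python
# Sketch of mapping the classifier's three-way logits to a label.
# LABELS and the example logits are assumptions for illustration only.
import math

LABELS = ["hatespeech", "normal", "offensive"]  # assumed label order

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    # Return the highest-probability label and its probability.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, confidence = classify([0.2, 2.5, -0.4])  # hypothetical logits
print(label)  # -> normal
```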
The dataset and models are available here: https://github.com/punyajoy/HateXplain
**For more details about our paper**
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.
***Please cite our paper in any published work that uses any of these resources.***
~~~bibtex
@article{mathew2020hatexplain,
  title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
  author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2012.10289},
  year={2020}
}
~~~