Commit 04c3e3e · Update README.md
Parent(s): dda52b6
README.md CHANGED
@@ -67,17 +67,15 @@ We compare our model in each iteration (R6-T, R7-T, R8-T) to:
 * **B-D**: A BERT model trained on the [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language) dataset
 * **B-F**: A BERT model trained on the [Founta et al. (2018)](https://github.com/ENCASEH2020/hatespeech-twitter) dataset
 
-| | **Emoji Test Sets** | | | | **Text Test Sets** |
-| :------- | :-----------------: | :--------: | :------------: | :--------: | :----------------: |
-| | **R5-R7** | | **HmojiCheck** | | **R1-R4** |
-| | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1**
-| **P-IA** | 0\.508 | 0\.394 | 0\.689 | 0\.754 | 0\.679 | 0\.720
-| **P-TX** | 0\.523 | 0\.448 | 0\.650 | 0\.711 | 0\.602 | 0\.659
-| **B-D** | 0\.489 | 0\.270 | 0\.578 | 0\.636 | 0\.589 | 0\.607
-| **B-F** | 0\.496 | 0\.322 | 0\.552 | 0\.605 | 0\.562 | 0\.562
-| **
-| **R7-T** | **0\.759** | 0\.762 | 0\.867 | 0\.899 | 0\.824 | 0\.842 | 0\.955 | 0\.967 | 0\.813 | 0\.829 |
-| **R8-T** | 0\.744 | 0\.755 | 0\.871 | 0\.904 | 0\.827 | 0\.844 | **0\.966** | **0\.975** | **0\.814** | **0\.829** |
+| | **Emoji Test Sets** | | | | **Text Test Sets** | | | | **All Rounds** | |
+| :------- | :-----------------: | :--------: | :------------: | :--------: | :----------------: | :--------: | :-----------: | :--------: | :------------: | :--------: |
+| | **R5-R7** | | **HmojiCheck** | | **R1-R4** | | **HateCheck** | | **R1-R7** | |
+| | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** |
+| **P-IA** | 0\.508 | 0\.394 | 0\.689 | 0\.754 | 0\.679 | 0\.720 | 0\.765 | 0\.839 | 0\.658 | 0\.689 |
+| **P-TX** | 0\.523 | 0\.448 | 0\.650 | 0\.711 | 0\.602 | 0\.659 | 0\.720 | 0\.813 | 0\.592 | 0\.639 |
+| **B-D** | 0\.489 | 0\.270 | 0\.578 | 0\.636 | 0\.589 | 0\.607 | 0\.632 | 0\.738 | 0\.591 | 0\.586 |
+| **B-F** | 0\.496 | 0\.322 | 0\.552 | 0\.605 | 0\.562 | 0\.562 | 0\.602 | 0\.694 | 0\.557 | 0\.532 |
+| **Hatemoji** | **0\.744** | **0\.755** | **0\.871** | **0\.904** | **0\.827** | **0\.844** | **0\.966** | **0\.975** | **0\.814** | **0\.829** |
 
 For full discussion of the model results, see our [paper](https://arxiv.org/abs/2108.05921).
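
The table reports accuracy ("Acc") and F1 for each model on each test set. A minimal sketch of how such scores could be computed, assuming scikit-learn's standard metrics and macro-averaged F1 (the averaging scheme is an assumption; this commit page does not specify it):

```python
# Sketch only: computing the "Acc" and "F1" columns from gold labels and
# model predictions. Labels here are hypothetical (1 = hateful, 0 = not).
from sklearn.metrics import accuracy_score, f1_score

gold = [1, 0, 1, 1, 0, 0, 1, 0]  # placeholder gold labels
pred = [1, 0, 0, 1, 0, 1, 1, 0]  # placeholder model predictions

acc = accuracy_score(gold, pred)            # "Acc" column
f1 = f1_score(gold, pred, average="macro")  # "F1" column (macro is an assumption)
print(f"Acc: {acc:.3f}  F1: {f1:.3f}")
```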
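
The B-D and B-F baselines are BERT classifiers fine-tuned on the Davidson et al. (2017) and Founta et al. (2018) datasets. A hedged sketch of scoring test instances with such a classifier via Hugging Face transformers; the checkpoint path below is hypothetical, since neither baseline's checkpoint is linked here:

```python
# Sketch only: running a fine-tuned BERT hate-speech classifier over test
# instances, in the spirit of the B-D / B-F baselines. The model path is
# hypothetical; this page does not publish the baseline checkpoints.
from transformers import pipeline

clf = pipeline("text-classification", model="path/to/bert-finetuned-baseline")

examples = ["an example test instance"]  # placeholder test set
for out in clf(examples):
    print(out["label"], out["score"])
```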