youngggggg committed
Commit 3fb90a3 · Parent: 4a14975
Update README.md
README.md CHANGED

@@ -17,8 +17,6 @@ The model is pre-trained on a machine-generated dataset for implicit hate speech
 
 ## Model Details
 
-### Model Description
-
 <!-- Provide a longer summary of what this model is. -->
 
 - **Base Model:** BERT-base-uncased

@@ -48,3 +46,10 @@ While these behavior can lead to social good e.g., constructing training data fo
 **We strongly emphasize the need for careful handling to prevent unintentional misuse and warn against malicious exploitation of such behaviors.**
 
 
+## Acknowledgements
+- We use the [ToxiGen](https://huggingface.co/datasets/skg/toxigen-data) dataset as a pre-training source to pre-train our model. You can refer to the paper [here](https://aclanthology.org/2022.acl-long.234/).
+- We anonymize private information in the pre-training source following the code from https://github.com/dhfbk/hate-speech-artifacts.
+- Our pre-training code is based on the code from https://github.com/princeton-nlp/SimCSE with some modifications.
+- We use the code from https://github.com/youngwook06/ImpCon to fine-tune and evaluate our model.
+
+
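The updated card names BERT-base-uncased as the base architecture and points to the ImpCon repository for fine-tuning and evaluation. As a rough illustration only, the sketch below shows how such a pre-trained encoder could be loaded with the Hugging Face `transformers` library; the repository ID used here is a placeholder and is not specified anywhere in this commit.

```python
# Minimal sketch of loading a pre-trained BERT-base-uncased encoder like the
# one described in this model card, using the Hugging Face transformers library.
from transformers import AutoModel, AutoTokenizer

# NOTE: "youngggggg/implicit-hate-bert" is a hypothetical repository ID used
# only for illustration; the actual repo name is not given in this commit.
model_id = "youngggggg/implicit-hate-bert"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # BERT-base-uncased vocabulary
model = AutoModel.from_pretrained(model_id)          # BERT-base-uncased encoder

# Encode a sample post and inspect the [CLS] representation, which downstream
# fine-tuning code (e.g., the ImpCon repo cited above) would attach a classifier to.
inputs = tokenizer("example input text", return_tensors="pt", truncation=True)
outputs = model(**inputs)
print(outputs.last_hidden_state[:, 0].shape)
```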