Create README.md
README.md ADDED
@@ -0,0 +1,23 @@
---
base_model:
- google-bert/bert-base-cased
---

# Model Card for the Jailbreak_LLM Evaluator

<!-- Provide a quick summary of what the model is/does. -->

This repository is a backup of the evaluator from [Jailbreak_LLM](https://github.com/Princeton-SysML/Jailbreak_LLM), which is used to evaluate the safety of LLM responses, in particular on the [MaliciousInstruct](https://github.com/Princeton-SysML/Jailbreak_LLM/blob/main/data/MaliciousInstruct.txt) benchmark.
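
A minimal usage sketch follows, assuming the checkpoint exposes a standard sequence-classification head on top of `bert-base-cased`; the repository ID and the meaning of the output classes are placeholders, so consult the [Jailbreak_LLM](https://github.com/Princeton-SysML/Jailbreak_LLM) code for the exact evaluation pipeline.

```python
# Hypothetical usage sketch: load this repository as a sequence classifier and
# score a single LLM response. The repo ID and class labels are placeholders;
# refer to the Jailbreak_LLM code for the actual evaluation procedure.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "<this-repo-id>"  # replace with this repository's Hub ID
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

response = "Sorry, I can't help with that request."
inputs = tokenizer(response, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # per-class scores; map classes per Jailbreak_LLM
```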

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

```
@article{huang2023catastrophic,
  title={Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation},
  author={Huang, Yangsibo and Gupta, Samyak and Xia, Mengzhou and Li, Kai and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06987},
  year={2023}
}
```