---
language:
- en
license: apache-2.0
extra_gated_prompt: 
  Access to this model is automatically granted upon accepting the [AI2
  Responsible Use Guidelines](https://allenai.org/responsible-use.pdf), and
  completing all fields below
extra_gated_fields:
  Your full name: text
  Organization or entity you are affiliated with: text
  State or country you are located in: text
  Contact email: text
  Please describe your intended use of the low risk artifact(s): text
  I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox
  I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox
  I agree that AI2 may use my information as described in the Privacy Policy: checkbox
  I certify that the information I have provided is true and accurate: checkbox
---


## Model Card for llama2-13b-WildJailbreak

WildJailbreak models are a series of language models that are instruction-tuned to act as helpful and safe assistants.

For more details, read the paper: [WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models](https://arxiv.org/abs/2406.18510).

## Model description

- **Model type:** The model is fine-tuned with the [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak) safety training dataset + an augmented version of [Tulu2Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), a general capability instruction-tuning dataset.
- **Model size:** 13B
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
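
As a rough sketch, the model can be loaded with the Hugging Face `transformers` library. Note that the repository id `allenai/llama2-13b-WildJailbreak` is inferred from the model card title, and the dtype and device settings are assumptions, not confirmed settings:

```python
# Minimal loading sketch; assumes access to the gated repository
# "allenai/llama2-13b-WildJailbreak" has been granted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/llama2-13b-WildJailbreak"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 13B weights manageable
    device_map="auto",           # requires the `accelerate` package
)
```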

## Results

Please refer to our [paper](https://arxiv.org/abs/2406.18510) for full details of the model results.

<img src="assets/safety_training_results.png" alt="safety training results" width="600"/>

## Intended uses & limitations

The model was fine-tuned on a mixture of the [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak) safety training data and an augmented version of the [Tulu2Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
Although the model underwent significant safety enhancement via [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak), it is not robust to all types of jailbreaks, especially in multilingual setups and multi-turn conversations.
We hope that open-sourcing safety-trained models and their safety training resources will facilitate new studies of the limitations and promises of LLM safety, tailored to models with enhanced safety abilities.
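
Continuing the loading sketch above, the snippet below shows one plausible way to prompt the model. Because the model is tuned on Tulu2Mix, the Tulu-style `<|user|>`/`<|assistant|>` chat format is a reasonable guess for the prompt template, but it is an assumption rather than a confirmed specification for this checkpoint:

```python
# Assumed Tulu-style chat template; verify against the tokenizer's chat template.
prompt = "<|user|>\nWhat are common signs of a phishing email?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```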

### Training details

Training parameters are summarized in the figure below.

<img src="assets/params.png" alt="training parameters" width="200"/>


## Citation

If you find this resource useful in your work, please cite it with:

```
@misc{wildteaming2024,
      title={WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models}, 
      author={Liwei Jiang and Kavel Rao and Seungju Han and Allyson Ettinger and Faeze Brahman and Sachin Kumar and Niloofar Mireshghallah and Ximing Lu and Maarten Sap and Yejin Choi and Nouha Dziri},
      year={2024},
      eprint={2406.18510},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.18510}, 
}
```