---
license: apache-2.0
language:
- en
pretty_name: 100 system prompts for benchmarking large language models
size_categories:
- n<1K
---
# Dataset Card for 100 System Prompts for Benchmarking Large Language Models

<!-- Provide a quick summary of the dataset. -->

This dataset is a collection of 100 system prompts for large language models.
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

These 100 system prompts test a model's ability to follow grammatical patterns, answer basic multiple-choice questions, act according to a particular persona, memorize information, and speak in French.

Files:
- **hundred_system_prompts.py**: the source of truth; contains the (prompt, probe, function) triplets along with the helper functions.
- **hundred_system_prompts.json**: a JSON rendering of the triplets, purely for display purposes.
- **run_benchmark.py**: runs the 100 tests on a model, with no context other than the system prompt and the probe.
- **create_json_file.py**: a small script used to generate **hundred_system_prompts.json**.

More info:
- **Curated by:** Naomi Bashkansky
- **Language(s) (NLP):** en
- **License:** apache-2.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/likenneth/persona
- **Paper:** Forthcoming.

## Uses

A benchmark for large language models: how well do LLMs follow a system prompt? It tests both basic capability (is the model *able* to follow the system prompt?) and basic alignment (does a model that *can* follow the system prompt actually do so?).

Can be used to compare different models, or to help in performing interventions on a model to make it better at following system prompts.
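The evaluation described above can be sketched as a simple loop. This is a minimal illustration, not the actual **run_benchmark.py**: the `query_model` callable and the example triplet are hypothetical stand-ins.

```python
# Minimal sketch of the benchmark loop: the model sees only the system
# prompt and the probe, and each scoring function maps the response to
# a value between 0 (not following) and 1 (following).
# `query_model(system_prompt, user_message)` is a hypothetical chat
# function; run_benchmark.py may differ in its details.

def run_benchmark(triplets, query_model):
    """Return the mean score over all (prompt, probe, function) triplets."""
    scores = []
    for prompt, probe, function in triplets:
        response = query_model(prompt, probe)
        scores.append(function(response))
    return sum(scores) / len(scores)

# Usage with one illustrative triplet and a trivial stand-in "model":
triplets = [
    ("Always answer in exactly one word.",
     "What is the capital of France?",
     lambda r: 1.0 if len(r.split()) == 1 else 0.0),
]
mean_score = run_benchmark(triplets, lambda prompt, probe: "Paris.")
```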

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

This dataset is released open source. Researchers are especially encouraged to use this dataset.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

"prompt" is given as a system prompt to a large language model. "probe" is given as a user inquiry; its purpose is to elicit a response that lets us check whether the LLM is following the system prompt. "function" checks whether the LLM's response to the probe follows the system prompt, returning a number from 0 (not following) to 1 (following).
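A triplet in this structure might look like the following. This is a hypothetical example in the style of **hundred_system_prompts.py**, not an actual entry from the dataset.

```python
# Illustrative (prompt, probe, function) triplet. The prompt constrains
# the model's output; the probe elicits a response; the function scores
# compliance on a 0-to-1 scale.

prompt = "Always answer in exactly one word."
probe = "What is the capital of France?"

def check(response: str) -> float:
    """Return 1.0 if the response is a single word, else 0.0."""
    return 1.0 if len(response.split()) == 1 else 0.0

score_good = check("Paris.")                            # compliant
score_bad = check("The capital of France is Paris.")    # non-compliant
```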

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

To the authors' knowledge, no prior benchmark tests how well models follow system prompts.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

Process: the authors wrote the system prompts, probes, and testing functions, then ran the system prompts on GPT-4 to check that GPT-4 is (mostly) able to follow them. The testing functions are written in Python.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Naomi Bashkansky made most of the system prompts, and Kenneth Li made the rest.


#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

No.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Limitation: as models become more capable, this benchmark may become outdated or too easy. The ideal benchmark is one that tests the model's alignment - its propensity toward following the system prompt - rather than its ability to do so.

Bias: this dataset is only in English, with the exception of three French prompts.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

Forthcoming.

**APA:**

Forthcoming.

## Dataset Card Authors

Naomi Bashkansky, Kenneth Li

## Dataset Card Contact

naomibashkansky@college.harvard.edu, ke_li@g.harvard.edu