---
license: mit
---


# 📘 BookMIA Datasets

The **BookMIA datasets** serve as a benchmark for evaluating membership inference attack (MIA) methods, specifically for detecting pretraining data of OpenAI models released before 2023 (such as text-davinci-003).

The dataset contains member and non-member data:
- Non-member data consists of text excerpts from books first published in 2023, which therefore cannot have been seen during pretraining.
- Member data consists of text excerpts from older books, as categorized by Chang et al. (2023) [1].

### 📌 Applicability

The datasets can be applied to various OpenAI models released before **2023**:

- text-davinci-001
- text-davinci-002
- ... and more.

## Loading the datasets

To load the dataset:

```python
from datasets import load_dataset

# Download the BookMIA dataset from the Hugging Face Hub.
dataset = load_dataset("swj0419/BookMIA")
```
* Text length: `512`.
* *Label 0*: non-member data (unseen during pretraining). *Label 1*: member data (seen during pretraining). A short example of splitting the dataset by label is shown below.
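As a sketch of how the labels can be used, the snippet below separates member from non-member excerpts. The split name `train` and the column names `snippet` and `label` are assumptions based on the description above; check `dataset.column_names` after loading.

```python
from datasets import load_dataset

# Assumed layout: a single "train" split with "snippet" and "label" columns.
dataset = load_dataset("swj0419/BookMIA", split="train")

members = dataset.filter(lambda ex: ex["label"] == 1)      # seen during pretraining
non_members = dataset.filter(lambda ex: ex["label"] == 0)  # unseen (books first published in 2023)

print(f"members: {len(members)}, non-members: {len(non_members)}")
print(non_members[0]["snippet"][:200])
```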

## ๐Ÿ› ๏ธ Codebase

To evaluate MIA methods on our datasets, see our [GitHub repository](https://github.com/swj0419/detect-pretrain-code).
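As a rough illustration of the kind of scoring the repository implements, the sketch below computes a Min-K% Prob-style score (from the paper cited below) using an open model (`gpt2`) as a stand-in. It is not the repository's implementation; the choice of model, `k`, and truncation length are assumptions made purely for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is only a stand-in here; the benchmark targets pre-2023 OpenAI models.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def min_k_percent_prob(text: str, k: float = 0.2) -> float:
    """Average log-probability of the k% least likely tokens (higher -> more likely member)."""
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability assigned to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    n = max(1, int(k * token_log_probs.numel()))
    lowest = torch.topk(token_log_probs, n, largest=False).values
    return lowest.mean().item()

score = min_k_percent_prob("An excerpt you suspect was in the pretraining data ...")
print(score)
```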

## โญ Citing our Work

If you find our codebase and datasets useful, please cite our work:

```bibtex
@misc{shi2023detecting,
    title={Detecting Pretraining Data from Large Language Models},
    author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
    year={2023},
    eprint={2310.16789},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

[1] Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.