swj0419 committed · Commit 57fd20b · 1 parent: feb5f9a

Update README.md

Files changed (1): README.md +47 -0
README.md CHANGED
@@ -1,3 +1,50 @@
  ---
  license: mit
  ---
+
+
+ # 📘 BookMIA Datasets
+
+ The **BookMIA datasets** serve as a benchmark for evaluating membership inference attack (MIA) methods, specifically for detecting pretraining data of OpenAI models released before 2023 (such as text-davinci-003).
+ The dataset contains non-member and member data: non-member data consists of text excerpts from books first published in 2023, while member data includes text excerpts from older books, as categorized by Chang et al. (2023) [1].
+
+ ## 📌 Applicability
+
+ The datasets can be applied to various OpenAI models released before **2023**:
+
+ - text-davinci-001
+ - text-davinci-002
+ - ... and more.
+
+ ## Loading the datasets
+
+ To load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the BookMIA benchmark from the Hugging Face Hub
+ dataset = load_dataset("swj0419/BookMIA")
+ ```
+ * Text lengths: `512`.
+ * *Label 0*: data unseen during pretraining (non-members). *Label 1*: data seen during pretraining (members).
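+
+ As a quick sanity check after loading, you can split the examples by the `label` column described above. This is a minimal sketch, not part of the official codebase; the split is looked up at runtime, and only the `label` column documented here is assumed.
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("swj0419/BookMIA")
+
+ # Use whichever split the Hub copy exposes (typically "train").
+ split = dataset[list(dataset.keys())[0]]
+
+ # `label` is documented above: 0 = unseen (non-member), 1 = seen (member).
+ members = split.filter(lambda ex: ex["label"] == 1)
+ non_members = split.filter(lambda ex: ex["label"] == 0)
+ print(f"members: {len(members)}, non-members: {len(non_members)}")
+ ```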
+
+ ## 🛠️ Codebase
+
+ To evaluate MIA methods on our datasets, visit our [GitHub repository](https://github.com/swj0419/detect-pretrain-code).
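+
+ The linked repository accompanies the paper cited below, which proposes the Min-K% Prob detection method. For orientation only, here is a minimal, unofficial sketch of that scoring idea, using an open Hugging Face model (`gpt2`) as a stand-in for the OpenAI models listed above; consult the repository for the actual evaluation code.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ def min_k_prob_score(text, model, tokenizer, k=0.2):
+     """Average log-probability of the k% least likely tokens (higher suggests member text)."""
+     enc = tokenizer(text, return_tensors="pt", truncation=True)
+     with torch.no_grad():
+         logits = model(**enc).logits                       # (1, seq_len, vocab_size)
+     log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for positions 1..n-1
+     targets = enc["input_ids"][0, 1:].unsqueeze(-1)        # tokens actually observed
+     token_lp = log_probs.gather(-1, targets).squeeze(-1)   # per-token log-probabilities
+     k_len = max(1, int(k * token_lp.numel()))
+     return token_lp.topk(k_len, largest=False).values.mean().item()
+
+ # gpt2 is only a stand-in here; the benchmark itself targets pre-2023 OpenAI models.
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
+ print(min_k_prob_score("An excerpt from a book goes here ...", model, tokenizer))
+ ```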
+
+ ## ⭐ Citing our Work
+
+ If you find our codebase and datasets useful, please cite our work:
+
+ ```bibtex
+ @misc{shi2023detecting,
+   title={Detecting Pretraining Data from Large Language Models},
+   author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
+   year={2023},
+   eprint={2310.16789},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+
+ [1] Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.