---
license: mit
---

# 📘 BookMIA Datasets

The BookMIA datasets serve as a benchmark for evaluating membership inference attack (MIA) methods, specifically their ability to detect pretraining data from OpenAI models released before 2023 (such as text-davinci-003).

The dataset contains non-member and member data:

- Non-member data consists of text excerpts from books first published in 2023.
- Member data consists of text excerpts from older books, as categorized by Chang et al. (2023) [1].

## 📌 Applicability

The datasets can be applied to various OpenAI models released before 2023:

- text-davinci-001
- text-davinci-002
- ... and more.
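As context for what such an evaluation involves, the accompanying paper (Shi et al., cited below) proposes the Min-K% Prob detection method: score an excerpt by the average log-probability of its least-likely tokens, then threshold the score. Below is a minimal sketch, assuming per-token log probabilities have already been obtained from the target model; it is an illustration, not the paper's reference implementation:

```python
import numpy as np

def min_k_prob(token_logprobs, k=0.2):
    """Min-K% Prob score: mean log-probability of the k% least-likely
    tokens in an excerpt. Text seen during pretraining tends to contain
    fewer very-low-probability tokens, so higher scores suggest membership.
    """
    logprobs = np.sort(np.asarray(token_logprobs))  # ascending: rarest first
    n = max(1, int(len(logprobs) * k))
    return float(logprobs[:n].mean())

# Hypothetical per-token log-probs for one excerpt:
score = min_k_prob([-0.2, -5.1, -0.4, -8.3, -0.1, -3.7], k=0.4)
print(score)  # mean of the two lowest log-probs: -6.7
```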

## Loading the datasets

To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("swj0419/BookMIA")
```
- Text lengths: 512.
- Label 0: data unseen during pretraining (non-member). Label 1: data seen during pretraining (member).
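As a usage sketch, one can inspect an example and separate member from non-member excerpts by label. The split name `train` and the field name `label` are assumptions here (as is the text column), so check `dataset.column_names` against the actual schema:

```python
from datasets import load_dataset

# Load BookMIA (a single split, assumed to be named "train").
dataset = load_dataset("swj0419/BookMIA", split="train")

print(dataset.column_names)  # verify the actual field names
print(dataset[0])            # one excerpt with its membership label

# Label 1 = member (seen during pretraining), label 0 = non-member.
members = dataset.filter(lambda row: row["label"] == 1)
non_members = dataset.filter(lambda row: row["label"] == 0)
print(len(members), len(non_members))
```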

## 🛠️ Codebase

For evaluating MIA methods on our datasets, visit our GitHub repository.
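The repository implements the full pipeline; as a rough illustration of how MIA methods are commonly scored (not necessarily the repository's exact procedure), membership scores such as Min-K% Prob are compared against the 0/1 labels using AUROC:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical membership scores (e.g., Min-K% Prob from the sketch above),
# one per excerpt, paired with the dataset's 0/1 labels.
labels = [1, 0, 1, 0, 1, 0]
scores = [-1.2, -6.8, -0.9, -2.5, -3.1, -7.3]  # higher => predicted member

print(f"AUROC: {roc_auc_score(labels, scores):.3f}")
```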

## ⭐ Citing our Work

If you find our codebase and datasets useful, please cite our work:

```bibtex
@misc{shi2023detecting,
    title={Detecting Pretraining Data from Large Language Models},
    author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
    year={2023},
    eprint={2310.16789},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

[1] Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.