---
license: mit
---

# 📘 BookMIA Datasets

The **BookMIA datasets** serve as a benchmark for evaluating membership inference attack (MIA) methods, specifically for detecting pretraining data of OpenAI models released before 2023 (such as text-davinci-003). The dataset contains non-member and member data: non-member data consists of text excerpts from books first published in 2023, while member data consists of text excerpts from older books, as categorized by Chang et al. (2023) [1].

### 📌 Applicability

The datasets can be applied to various OpenAI models released before **2023**:

- text-davinci-001
- text-davinci-002
- ... and more.

## Loading the datasets

To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("swj0419/BookMIA")
```

* Text lengths: `512`.
* *Label 0*: data unseen during pretraining (non-member). *Label 1*: data seen during pretraining (member).

A sketch of splitting the examples by membership label is included at the end of this card.

## 🛠️ Codebase

For evaluating MIA methods on our datasets, visit our [GitHub repository](https://github.com/swj0419/detect-pretrain-code).

## ⭐ Citing our Work

If you find our codebase and datasets beneficial, kindly cite our work:

```bibtex
@misc{shi2023detecting,
    title={Detecting Pretraining Data from Large Language Models},
    author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer},
    year={2023},
    eprint={2310.16789},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

[1] Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.
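
## Example: splitting members and non-members

As a quick illustration of the labeling scheme described above, here is a minimal sketch that separates member and non-member excerpts. It assumes a single `train` split with `snippet` and `label` fields; these names are assumptions, so check `dataset.column_names` if the schema differs.

```python
from datasets import load_dataset

# Load BookMIA (assumes a single "train" split).
dataset = load_dataset("swj0419/BookMIA", split="train")

# Split by membership label: 0 = unseen during pretraining (non-member),
# 1 = seen during pretraining (member).
non_members = dataset.filter(lambda ex: ex["label"] == 0)
members = dataset.filter(lambda ex: ex["label"] == 1)

print(f"{len(members)} member excerpts, {len(non_members)} non-member excerpts")
# Peek at the first member excerpt (field name "snippet" is an assumption).
print(members[0]["snippet"][:200])
```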