---
license: apache-2.0
dataset_info:
- config_name: arxiv
  features:
  - name: text
    dtype: string
  splits:
  - name: forget
    num_bytes: 22127152
    num_examples: 500
  - name: approximate
    num_bytes: 371246809
    num_examples: 6155
  - name: retain
    num_bytes: 84373706
    num_examples: 2000
  download_size: 216767075
  dataset_size: 477747667
- config_name: general
  features:
  - name: text
    dtype: string
  splits:
  - name: evaluation
    num_bytes: 4628036
    num_examples: 1000
  - name: retain
    num_bytes: 24472399
    num_examples: 5000
  download_size: 17206310
  dataset_size: 29100435
- config_name: github
  features:
  - name: text
    dtype: string
  splits:
  - name: forget
    num_bytes: 14069535
    num_examples: 2000
  - name: approximate
    num_bytes: 82904771
    num_examples: 15815
  - name: retain
    num_bytes: 28749659
    num_examples: 4000
  download_size: 43282163
  dataset_size: 125723965
configs:
- config_name: arxiv
  data_files:
  - split: forget
    path: arxiv/forget-*
  - split: approximate
    path: arxiv/approximate-*
  - split: retain
    path: arxiv/retain-*
- config_name: general
  data_files:
  - split: evaluation
    path: general/evaluation-*
  - split: retain
    path: general/retain-*
- config_name: github
  data_files:
  - split: forget
    path: github/forget-*
  - split: approximate
    path: github/approximate-*
  - split: retain
    path: github/retain-*
---
# unlearn_dataset
The unlearn_dataset is a benchmark for evaluating unlearning methods for pre-trained large language models across diverse domains, including arXiv and GitHub.
## Loading the datasets
To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("llmunlearn/unlearn_dataset", name="arxiv", split="forget")
```
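Each example carries a single `text` field (see the `dataset_info` metadata above). A minimal way to inspect one record, using only the standard `datasets` API:

```python
# Print the first 200 characters of the first example's "text" field.
print(dataset[0]["text"][:200])
```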
Available configuration names and their corresponding splits (a combined loading sketch follows this list):

- `arxiv`: forget, approximate, retain
- `github`: forget, approximate, retain
- `general`: evaluation, retain
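As a quick sanity check of every configuration, the sketch below iterates over each config/split pair and prints its example count. The config-to-splits mapping is copied from the list above; everything else is standard `datasets` usage.

```python
from datasets import load_dataset

# Config-to-splits mapping, copied from the list above.
CONFIG_SPLITS = {
    "arxiv": ["forget", "approximate", "retain"],
    "github": ["forget", "approximate", "retain"],
    "general": ["evaluation", "retain"],
}

for name, splits in CONFIG_SPLITS.items():
    for split in splits:
        ds = load_dataset("llmunlearn/unlearn_dataset", name=name, split=split)
        print(f"{name}/{split}: {len(ds)} examples")
```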
## Codebase
To evaluate unlearning methods on our datasets, visit our GitHub repository.
## Citing our Work
If you find our codebase or dataset useful, please consider citing our paper:
```bibtex
@article{yao2024machine,
  title={Machine Unlearning of Pre-trained Large Language Models},
  author={Yao, Jin and Chien, Eli and Du, Minxin and Niu, Xinyao and Wang, Tianhao and Cheng, Zezhou and Yue, Xiang},
  journal={arXiv preprint arXiv:2402.15159},
  year={2024}
}
```