This model was obtained by LoRA fine-tuning lmsys/vicuna-7b-v1.5 and corresponds to the LLM experiments in the paper "Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning."

Abstract

In the current AI era, users may ask AI companies to delete their data from training datasets due to privacy concerns. For a model owner, retraining a model from scratch consumes significant computational resources. Machine unlearning is therefore an emerging technology that allows a model owner to delete requested training data, or an entire class, with little effect on model performance. However, for large-scale complex data such as images or text, unlearning a class often yields inferior performance because it is difficult to identify the link between a class and the model, and inaccurate class deletion can lead to over- or under-unlearning. In this paper, to accurately define the unlearning class for complex data, we apply the notion of a Concept, rather than an image feature or a text token, to represent the semantic information of the class to be unlearned. This representation cuts the link between the model and the class, enabling complete erasure of the class's impact. To analyze the impact of concepts in complex data, we adopt a Post-hoc Concept Bottleneck Model and Integrated Gradients to precisely identify concepts across different classes. We then leverage data poisoning with random and targeted labels to propose unlearning methods. We evaluate our methods on both image classification models and large language models (LLMs). The results consistently show that the proposed methods accurately erase targeted information from models while largely maintaining model performance.
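To illustrate the Integrated Gradients step used for concept attribution, here is a minimal sketch on a toy differentiable function, assuming NumPy. This is not the paper's implementation (which applies IG inside a Post-hoc Concept Bottleneck Model); the function `f`, its gradient, and all names below are illustrative only. IG attributes a prediction to input dimensions by accumulating gradients along the straight-line path from a baseline to the input.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Approximate Integrated Gradients attributions for input x
    against a baseline, via a midpoint Riemann sum over the
    straight-line path baseline -> x."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        point = baseline + a * (x - baseline)  # interpolated input
        total += grad_f(point)                 # accumulate gradients
    # Scale the averaged gradient by the input difference.
    return (x - baseline) * total / steps

# Toy "model": f(x) = sum(x_i^2), so grad f(x) = 2x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(np.allclose(attr.sum(), f(x) - f(baseline), atol=1e-3))  # True
```

The completeness check at the end is the property that makes IG useful for ranking which inputs (here, concepts) a class prediction depends on: the attributions fully account for the change in the model's output.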

If you find this paper or model helpful, please cite our work:

@misc{chang2024classmachineunlearningcomplex,
      title={Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning}, 
      author={Wenhan Chang and Tianqing Zhu and Heng Xu and Wenjian Liu and Wanlei Zhou},
      year={2024},
      eprint={2405.15662},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2405.15662}, 
}