arXiv:2503.09330

Group-robust Machine Unlearning

Published on Mar 12 · Submitted by tdemin16 on Mar 17
Abstract

Machine unlearning is an emerging paradigm for removing the influence of specific training data (i.e., the forget set) from a model while preserving its knowledge of the remaining data (i.e., the retain set). Previous approaches assume the forget data to be uniformly sampled from the training datapoints. However, if the data to unlearn is concentrated in one group, we empirically show that performance for this group degrades, leading to fairness issues. This work tackles the overlooked problem of non-uniformly distributed forget sets, which we call group-robust machine unlearning, by presenting a simple, effective strategy that mitigates the performance loss in dominant groups via sample distribution reweighting. Moreover, we present MIU (Mutual Information-aware Machine Unlearning), the first approach for group robustness in approximate machine unlearning. MIU minimizes the mutual information between model features and group information, achieving unlearning while reducing performance degradation in the dominant group of the forget set. Additionally, MIU exploits sample distribution reweighting and mutual information calibration with the original model to preserve group robustness. We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code is available at https://github.com/tdemin16/group-robust_machine_unlearning.
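To make the mutual-information idea concrete, here is a minimal, hypothetical sketch, not the paper's actual MIU implementation; all module, function, and variable names are illustrative. Since MI(Z; G) = H(G) − H(G|Z) is intractable, a standard surrogate is an adversarial game with an auxiliary group classifier: the classifier is trained to predict the group from features, then the encoder is updated to make those predictions uniform on the forget set, raising the conditional entropy H(G|Z) and thus lowering the MI estimate, while a cross-entropy term on the retain set preserves task performance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyModel(nn.Module):
    """Illustrative backbone: encoder + task head + auxiliary group head."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=10, n_groups=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.task_head = nn.Linear(feat_dim, n_classes)
        self.group_head = nn.Linear(feat_dim, n_groups)  # adversarial MI proxy

def mi_unlearning_step(model, opt_model, opt_group,
                       x_retain, y_retain, x_forget, g_forget, lam=1.0):
    # (1) Adversary step: fit the group head on frozen forget-set features,
    #     tightening a variational estimate of MI(features; group).
    with torch.no_grad():
        z_forget = model.encoder(x_forget)
    opt_group.zero_grad()
    F.cross_entropy(model.group_head(z_forget), g_forget).backward()
    opt_group.step()

    # (2) Model step: keep retain-set accuracy while confusing the group head.
    opt_model.zero_grad()
    retain_loss = F.cross_entropy(
        model.task_head(model.encoder(x_retain)), y_retain)
    log_p = F.log_softmax(model.group_head(model.encoder(x_forget)), dim=1)
    cond_entropy = -(log_p.exp() * log_p).sum(dim=1).mean()  # estimate of H(G|Z)
    # Raising H(G|Z) lowers the MI estimate, since MI = H(G) - H(G|Z).
    (retain_loss - lam * cond_entropy).backward()
    opt_model.step()

model = ToyModel()
# opt_model deliberately excludes the group head: the adversary is updated
# only in step (1), never by the model step.
opt_model = torch.optim.Adam(
    list(model.encoder.parameters()) + list(model.task_head.parameters()),
    lr=1e-3)
opt_group = torch.optim.Adam(model.group_head.parameters(), lr=1e-3)
```

The split into two optimizers is the design choice that makes this a min-max game rather than a single loss: the group head stays a competent predictor, so driving its outputs toward uniform genuinely reduces the information the features carry about group membership.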

Community

Paper author and submitter:

TL;DR:
This paper introduces group-robust machine unlearning, addressing the overlooked issue that real-world unlearning requests often come disproportionately from specific groups, potentially harming model robustness after unlearning. To tackle this, we propose (i) REWEIGHT, a simple retraining strategy that preserves group robustness in exact unlearning, and (ii) MIU, a novel approximate unlearning method that uses mutual information minimization and calibration to forget data while maintaining fairness across groups. Experiments on CelebA, Waterbirds, and FairFace show that MIU outperforms existing methods in both unlearning and preserving group robustness.
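For the exact-unlearning side, below is a minimal sketch of what REWEIGHT-style sample distribution reweighting could look like, assuming per-sample group labels are available; the function and variable names are illustrative, not the released code's API. Each retain sample is weighted so that, after the forget set is removed, sampled group frequencies match the original pre-unlearning training distribution.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def group_reweights(groups_full: torch.Tensor, groups_retain: torch.Tensor):
    """Per-sample weights so that sampling the retain set reproduces the
    group proportions of the full, pre-unlearning training set."""
    full = torch.bincount(groups_full).float()
    retain = torch.bincount(groups_retain, minlength=len(full)).float()
    ratio = (full / full.sum()) / (retain / retain.sum()).clamp(min=1e-8)
    return ratio[groups_retain]  # one weight per retain sample

# Usage: retrain on the retain set, drawing samples at the original group
# frequencies (retain_dataset and the label tensors are placeholders).
# weights = group_reweights(all_group_labels, retain_group_labels)
# sampler = WeightedRandomSampler(weights, num_samples=len(weights))
# loader = DataLoader(retain_dataset, batch_size=64, sampler=sampler)
```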


