Abstract
Machine unlearning is an emerging paradigm to remove the influence of specific training data (i.e., the forget set) from a model while preserving its knowledge of the rest of the data (i.e., the retain set). Previous approaches assume the forget set to be uniformly sampled from all training datapoints. However, if the data to unlearn is dominant in one group, we empirically show that performance for this group degrades, leading to fairness issues. This work tackles the overlooked problem of non-uniformly distributed forget sets, which we call group-robust machine unlearning, by presenting a simple, effective strategy that mitigates the performance loss in dominant groups via sample distribution reweighting. Moreover, we present MIU (Mutual Information-aware Machine Unlearning), the first approach for group robustness in approximate machine unlearning. MIU minimizes the mutual information between model features and group information, achieving unlearning while reducing the performance degradation in the dominant group of the forget set. Additionally, MIU exploits sample distribution reweighting and mutual information calibration with the original model to preserve group robustness. We conduct experiments on three datasets and show that MIU outperforms standard methods, achieving unlearning without compromising model robustness. Source code is available at https://github.com/tdemin16/group-robust_machine_unlearning.
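To make the mutual-information idea concrete, below is a minimal PyTorch sketch of a standard classifier-based proxy for I(z; g), the mutual information between model features z and group labels g. This is an illustrative reconstruction, not the paper's implementation: `GroupHead`, `head_loss`, and `mi_proxy_loss` are hypothetical names, and MIU's actual estimator and its calibration against the original model may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupHead(nn.Module):
    """Auxiliary classifier q(g | z) used to estimate I(z; g) (hypothetical)."""
    def __init__(self, feat_dim: int, num_groups: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_groups)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.fc(z)

def head_loss(head: GroupHead, z: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """Fit the head to predict group labels from detached features,
    keeping the classifier-based MI estimate tight."""
    return F.cross_entropy(head(z.detach()), g)

def mi_proxy_loss(head: GroupHead, z: torch.Tensor) -> torch.Tensor:
    """Proxy for I(z; g) = H(g) - H(g | z): since H(g) is constant in the
    model, maximizing the conditional entropy H(g | z) under the fitted
    head pushes group information out of the features."""
    log_q = F.log_softmax(head(z), dim=-1)
    cond_entropy = -(log_q.exp() * log_q).sum(dim=-1).mean()
    return -cond_entropy  # minimized by the feature extractor
```

In such adversarial schemes the head is updated with `head_loss` and the feature extractor with `mi_proxy_loss` in alternation, so the features are driven toward carrying no group information.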
Community
TL;DR:
This paper introduces group-robust machine unlearning, addressing the overlooked issue that real-world unlearning requests often come disproportionately from specific groups, potentially harming model robustness after unlearning. To tackle this, we propose (i) REWEIGHT, a simple retraining strategy that preserves group robustness in exact unlearning, and (ii) MIU, a novel approximate unlearning method that uses mutual information minimization and calibration to forget data while maintaining fairness across groups. Experiments on CelebA, Waterbirds, and FairFace show that MIU outperforms existing methods in both unlearning performance and group-robustness preservation.
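For the exact-unlearning side, the intent of REWEIGHT (restoring the original group proportions when retraining on the retain set) can be sketched as below. `reweight_sampler` is a hypothetical helper written under the assumption that group labels are available for both the original and the retain sets; the paper's weighting scheme may differ in detail.

```python
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def reweight_sampler(retain_groups, original_groups):
    """Sample the retain set so that its effective group distribution
    matches the group distribution of the original training set
    (hypothetical sketch of the sample distribution reweighting idea)."""
    orig_freq = Counter(original_groups)
    retain_freq = Counter(retain_groups)
    n_orig, n_retain = len(original_groups), len(retain_groups)
    # Up-weight samples whose group became under-represented after
    # removing the forget set: w_i = p_orig(g_i) / p_retain(g_i).
    weights = [
        (orig_freq[g] / n_orig) / (retain_freq[g] / n_retain)
        for g in retain_groups
    ]
    return WeightedRandomSampler(weights, num_samples=n_retain, replacement=True)
```

The resulting sampler would then be passed to the retain-set DataLoader during retraining, so each group contributes in its original proportion despite the non-uniform forget set.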
Related papers recommended by the Semantic Scholar API:
- Are We Truly Forgetting? A Critical Re-examination of Machine Unlearning Evaluation Protocols (2025)
- Learning to Unlearn while Retaining: Combating Gradient Conflicts in Machine Unlearning (2025)
- AMUN: Adversarial Machine UNlearning (2025)
- Go Beyond Your Means: Unlearning with Per-Sample Gradient Orthogonalization (2025)
- How Does Overparameterization Affect Machine Unlearning of Deep Neural Networks? (2025)
- UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning (2025)
- Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions (2025)