Accuracy, Interpretability, and Differential Privacy via Explainable Boosting
Abstract
We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training, without loss of privacy, to correct errors that DP noise may have introduced.
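As a concrete illustration, the sketch below trains a DP-EBM using the InterpretML package, which implements this method. This is a minimal example, not code from the paper: it assumes a recent interpret release exposing DPExplainableBoostingClassifier in interpret.privacy with epsilon/delta budget parameters, and exact names may vary across versions.

```python
# Minimal sketch: training a DP-EBM with InterpretML (assumed API).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret.privacy import DPExplainableBoostingClassifier  # assumed module/class name

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# epsilon/delta set the differential-privacy budget; smaller epsilon
# means stronger privacy and, typically, more noise in the learned model.
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X_train, y_train)

print("test accuracy:", dp_ebm.score(X_test, y_test))

# Differential privacy is closed under post-processing, so the learned
# per-feature shape functions can be inspected (and edited to remove
# artifacts introduced by DP noise) without spending additional budget.
global_explanation = dp_ebm.explain_global()
```

Inspecting or editing the trained shape functions, as in the last step, is what the abstract's point (b) relies on: any transformation applied after training consumes no further privacy budget.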