arxiv:2106.09680

Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

Published on Jun 17, 2021
Abstract

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even under strong differential privacy guarantees. Beyond high accuracy, applying DP to EBMs has two further benefits: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) models can be edited after training, without any loss of privacy, to correct errors that DP noise may have introduced.
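The two properties the abstract highlights follow from EBMs being additive models trained through per-bin aggregates. The sketch below (not the authors' code; function names, the fixed `noise_scale`, and the synthetic data are illustrative assumptions) shows the two ideas in miniature: noising per-bin sums with the Gaussian mechanism, and post-hoc editing of a released shape function, which is pure post-processing and so costs no extra privacy budget.

```python
import numpy as np

def dp_bin_sums(x_binned, residuals, n_bins, noise_scale, rng):
    """Aggregate residuals per feature bin, then add Gaussian noise.

    DP-EBM-style training touches the data only through per-bin sums,
    so adding calibrated noise to those sums privatizes the update.
    `noise_scale` stands in for the sigma a real implementation would
    derive from the (epsilon, delta) budget and sensitivity.
    """
    sums = np.zeros(n_bins)
    np.add.at(sums, x_binned, residuals)  # exact per-bin sums
    return sums + rng.normal(0.0, noise_scale, n_bins)  # Gaussian mechanism

def edit_shape_function(shape_function, bin_index, new_value):
    """Post-hoc edit: overwrite one bin of an additive shape function.

    The released model is just these per-bin values, so editing them is
    post-processing of a DP output and incurs no additional privacy cost.
    """
    edited = shape_function.copy()
    edited[bin_index] = new_value
    return edited

rng = np.random.default_rng(0)
x_binned = rng.integers(0, 8, size=1000)   # synthetic feature with 8 bins
residuals = rng.normal(size=1000)
noisy = dp_bin_sums(x_binned, residuals, n_bins=8, noise_scale=1.0, rng=rng)
fixed = edit_shape_function(noisy, bin_index=3, new_value=0.0)  # suppress one noisy bin
```

A practical DP-EBM implementation along these lines is available as `DPExplainableBoostingClassifier` in the `interpret` package; the sketch above only illustrates the mechanism, not the full boosting loop or budget accounting.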
