<!--
@license
Copyright 2020 Google. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<!DOCTYPE html>

<html>
<head>
	<meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">

  <link rel="apple-touch-icon" sizes="180x180" href="https://pair.withgoogle.com/images/favicon/apple-touch-icon.png">
  <link rel="icon" type="image/png" sizes="32x32" href="https://pair.withgoogle.com/images/favicon/favicon-32x32.png">
  <link rel="icon" type="image/png" sizes="16x16" href="https://pair.withgoogle.com/images/favicon/favicon-16x16.png">
  <link rel="mask-icon" href="https://pair.withgoogle.com/images/favicon/safari-pinned-tab.svg" color="#00695c">
  <link rel="shortcut icon" href="https://pair.withgoogle.com/images/favicon.ico">

  <script>
    !(function(){
      var url = window.location.href
      if (url.split('#')[0].split('?')[0].slice(-1) != '/' && !url.includes('.html')) window.location = url + '/'
    })()
  </script>

  <title>Can a Model Be Differentially Private and Fair?</title>
  <meta property="og:title" content="Can a Model Be Differentially Private and Fair?">
  <meta property="og:url" content="https://pair.withgoogle.com/explorables/private-and-fair/">

  <meta name="og:description" content="Training models with differential privacy stops models from inadvertently leaking sensitive data, but there's an unexpected side-effect: reduced accuracy on underrepresented subgroups.">
  <meta property="og:image" content="https://pair.withgoogle.com/explorables/images/private-and-fair.png">
  <meta name="twitter:card" content="summary_large_image">
  
	<link rel="stylesheet" type="text/css" href="../style.css">

  <link href='https://fonts.googleapis.com/css?family=Roboto+Slab:400,500,700|Roboto:700,500,300' rel='stylesheet' type='text/css'>  
  <link href="https://fonts.googleapis.com/css?family=Google+Sans:400,500,700" rel="stylesheet">

	<meta name="viewport" content="width=device-width">
</head>
<body>
  <div class='header'>
    <div class='header-left'>
      <a href='https://pair.withgoogle.com/'>
        <img src='../images/pair-logo.svg' style='width: 100px'>
      </a>
      <a href='../'>Explorables</a> 
    </div>
  </div>
  
  <h1 class='headline'>Can a Model Be Differentially Private and Fair?</h1>
  <div class="post-summary">Training models with differential privacy stops models from inadvertently leaking sensitive data, but there's an unexpected side-effect: reduced accuracy on underrepresented subgroups.</div>
<p>Imagine you want to use machine learning to suggest new bands to listen to. You could do this by having lots of people list their favorite bands and using those lists to train a model. The trained model might be quite useful and fun, but if someone pokes and prods at the model in just the right way, they could <a href="https://www.wired.com/2007/12/why-anonymous-data-sometimes-isnt/">extract</a> the music preferences of someone whose data was used to train the model. Other kinds of models are potentially vulnerable; <a href="https://bair.berkeley.edu/blog/2019/08/13/memorization/">credit card numbers</a> have been pulled out of language models and <a href="https://rist.tech.cornell.edu/papers/mi-ccs.pdf">actual faces</a> reconstructed from image models.</p>
<p>Training with <a href="https://desfontain.es/privacy/differential-privacy-awesomeness.html">differential privacy</a> limits how much information about any one data point can be extracted, but in some cases there’s an unexpected side-effect: reduced accuracy, with underrepresented subgroups disparately impacted.</p>
<div class='info-box'></div>

<p>Recall that machine learning models are typically trained with <a href="https://playground.tensorflow.org/">gradient descent</a>, a series of small steps taken to minimize an error function. To show how a model can leak its training data, we’ve trained two simple models to separate red and blue dots using two simple datasets that differ in one way: a single isolated data point in the upper left has been switched from red to blue.  </p>
<div class='epoch-graph'></div>

<p>Notice that the two models have very different boundary lines near the isolated point by the end of the training. Someone with access to the trained model might be able to <a href="https://pair.withgoogle.com/explorables/data-leak/">infer</a> if the point in the upper left is red or blue — if the color represented sensitive information, like someone’s <a href="https://gothamist.com/news/researchers-know-how-dante-de-blasio-hundreds-other-new-yorkers-voted">voting record</a>, that could be quite bad! </p>
<h3 id="protecting-the-privacy-of-training-points">Protecting the Privacy of Training Points</h3>
<p>We can prevent a single data point from drastically altering the model by <a href="http://www.cleverhans.io/privacy/2019/03/26/machine-learning-with-differential-privacy-in-tensorflow.html">adding</a> two operations to each training step (sketched in code after the list):<a class='footstart'></a></p>
<ul>
<li>⚬ Clipping the gradient (here, limiting how much the boundary line can move with each step) to bound the maximum impact a single data point can have on the final model.</li>
<li>⚬ Adding random noise to the gradient.</li>
</ul>
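<p>As a rough illustration of those two operations – not the code behind the figures in this post – here’s what a single differentially private training step might look like for a toy logistic-regression model in NumPy; the clip norm and noise multiplier below are illustrative placeholder values.</p>
<pre><code>import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One differentially private gradient step: clip each per-example
    gradient, add Gaussian noise, then average and update the weights."""
    # Per-example gradients of the logistic loss.
    preds = 1 / (1 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X        # shape (n, d)

    # 1. Clip each per-example gradient to norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # 2. Add Gaussian noise scaled to the clip norm, then average.
    noise = np.random.normal(0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)

    return w - lr * noisy_mean_grad
</code></pre>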
<p>Try <strong>increasing</strong> the random noise below. We’re now training lots of differentially private models; the more the potential models for the red and blue outlier points overlap, the more <a href="https://pair.withgoogle.com/explorables/anonymization/">plausible deniability</a> the person in the upper left has.<a class='footstart'></a>  </p>
<div class='decision-boundry'></div>  

<p>You can also try dragging the other points around and adjusting the gradient clipping. Are points in the center or outliers more likely to modify the boundary lines? In two dimensions there’s a limited number of outliers, but in higher dimensions <a href="https://observablehq.com/@tophtucker/theres-plenty-of-room-in-the-corners">more points</a> are outliers and much more information can be extracted from a trained model.</p>
<p>Correctly combined, adding gradient clipping and random noise to gradient descent makes it possible to train a model with <a href="https://desfontain.es/privacy/differential-privacy-awesomeness.html">differential privacy</a> – we can guarantee that a model trained on a given dataset is essentially indistinguishable from a model trained on the same dataset with a single point changed.</p>
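<p>Written out, using the standard notation (M for the training procedure, D and D′ for two datasets that differ in a single point, S for any set of possible trained models): M is ε-differentially private if Pr[M(D) ∈ S] ≤ e<sup>ε</sup> · Pr[M(D′) ∈ S] for every such D, D′ and S. A smaller ε means the two distributions of trained models overlap more, so the changed point is better hidden; gradient clipping plus noise actually provides a slightly relaxed (ε, δ) version of this guarantee, where δ allows a small probability of the bound failing.</p>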
<h3 id="predictions-on-outliers-change-the-most">Predictions on Outliers Change the Most</h3>
<p>What does this look like in practice? In <a href="https://arxiv.org/abs/1910.13427">Distribution Density, Tails, and Outliers in Machine Learning</a>, a series of increasingly differentially private models were trained on <a href="https://en.wikipedia.org/wiki/MNIST_database">MNIST digits</a>. Every digit in the training set was ranked according to the highest level of privacy at which a model still classified it correctly. </p>
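<p>A rough sketch of how that per-digit ranking could be computed, assuming the per-example correctness of models trained at several privacy levels has already been recorded (the epsilon values and the <code>correct</code> array below are hypothetical stand-ins, not the paper’s data):</p>
<pre><code>import numpy as np

# Hypothetical stand-ins: correct[i, j] is True when the model trained at
# privacy level epsilons[i] classifies training digit j correctly.
# epsilons runs from least private (large) to most private (small).
epsilons = np.array([8.0, 4.0, 2.0, 1.0, 0.5])
correct = np.random.rand(len(epsilons), 60000) > 0.15

def strictest_correct_level(correct, epsilons):
    """For each digit, return the smallest epsilon (strictest privacy)
    at which a model still classifies it correctly; NaN if none do."""
    rank = np.full(correct.shape[1], np.nan)
    for i in range(len(epsilons)):       # iterate toward stricter privacy
        rank[correct[i]] = epsilons[i]   # overwrite with the stricter level
    return rank

digit_rank = strictest_correct_level(correct, epsilons)
# Digits that are only classified correctly at large epsilons (weak privacy)
# are the outliers most sensitive to privacy protection.
</code></pre>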
<div class='top-bot-digits'></div>

<p>On the lower left, you can see digits labeled as “3” in the training data that look more like a “2” and a “9”. They’re very different from the other “3”s in the training data, so adding just a bit of privacy protection causes the model to no longer classify them as “3”. Under some <a href="https://arxiv.org/abs/1411.2664">specific circumstances</a>, differential privacy can actually improve how well the model generalizes to data it wasn’t trained on by limiting the influence of spurious examples.<a class='footstart'></a></p>
<p>The right side shows more canonical digits which are classified correctly even with high levels of privacy because they’re quite similar to other digits in the training data.<a class='footstart'></a></p>
<h3 id="the-accuracy-tradeoff">The Accuracy Tradeoff</h3>
<p>Limiting how much a model can learn from a single example does have a downside: it can also decrease the model’s accuracy. With <tp class='tp75'>7,500 training points</tp>, 90% accuracy on MNIST digits is only <a href="https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/server-side/private-and-fair/MNIST_DP_Model_Grid.ipynb">achievable</a> with an extremely low level of privacy protection; increasing privacy quickly lowers the model’s accuracy. </p>
<p>Collecting more training data offers a way out of this accuracy/privacy tradeoff. With <tp class='tp60'>60,000 training points,</tp> 90% accuracy can be reached with a higher privacy level than almost all <a href="https://desfontain.es/privacy/real-world-differential-privacy.html">real-world deployments</a> of differential privacy. </p>
<div class='accuracy-v-privacy-dataset_size'></div>

<p>Looking at the differences between predictions by digit class shows another potential complication: some classes are harder to identify than others. Detecting an “8” with high confidence requires more training data and/or lower privacy than detecting a “0” with high confidence. </p>
<div class='accuracy-v-privacy-class'></div>

<p>This problem is exacerbated if the training data has fewer examples of one class than the others. Trying to predict an uncommon event with a differentially private model can require an enormous amount of data.<a class='footstart'></a></p>
<h3 id="implications-for-fairness">Implications for Fairness</h3>
<p>Outliers also aren’t evenly distributed within a class. Below, MNIST digits are colored by their sensitivity to higher privacy levels and projected with <a href="https://pair-code.github.io/understanding-umap/">UMAP</a>, forming several clusters of privacy-sensitive yellow digits. It’s possible to inadvertently train a model with good overall accuracy on a class but very low accuracy on a smaller group within the class. </p>
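<p>A sketch of how a projection like the one below might be produced with the umap-learn library, assuming penultimate-layer embeddings and per-digit privacy-sensitivity scores have already been computed and saved (the file names here are hypothetical, not from this post’s notebooks):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import umap                                   # pip install umap-learn

# Assumed precomputed inputs:
#   embeddings  - (n_digits, d) penultimate-layer activations of a classifier
#   sensitivity - (n_digits,) e.g. the strictest privacy level at which each
#                 digit is still classified correctly
embeddings = np.load('embeddings.npy')        # hypothetical file names
sensitivity = np.load('sensitivity.npy')

# Project the high-dimensional embeddings down to two dimensions.
xy = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(embeddings)

# Color each projected digit by how privacy-sensitive it is.
plt.scatter(xy[:, 0], xy[:, 1], c=sensitivity, cmap='viridis', s=2)
plt.colorbar(label='privacy sensitivity')
plt.show()
</code></pre>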
<div class='umap-digit'></div>

<p>There’s nothing that makes a “1” slanted to the left intrinsically harder to classify, but because there are only a few slanted “1”s in the training data, it’s difficult to make a model that classifies them accurately without leaking information. </p>
<p>This disparate impact doesn’t just happen in datasets of differently drawn digits: increased levels of differential privacy in a range of image and language models <a href="https://arxiv.org/pdf/1905.12101.pdf">disproportionately decreased accuracy</a> on underrepresented subgroups. And adding differential privacy to a medical model <a href="https://arxiv.org/pdf/2010.06667v1.pdf">reduced</a> the influence of Black patients’ data on the model while increasing the influence of white patients’ data. </p>
<p>Lowering the privacy level might not help non-majoritarian data points either – they’re the ones most <a href="https://arxiv.org/abs/1906.00389">susceptible</a> to having their information exposed. Again, escaping the accuracy/privacy tradeoff requires collecting more data – this time from underrepresented subgroups.<a class='footstart'></a>   </p>
<h3 id="more-reading">More Reading</h3>
<p>There are deep connections between <a href="https://arxiv.org/abs/1906.05271">generalization, memorization and privacy</a> that are still not well understood. Slightly changing the privacy constraints, for example, can create new options. If public, unlabeled data exists, a “<a href="http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html">Private Aggregation of Teacher Ensembles</a>” could be used instead of gradient clipping and random noise to train a differentially private model with a <a href="https://arxiv.org/pdf/2106.12576.pdf">smaller disparate impact</a> on accuracy. </p>
<p>Finding ways to increase privacy with a smaller impact on accuracy is an active area of research – <a href="https://arxiv.org/abs/2007.14191">model architectures</a> designed with privacy in mind and better <a href="https://arxiv.org/pdf/2107.06499.pdf">dataset cleaning</a> look like promising avenues.  </p>
<p>There are also additional <a href="http://proceedings.mlr.press/v97/jagielski19a/jagielski19a.pdf">accuracy/privacy/fairness</a> tradeoffs beyond what’s discussed in this post. Even if a differentially private model doesn’t have large accuracy gaps between subgroups, enforcing <a href="https://pair.withgoogle.com/explorables/measuring-fairness/">fairness metrics</a> can reduce privacy or accuracy.</p>
<p>This post focuses on protecting the privacy of individual data points. In practice more work might be necessary to ensure that the <a href="https://queue.acm.org/detail.cfm?id=3501293#:~:text=Computing%20and%20Verifying%20Anonymous%20Aggregates">privacy of users</a> – who could contribute much more than a single data point each – is also protected.    </p>
<p>These questions are also significant outside of machine learning. <a href="https://arxiv.org/abs/2105.07513">Allocating resources</a> based on a differentially private dataset – with no machine learning model involved – can also disproportionately affect different groups. The 2020 US Census is the first to use differential privacy, and this could have a wide range of impacts, including how <a href="https://statmodeling.stat.columbia.edu/2021/10/20/how-does-post-processed-differentially-private-census-data-affect-redistricting-how-concerned-should-we-be-about-gerrymandering-with-the-new-das/">congressional districts</a> are drawn. </p>
<h3 id="credits">Credits</h3>
<p>Adam Pearce // January 2022</p>
<p>Thanks to Abhradeep Thakurta, Andreas Terzis, Andy Coenen, Asma Ghandeharioun, Brendan McMahan, Ellen Jiang, Emily Reif, Fernanda Viégas, James Wexler, Kevin Robinson, Matthew Jagielski, Martin Wattenberg, Meredith Morris, Miguel Guevara, Nicolas Papernot and Nithum Thain for their help with this piece.</p>
<h3 id="footnotes">Footnotes</h3>
<p><a class='footend'></a> To speed up training at the cost of looser privacy bounds, gradient clipping and noising can be applied to groups of data points instead of to each data point individually. </p>
<p><a class='footend'></a> The “ε” in ε-differential privacy essentially <a href="https://desfontain.es/privacy/differential-privacy-in-more-detail.html">measures</a> the overlap in two distributions after changing a single data point. </p>
<p><a class='footend'></a> <a href="https://openreview.net/forum?id=BJgnXpVYwS">Clipping</a> and <a href="https://arxiv.org/pdf/1511.06807.pdf">noising</a> are also used outside of differential privacy as regularization techniques to improve accuracy. <br><br> In addition to accidentally mislabeled examples, differential privacy can also provide some protection against <a href="https://dp-ml.github.io/2021-workshop-ICLR/files/23.pdf">data poisoning attacks</a>.  </p>
<p><a class='footend'></a> While visually similar digits aren’t necessarily interpreted in similar ways by the model, the clustering of visually similar digits in the UMAP diagram at the bottom of the page (which projects embeddings from the penultimate layer of a digit classifier) suggests there is a close connection here. </p>
<p><a class='footend'></a> Rebalancing the dataset without collecting more data doesn’t avoid this privacy/accuracy tradeoff – upsampling the smaller class reduces privacy and downsampling the larger class reduces data and lowers accuracy.  </p>
<p><a class='footend'></a> See the appendix on <a href="#appendix-subgroup-size-and-accuracy">Subgroup Size and Accuracy</a> for more detail.   </p>
<h3 id="appendix-subgroup-size-and-accuracy">Appendix: Subgroup Size and Accuracy</h3>
<p>How, exactly, do the amount of training data, the privacy level and the percentage of data from a subgroup impact accuracy? Using MNIST digits rotated 90° as a stand-in for a smaller subgroup, we can see how the accuracy of a series of simple <a href="https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/server-side/private-and-fair/MNIST_Generate_UMAP.ipynb">models</a> that classify “1”s and “7”s changes based on these attributes. </p>
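<p>A rough sketch of how such a dataset could be assembled – the Keras MNIST loader and the 5% fraction here are just convenient placeholders, not necessarily the exact setup behind these figures:</p>
<pre><code>import numpy as np
from tensorflow.keras.datasets import mnist   # one convenient MNIST loader

(x, y), _ = mnist.load_data()

# Keep only the "1"s and "7"s for a simple binary classification task.
keep = (y == 1) | (y == 7)
x, y = x[keep], y[keep]

# Rotate a small fraction of the digits 90 degrees as a stand-in for an
# underrepresented subgroup.
subgroup_fraction = 0.05                      # try 0.05, 0.10, 0.20, ...
rng = np.random.default_rng(0)
n_rotated = int(subgroup_fraction * len(x))
rotated_idx = rng.choice(len(x), size=n_rotated, replace=False)
x[rotated_idx] = np.rot90(x[rotated_idx], k=1, axes=(1, 2))

is_rotated = np.zeros(len(x), dtype=bool)
is_rotated[rotated_idx] = True
# x, y and is_rotated can now be fed to any training loop, with accuracy
# reported separately for the rotated and unrotated digits.
</code></pre>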
<p>On the far left, models without any rotated digits in the training data never classify those digits more accurately than random guessing. When 5% of the training digits are rotated, a small slice of models – those with lots of training data and low privacy – can accurately classify rotated digits. </p>
<div class='rotated-accuracy-heatmap'></div>

<p>Increasing the proportion of rotated digits to 10% or 20% or even more makes it possible to train a higher privacy model that performs well on both types of digits with the same amount of training data.  </p>
<p>Click on one of the models above and you can see how the accuracy gap shifts as the number of training points, the privacy level and the percentage of rotated digits are independently changed.</p>
<div class='rotated-accuracy'></div>

<p>Intuitively, adding more training data yields diminishing marginal gains in accuracy. Accuracy on the smaller group of rotated digits, which may just be on the cusp of being learned, falls off faster as the effective amount of training data is decreased — a disparate reduction in accuracy.</p>
<h3 id="more-explorables">More Explorables</h3>
<p id='recirc'></p>
<link rel="stylesheet" href="style.css">

<script type='module'>
  import npyjs from '../third_party/npyjs.js' 
  window.npyjs = npyjs
</script>
<script src='../third_party/d3_.js'></script>
<script src='../third_party/d3-scale-chromatic.v1.min.js'></script>
<script src='../third_party/alea.js'></script>


<script type='module' src='util.js'></script>

<script type='module' src='2d-privacy.js'></script>

<script type='module' src='top-bot-digits.js'></script>
<script type='module' src='accuracy-v-privacy-class.js'></script>
<script type='module' src='accuracy-v-privacy-dataset_size.js'></script>
<script type='module' src='umap-digit.js'></script>

<script type='module' src='rotated-accuracy.js'></script>


<script type='module' src='footnote.js'></script>
<script src='../third_party/recirc.js'></script>
</body>

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-138505774-1"></script>
<script>
  if (window.location.origin === 'https://pair.withgoogle.com'){
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'UA-138505774-1');
  }
</script>

<script>
  // Tweaks for displaying in an iframe
  if (window !== window.parent){
    
    // Open links in a new tab
    Array.from(document.querySelectorAll('a'))
      .forEach(e => {
        // skip anchor links
        if ((e.getAttribute('href') || '')[0] == '#') return

        e.setAttribute('target', '_blank')
        e.setAttribute('rel', 'noopener noreferrer')
      })

    // Remove recirc h3
    Array.from(document.querySelectorAll('h3'))
      .forEach(e => {
        if (e.textContent != 'More Explorables') return

        e.parentNode.removeChild(e)
      })

    // Remove recirc container
    var recircEl = document.querySelector('#recirc')
    recircEl.parentNode.removeChild(recircEl)
  }
</script>

</html>