Differential Privacy has Bounded Impact on Fairness in Classification
Machine Learning in Information Networks, Inria Lille - Nord Europe, Centre de Recherche en Informatique, Signal et Automatique de Lille, UMR 9189
We theoretically study the impact of differential privacy on fairness in classification. We prove that, for a given class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the model's parameters. We leverage this Lipschitz property to prove a high-probability bound showing that, given enough examples, the fairness level of private models is close to that of their non-private counterparts.
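In schematic form, and with notation introduced here purely for illustration (F for a group fairness measure, \theta for the model parameters, \theta^{priv} for the output of a differentially private training mechanism, and n for the number of examples; none of these symbols are fixed by the abstract itself), the two results can be read as:

\begin{align*}
  % Pointwise Lipschitz continuity of the fairness measure:
  % the constant L(\theta) may depend on the point \theta,
  % which is what makes the property pointwise rather than uniform.
  |F(\theta) - F(\theta')| &\le L(\theta)\,\|\theta - \theta'\|_2, \\
  % High-probability closeness of private and non-private fairness,
  % where the gap \beta(n, \epsilon, \delta) shrinks as n grows:
  \Pr\!\left( |F(\theta^{\mathrm{priv}}) - F(\theta)| \le \beta(n, \epsilon, \delta) \right) &\ge 1 - \gamma.
\end{align*}

This sketch only mirrors the structure of the guarantee stated in the abstract; the precise form of L(\theta), \beta, and \gamma is given by the theorems in the paper.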