Private Hypothesis Selection

Bibliographic Details
Published in: IEEE Transactions on Information Theory, Vol. 67, No. 3, pp. 1981-2000
Main Authors: Bun, Mark; Kamath, Gautam; Steinke, Thomas; Wu, Zhiwei Steven
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2021
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2021.3049802

Summary: We provide a differentially private algorithm for hypothesis selection. Given samples from an unknown probability distribution P and a set of m probability distributions $\mathcal{H}$, the goal is to output, in an $\varepsilon$-differentially private manner, a distribution from $\mathcal{H}$ whose total variation distance to P is comparable to that of the best such distribution (which we denote by $\alpha$). The sample complexity of our basic algorithm is $O\left(\frac{\log m}{\alpha^{2}} + \frac{\log m}{\alpha\varepsilon}\right)$, representing a minimal cost for privacy when compared to the non-private algorithm. We can also handle infinite hypothesis classes $\mathcal{H}$ by relaxing to $(\varepsilon,\delta)$-differential privacy. We apply our hypothesis selection algorithm to give learning algorithms for a number of natural distribution classes, including Gaussians, product distributions, sums of independent random variables, piecewise polynomials, and mixture classes. Our hypothesis selection procedure allows us to generically convert a cover for a class into a learning algorithm, complementing known learning lower bounds which are stated in terms of the packing number of the class. As covering and packing numbers are often closely related, for constant $\alpha$ our algorithms achieve the optimal sample complexity for many classes of interest. Finally, we describe an application to private distribution-free PAC learning.
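
Illustrative sketch: the abstract describes privately selecting one hypothesis from $\mathcal{H}$ given samples from P; a standard building block for such an $\varepsilon$-differentially private selection step is the exponential mechanism applied to a bounded-sensitivity score. The Python sketch below is a minimal, hypothetical illustration of that idea under simplifying assumptions: the scoring rule (counting, for each candidate, the samples on which it assigns the highest likelihood) has per-sample sensitivity 1 but is a simplified stand-in, not the paper's Scheffé-comparison-based utility, and all function names are our own.

# Hypothetical sketch of eps-DP hypothesis selection via the exponential
# mechanism. The scoring rule below is a simplified stand-in with per-sample
# sensitivity 1; the paper's basic algorithm builds its utility from
# Scheffe-set comparisons instead.
import numpy as np

def exponential_mechanism_select(scores, epsilon, sensitivity=1.0, rng=None):
    """Sample an index with probability proportional to exp(eps * score / (2 * sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                      # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

def private_hypothesis_selection(samples, hypotheses, epsilon, rng=None):
    """samples: ints in {0,...,k-1}; hypotheses: list of m length-k probability vectors."""
    H = np.asarray(hypotheses)                  # shape (m, k)
    likelihoods = H[:, samples]                 # shape (m, n): H_i(x_t) for every sample x_t
    winners = likelihoods.argmax(axis=0)        # which candidate best explains each sample
    # Changing one sample moves at most one unit of count between two candidates,
    # so each coordinate of this score has sensitivity 1.
    scores = np.bincount(winners, minlength=len(H))
    return exponential_mechanism_select(scores, epsilon, sensitivity=1.0, rng=rng)

# Usage: three candidate distributions over a 4-element domain, samples drawn from the first.
rng = np.random.default_rng(0)
hypotheses = [
    [0.70, 0.10, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.10, 0.10, 0.70],
]
samples = rng.choice(4, size=500, p=hypotheses[0])
chosen = private_hypothesis_selection(samples, hypotheses, epsilon=1.0, rng=rng)
print("privately selected hypothesis index:", chosen)

Run as-is, this selection almost surely returns index 0, the candidate matching the sampling distribution; shrinking epsilon flattens the selection probabilities and raises the chance of a worse pick, mirroring the $\frac{\log m}{\alpha\varepsilon}$ privacy term in the sample complexity above.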