Publication: Separating Computational and Statistical Differential Privacy in the Client-Server Model
Date
2016
Citation
Bun, Mark, Yi-Hsiu Chen, and Salil Vadhan. 2016. Separating Computational and Statistical Differential Privacy in the Client-Server Model. In Proceedings of the 14th Theory of Cryptography Conference (TCC 2016-B), Beijing, China, November 1-3, 2016.
Abstract
Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al. (CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz, and Yerukhimovich (TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks. Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness-indistinguishable proofs (zaps) for NP, there is in fact a computational task in the client-server model that can be efficiently performed with CDP but is infeasible to perform with information-theoretic differential privacy.
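For context, here is a minimal sketch of the standard definitions the abstract refers to (not part of the original record; notation follows the usual convention in which adjacent datasets x, x' differ in a single individual's record). A randomized mechanism M is (ε, δ)-differentially private if for all adjacent datasets x, x' and every set S of outputs,

\[
  \Pr[M(x) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(x') \in S] + \delta ,
\]

whereas the indistinguishability-based computational relaxation in the style of Mironov et al. (IND-CDP) only requires the analogous bound to hold against efficient distinguishers: for every probabilistic polynomial-time adversary A,

\[
  \Pr[A(M_{\kappa}(x)) = 1] \;\le\; e^{\varepsilon} \cdot \Pr[A(M_{\kappa}(x')) = 1] + \mathrm{negl}(\kappa),
\]

where κ is a security parameter and negl(κ) denotes a negligible function. The paper's separation exhibits a task achievable under the second definition but not the first in the client-server setting.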
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the repository's Terms of Service.