Publication: On Learning vs. Refutation
Date
2017
Published Version
The Harvard community has made this article openly available.
Citation
Vadhan, Salil P. 2017. On Learning vs. Refutation. Proceedings of Machine Learning Research, vol. 65: 1835-1848.
Abstract
Building on the work of Daniely et al. (STOC 2014, COLT 2016), we study the connection between computationally efficient PAC learning and refutation of constraint satisfaction problems. Specifically, we prove that for every concept class P, PAC-learning P is polynomially equivalent to "random-right-hand-side-refuting" ("RRHS-refuting") a dual class P∗, where RRHS-refutation of a class Q refers to refuting systems of equations in which the constraints are (worst-case) functions from the class Q but the right-hand sides of the equations are uniform and independent random bits. The reduction from refutation to PAC learning can be viewed as an abstraction of (part of) the work of Daniely, Linial, and Shalev-Shwartz (STOC 2014). The converse, however, is new, and is based on a combination of techniques from pseudorandomness (Yao '82) with boosting (Schapire '90). In addition, we show that PAC-learning the class of DNF formulas is polynomially equivalent to PAC-learning its dual class DNF∗, and thus PAC-learning DNF is equivalent to RRHS-refutation of DNF, suggesting an avenue to obtain stronger lower bounds for PAC-learning DNF than the quasipolynomial lower bound obtained by Daniely and Shalev-Shwartz (COLT 2016) assuming the hardness of refuting k-SAT.
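To make the notion of an RRHS instance concrete, here is a minimal Python sketch, not taken from the paper: it instantiates the constraint class Q as 3-variable parities (a hypothetical choice for illustration only) and pairs worst-case constraints with uniform random right-hand-side bits, as the abstract describes. The brute-force satisfiability check below stands in for what an efficient RRHS-refuter must certify without exhaustive search.

```python
import itertools
import random

# Illustrative sketch (assumptions, not the paper's construction):
# Q = parities over 3 of the n Boolean variables; an RRHS instance
# pairs m such constraints with uniform, independent random RHS bits.

def parity(subset):
    """Constraint from the (assumed) class Q: XOR of the chosen variables."""
    return lambda x: sum(x[i] for i in subset) % 2

def rrhs_instance(n, m, rng):
    """m constraints from Q with uniform random right-hand sides."""
    constraints = [frozenset(rng.sample(range(n), 3)) for _ in range(m)]
    rhs = [rng.randrange(2) for _ in range(m)]
    return constraints, rhs

def brute_force_satisfiable(n, constraints, rhs):
    """Exhaustive check over all 2^n assignments (exponential time)."""
    for x in itertools.product((0, 1), repeat=n):
        if all(parity(s)(x) == b for s, b in zip(constraints, rhs)):
            return True
    return False

rng = random.Random(0)
n, m = 8, 40
cons, rhs = rrhs_instance(n, m, rng)
# Each of the 2^8 assignments satisfies all 40 random RHS bits with
# probability 2^-40, so the system is unsatisfiable w.h.p. (~1 - 2^-32).
print(brute_force_satisfiable(n, cons, rhs))
```

The point of the example is the asymmetry the abstract exploits: with many more random equations than variables, the system is overwhelmingly likely to be unsatisfiable, and an RRHS-refuter must detect this efficiently rather than by the exponential search shown here.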
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth at Terms of Service