Amit Daniely (Hebrew University of Jerusalem)
Wednesday, 22.10.2014, 12:30
It is still unknown how to show hardness of learning problems. There are huge gaps between the known upper and lower bounds in this area. The main obstacle is that standard NP-hardness reductions do not yield hardness of learning; all known lower bounds rely on (unproven) cryptographic assumptions.
We introduce a new technique to this area, using reductions from problems that are hard on average. We put forward a natural generalization of Feige's assumption about the complexity of refuting random $K$-SAT instances. Under this assumption we show that:
1. Learning DNFs is hard.
2. Learning an intersection of a super-logarithmic number of halfspaces is hard.
In addition, the same assumption implies the hardness of virtually all learning problems previously shown hard under cryptographic assumptions.
Joint work with Nati Linial and Shai Shalev-Shwartz.