
The Taub Faculty of Computer Science Events and Talks

Approximating the Number of Relevant Variables in a Parity Implies Proper Learning
George Haddad (M.Sc. Thesis Seminar)
Monday, 13.04.2026, 12:30
Taub 601
Advisor: Prof. Nader Bshouty

Consider a model in which we can access a parity function through uniformly distributed random labeled examples in the presence of random classification noise. In this thesis, we study learning in this model and show that approximating the number of relevant variables of a parity function is as hard as properly learning it.
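As a concrete illustration of this example model, the following minimal Python sketch builds a sampler for a sparse parity whose labels are flipped with a fixed noise rate. All names here (`make_noisy_parity_oracle`, the chosen relevant set, the noise rate) are hypothetical, chosen only to illustrate the access model, not taken from the thesis:

```python
import random

def make_noisy_parity_oracle(relevant, n, noise_rate, seed=0):
    """Return a sampler for the parity over the index set `relevant`
    (a subset of range(n)). Each call draws x uniformly from {0,1}^n
    and flips the label independently with probability `noise_rate`
    (random classification noise)."""
    rng = random.Random(seed)

    def sample():
        x = [rng.randint(0, 1) for _ in range(n)]
        label = sum(x[i] for i in relevant) % 2   # XOR of the relevant bits
        if rng.random() < noise_rate:             # noisy label with prob. noise_rate
            label ^= 1
        return x, label

    return sample

# Example: a 3-sparse parity on n = 10 variables with 10% label noise.
oracle = make_noisy_parity_oracle({0, 3, 5}, n=10, noise_rate=0.1)
x, y = oracle()
```

Here the number of relevant variables of the hidden parity is $d(f)=|\{0,3,5\}|=3$; the learner sees only the labeled samples, not the set itself.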

More specifically, let $\gamma : \mathbb{R}^+ \to \mathbb{R}^+$ be any strictly increasing function satisfying $\gamma(x) \ge x$. In our first result, we show that from any polynomial-time algorithm that returns a $\gamma$-approximation $D$ (i.e., $\gamma^{-1}(d(f)) \leq D \leq \gamma(d(f))$) of the number of relevant variables~$d(f)$ of any parity function $f$, we can, in polynomial time, construct a solution to the long-standing open problem of polynomial-time learning $k(n)$-sparse parities (parities with $k(n)\le n$ relevant variables), where $k(n) = \omega_n(1)$.
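The two-sided guarantee can be made concrete with a small sketch. Taking $\gamma(x)=x^2$ as an example (strictly increasing, and $\gamma(x)\ge x$ for $x\ge 1$), a value $D$ is a $\gamma$-approximation of $d(f)$ exactly when $\sqrt{d(f)} \le D \le d(f)^2$. The helper below is purely illustrative and not from the thesis:

```python
def is_gamma_approx(D, d, gamma, gamma_inv):
    """Check the two-sided guarantee gamma^{-1}(d) <= D <= gamma(d),
    i.e., whether D is a gamma-approximation of the true value d."""
    return gamma_inv(d) <= D <= gamma(d)

# Example choice: gamma(x) = x^2 on x >= 1, so gamma^{-1}(x) = sqrt(x).
gamma = lambda x: x * x
gamma_inv = lambda x: x ** 0.5

# With d(f) = 4, any D in [2, 16] qualifies:
print(is_gamma_approx(3, 4, gamma, gamma_inv))   # True:  2 <= 3 <= 16
print(is_gamma_approx(1, 4, gamma, gamma_inv))   # False: 1 < 2
```

Note how weak this guarantee is allowed to be; the result says that even such a loose estimate of $d(f)$ already suffices to learn sparse parities.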

In our second result, we show that from any $T(n)$-time algorithm that, for any parity $f$, returns a $\gamma$-approximation of the number of relevant variables $d(f)$ of $f$, we can, in polynomial time, construct a $\mathrm{poly}(\Gamma(n))\,T(\Gamma(n)^2)$-time algorithm that properly learns parities, where $\Gamma(x)=\gamma(\gamma(x))$.

If $T(\Gamma(n)^2)=\exp(o(n/\log n))$, this would resolve another long-standing open problem: properly learning parities in the presence of random classification noise in time $\exp(o(n/\log n))$.