Theoretical Foundations of Selective Prediction

Speaker: Yair Wiener, Ph.D. Thesis Seminar
Date: Wednesday, 12.6.2013, 13:00
Place: Taub 601
Advisor: Prof. Ran El-Yaniv

In selective prediction, a predictor (e.g., a classifier or regressor) is allowed to abstain from predicting on part of the domain. The objective is to improve the accuracy of predictions by compromising coverage. This talk will focus on the theoretical foundations of selective prediction and its applications to selective classification, selective regression, and active learning.

A classical result in statistical learning theory is that the excess risk of a classifier can be written as a sum of estimation and approximation errors. In this talk we will show how the estimation error can be reduced to any desired level by sufficiently compromising coverage. We will present a new family of selective prediction strategies and analyze their coverage rates. These strategies can ensure zero estimation error, thus achieving the same performance as the best predictor in hindsight over the accepted domain.

These developments rely on a new complexity measure termed characterizing set complexity. This measure characterizes coverage rates in realizable and agnostic selective classification, as well as label complexity speedup in active learning. Using classical results from probabilistic geometry, we will bound this complexity measure for a number of interesting settings. The resulting new technique can be used to upper bound Hanneke's disagreement coefficient and yields an exponential label complexity speedup for actively learning general (non-homogeneous) linear classifiers when the data distribution is an arbitrary high-dimensional mixture of Gaussians.
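The core accuracy-for-coverage trade-off can be illustrated with a small sketch. This is a hypothetical toy setup, not the strategies analyzed in the thesis: a 1-D problem where the label is the sign of x, label noise is concentrated near the decision boundary, and the predictor abstains inside a margin of width theta around the boundary. Shrinking coverage (larger theta) rejects the hardest points and raises accuracy on the accepted domain.

```python
import random

random.seed(0)

def sample(n):
    """Hypothetical 1-D data: true label is sign(x); the label is
    flipped with probability that grows near the boundary x = 0."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        flip = random.random() < 0.4 * (1 - abs(x))  # noisy near 0
        y = (1 if x >= 0 else -1) * (-1 if flip else 1)
        data.append((x, y))
    return data

def selective_accuracy(data, theta):
    """Predict sign(x), but abstain when |x| < theta (low confidence).
    Returns (accuracy on accepted points, coverage)."""
    accepted = [(x, y) for x, y in data if abs(x) >= theta]
    if not accepted:
        return None, 0.0
    correct = sum(1 for x, y in accepted if (1 if x >= 0 else -1) == y)
    return correct / len(accepted), len(accepted) / len(data)

data = sample(20000)
for theta in (0.0, 0.3, 0.6):
    acc, cov = selective_accuracy(data, theta)
    print(f"theta={theta:.1f}  coverage={cov:.2f}  accuracy={acc:.3f}")
```

As theta grows, coverage drops while accuracy on the accepted region rises, which is the qualitative behavior the talk's coverage-rate analysis makes precise.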
