Ron Begleiter, PhD seminar lecture
Wednesday, 13.6.2012, 12:30
Supervised learning, one of the core models of machine learning, is concerned with inducing rules from a given sample of examples. Active learning is a variant of this model in which the learning algorithm can choose which examples to learn from. The performance of such active learners is measured by the "speed" of learning: the number of labeled examples they must consume in order to produce a competitive solution.
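To make the "speed" measure concrete, here is a small sketch (not from the talk; all names are illustrative) of the classic 1-D threshold example: a passive learner needs roughly n labels, whereas an active learner can binary-search the sorted pool and locate the threshold with about log2(n) label queries.

```python
# Illustrative sketch: active learning of a 1-D threshold classifier.
# Binary search over a sorted pool finds the decision threshold with
# ~log2(n) calls to the labeling oracle -- the label complexity that
# measures an active learner's "speed".

def active_threshold(points, query_label):
    """Return (t, queries): points[:t] are labeled 0, points[t:] are 1.
    `points` must be sorted; `query_label(x)` is the expensive oracle."""
    lo, hi, queries = 0, len(points), 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if query_label(points[mid]) == 1:
            hi = mid          # threshold is at or before mid
        else:
            lo = mid + 1      # threshold is strictly after mid
    return lo, queries

points = list(range(1024))
t, q = active_threshold(points, lambda x: int(x >= 700))
# t == 700, using at most 10 (= log2(1024)) label queries
```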
In this talk, I will present a novel active learning algorithmic concept, and discuss two problems for which it provides state-of-the-art theoretical guarantees.
The algorithmic idea is based on a novel characterization of functions over pairs of learning hypotheses, termed smooth relative regret approximations (SRRA). Such functions lead to a simple iterative meta-algorithm that converges exponentially fast to a competitive solution. SRRAs turn out to be very appealing for problems in which the signals stipulate local structural constraints. Specific problems we were able to address include learning to rank from pairwise preferences and semi-supervised clustering. I will describe these problems, present the specific SRRA constructions, and explain why any method based on standard active-learning analysis arguments fails to be useful for these problems. Time permitting, I will also show a corresponding empirical evaluation.
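The iterative structure of such a meta-algorithm can be sketched as follows. This is a minimal schematic, not the talk's actual algorithm: all names are hypothetical, the hypothesis space is a grid on the real line, and `build_srra` returns an exact (zero-error) relative-regret function around the current hypothesis, standing in for an SRRA fitted from queried signals.

```python
# Schematic sketch of an SRRA-style iteration (names hypothetical).
# Each round builds an approximation of g -> risk(g) - risk(h) around
# the current hypothesis h, then moves to that approximation's minimizer.

def build_srra(h, risk):
    # Stand-in for an SRRA fitted near h; here the relative regret is
    # computed exactly (zero approximation error), for illustration only.
    return lambda g: risk(g) - risk(h)

def minimize(f, grid):
    return min(grid, key=f)   # toy minimizer over a finite hypothesis grid

def srra_meta(h0, risk, rounds=8):
    grid = [i / 1000 for i in range(1001)]
    h = h0
    for _ in range(rounds):
        f = build_srra(h, risk)   # query signals near h, fit approximation
        h = minimize(f, grid)     # next iterate: the approximation's minimizer
    return h

risk = lambda h: (h - 0.7) ** 2   # toy risk with optimum at 0.7
h = srra_meta(0.0, risk)
# converges to the grid point at the optimum, h == 0.7
```

With a zero-error oracle the loop trivially lands on the optimum; the point of the SRRA analysis is that a merely *smooth* approximation, fit from few queried signals, still drives this iteration to a competitive solution exponentially fast.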