
Active Learning with Near Misses


Nela Gurevich, Shaul Markovitch and Ehud Rivlin. Active Learning with Near Misses. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages 362-367, Boston, MA, 2006.


Abstract

Assume that we are trying to build a visual recognizer for a particular class of objects (chairs, for example) using existing induction methods. Assume the assistance of a human teacher who can label an image of an object as a positive or a negative example. As positive examples, we can obviously use images of real chairs. It is not clear, however, what types of objects we should use as negative examples. This is an example of a common problem where the concept we are trying to learn represents a small fraction of a large universe of instances. In this work we suggest learning with the help of "near misses": negative examples that differ from the learned concept in only a small number of significant points, and we propose a framework for automatic generation of such examples. We show that generating near misses in the feature space is problematic in some domains, and propose a methodology for generating examples directly in the instance space using "modification operators": functions over the instance space that produce new instances by slightly modifying existing ones. The generated instances are evaluated by mapping them into the feature space and measuring their utility using known active learning techniques. We apply the proposed framework to the task of learning visual concepts from range images. We examine the problem of defining modification operators over the instance space of range images and solve it by using an intermediate instance space, the "functional representation space". The efficiency of the proposed framework for object recognition is demonstrated by testing it on real-world recognition tasks.
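The generation loop the abstract describes can be sketched in a few lines: apply modification operators to known positives, map each candidate into feature space, and select the one with the highest active-learning utility. The sketch below is a toy illustration under stated assumptions, not the paper's implementation: instances are 2-D points, the feature map and the uncertainty-style utility are hypothetical stand-ins, and the operator names are invented for the example.

```python
import random

def stretch(x, factor=1.1):
    # Modification operator: slightly scale an instance.
    return (x[0] * factor, x[1] * factor)

def jitter(x, delta=0.1):
    # Modification operator: slightly perturb an instance.
    return (x[0] + random.uniform(-delta, delta),
            x[1] + random.uniform(-delta, delta))

OPERATORS = [stretch, jitter]

def features(x):
    # Map an instance into feature space (here: its squared radius).
    return x[0] ** 2 + x[1] ** 2

def utility(f, boundary=1.0):
    # Illustrative active-learning utility: prefer candidates whose
    # features lie near the current decision boundary (uncertainty
    # sampling stands in for the "known techniques" of the abstract).
    return -abs(f - boundary)

def generate_near_miss(positives, n_candidates=50):
    # Apply random modification operators to known positive instances
    # and return the candidate the learner is most uncertain about;
    # this is the instance to present to the human teacher.
    candidates = [random.choice(OPERATORS)(random.choice(positives))
                  for _ in range(n_candidates)]
    return max(candidates, key=lambda x: utility(features(x)))

positives = [(0.5, 0.5), (0.7, 0.1), (0.2, 0.8)]
query = generate_near_miss(positives)
print(query)  # candidate instance to show the teacher for labeling
```

The key point the sketch captures is that candidates are generated in the instance space (by the operators) but scored in the feature space, which is what lets the framework sidestep domains where feature-space generation is problematic.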


Keywords: Active Learning
Secondary Keywords: Vision
Online version: http://www.cs.technion.ac.il/~shaulm/papers/pdf/Gurevich-Markovitch-Rivlin-aaai2006.pdf
Bibtex entry:
 @inproceedings{Gurevich:2006:ALN,
  Author = {Nela Gurevich and Shaul Markovitch and Ehud Rivlin},
  Title = {Active Learning with Near Misses},
  Year = {2006},
  Booktitle = {Proceedings of the Twenty-First National Conference on Artificial Intelligence},
  Pages = {362--367},
  Address = {Boston, MA},
  Url = {http://www.cs.technion.ac.il/~shaulm/papers/pdf/Gurevich-Markovitch-Rivlin-aaai2006.pdf},
  Keywords = {Active Learning},
  Secondary-keywords = {Vision},
  Abstract = {
    Assume that we are trying to build a visual recognizer for a
    particular class of objects---chairs, for example---using existing
    induction methods. Assume the assistance of a human teacher who
    can label an image of an object as a positive or a negative
    example. As positive examples, we can obviously use images of real
    chairs. It is not clear, however, what types of objects we should
    use as negative examples. This is an example of a common problem
    where the concept we are trying to learn represents a small
    fraction of a large universe of instances. In this work we suggest
    learning with the help of \emph{near misses}---negative examples
    that differ from the learned concept in only a small number of
    significant points, and we propose a framework for automatic
    generation of such examples. We show that generating near misses
    in the feature space is problematic in some domains, and propose a
    methodology for generating examples directly in the instance space
    using \emph{modification operators}---functions over the instance
    space that produce new instances by slightly modifying existing
    ones. The generated instances are evaluated by mapping them into
    the feature space and measuring their utility using known active
    learning techniques. We apply the proposed framework to the task
    of learning visual concepts from range images. We examine the
    problem of defining modification operators over the instance space
    of range images and solve it by using an intermediate instance
    space---the \emph{functional representation space}. The efficiency
    of the proposed framework for object recognition is demonstrated
    by testing it on real-world recognition tasks.
  }
}