Localization And Homing Using Combinations of Model Views

Ronen Basri and Ehud Rivlin.
Localization and Homing Using Combinations of Model Views.
Artif. Intell., 78(1-2):327-354, 1995

Online Version

A PDF version is available for download.

Abstract

Navigation involves recognizing the environment, identifying the current position within the environment, and reaching particular positions. We present a method for localization (the act of recognizing the environment), positioning (the act of computing the exact coordinates of a robot in the environment), and homing (the act of returning to a previously visited position) from visual input. The method is based on representing the scene as a set of 2D views and predicting the appearances of novel views by linear combinations of the model views. The method accurately approximates the appearance of scenes under weak-perspective projection. Analysis of this projection as well as experimental results demonstrate that in many cases this approximation is sufficient to accurately describe the scene. When the weak-perspective approximation is invalid, either a larger number of models can be acquired or an iterative solution to account for the perspective distortions can be employed. The method has several advantages over other approaches. It uses relatively rich representations; the representations are 2D rather than 3D; and localization can be done from only a single 2D view without calibration. The same principal method is applied for both the localization and positioning problems, and a simple “qualitative” algorithm for homing is derived from this method.
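The core idea, that under weak-perspective (scaled orthographic) projection the coordinates of a novel view of a rigid scene are a fixed linear combination of the point coordinates in two stored model views, can be illustrated with a small self-contained sketch. The toy scene, rotation angles, and helper names below are illustrative assumptions, not taken from the paper; the coefficients are recovered by ordinary least squares over corresponding feature points.

```python
import math

def rot_y(t):
    # rotation matrix about the vertical (y) axis
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def project(R, pts):
    # weak-perspective projection: rotate, then drop the depth coordinate
    return [(sum(R[0][k] * p[k] for k in range(3)),
             sum(R[1][k] * p[k] for k in range(3))) for p in pts]

def solve_lsq(A, b):
    # tiny least-squares solver: normal equations A^T A x = A^T b,
    # solved by Gaussian elimination with partial pivoting
    m, n = len(A), len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
         + [sum(A[r][i] * b[r] for r in range(m))] for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# a rigid "scene" of feature points (arbitrary toy coordinates)
scene = [(1.0, 0.0, 2.0), (0.0, 1.5, 1.0), (-1.0, 0.5, 0.5),
         (2.0, -1.0, 1.5), (0.5, 2.0, -1.0), (-1.5, -0.5, 2.5)]

view1 = project(rot_y(0.0), scene)   # first stored model view
view2 = project(rot_y(0.4), scene)   # second stored model view
novel = project(rot_y(0.7), scene)   # view currently seen by the robot

# basis per point: (x1, y1, x2, 1); y2 is linearly dependent on these
A = [[v1[0], v1[1], v2[0], 1.0] for v1, v2 in zip(view1, view2)]
ax = solve_lsq(A, [v[0] for v in novel])   # coefficients for novel x
ay = solve_lsq(A, [v[1] for v in novel])   # coefficients for novel y

# the recovered combination reproduces the novel view exactly (no noise here)
pred_x = [sum(a * c for a, c in zip(row, ax)) for row in A]
err = max(abs(p - v[0]) for p, v in zip(pred_x, novel))
print("max reconstruction error:", err)
```

With noise-free correspondences the fit is exact, since rotating about the y-axis keeps the image coordinates inside the span of the basis; in practice the same least-squares fit over noisy matched features yields the best approximating combination.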

Bibtex Entry

@article{BasriR95a,
  title = {Localization and Homing Using Combinations of Model Views},
  author = {Ronen Basri and Ehud Rivlin},
  year = {1995},
  journal = {Artif. Intell.},
  volume = {78},
  number = {1-2},
  pages = {327-354},
  abstract = {Navigation involves recognizing the environment, identifying the current position within the environment, and reaching particular positions. We present a method for localization (the act of recognizing the environment), positioning (the act of computing the exact coordinates of a robot in the environment), and homing (the act of returning to a previously visited position) from visual input. The method is based on representing the scene as a set of 2D views and predicting the appearances of novel views by linear combinations of the model views. The method accurately approximates the appearance of scenes under weak-perspective projection. Analysis of this projection as well as experimental results demonstrate that in many cases this approximation is sufficient to accurately describe the scene. When the weak-perspective approximation is invalid, either a larger number of models can be acquired or an iterative solution to account for the perspective distortions can be employed. The method has several advantages over other approaches. It uses relatively rich representations; the representations are 2D rather than 3D; and localization can be done from only a single 2D view without calibration. The same principal method is applied for both the localization and positioning problems, and a simple "qualitative" algorithm for homing is derived from this method.}
}