Visual Homing: Surfing on the Epipoles

Ronen Basri, Ehud Rivlin, and Ilan Shimshoni.
Visual Homing: Surfing on the Epipoles.
International Journal of Computer Vision, 33(2):117–137, 1999.

Online Version

A PDF version is available for download.

Abstract

We introduce a novel method for visual homing. Using this method a robot can be sent to desired positions and orientations in 3D space specified by single images taken from these positions. Our method is based on recovering the epipolar geometry relating the current image taken by the robot and the target image. Using the epipolar geometry, most of the parameters which specify the differences in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass these missing parameters and resolve the ambiguities that exist. We present two homing algorithms for two standard projection models, weak and full perspective. Our method determines the path of the robot on-line, the starting position of the robot is relatively unconstrained, and a 3D model of the environment is not required. The method is almost entirely memoryless, in the sense that at every step the path to the target position is determined independently of the previous path taken by the robot. Because of this property the robot may be able, while moving toward the target, to perform auxiliary tasks or to avoid obstacles, without this impairing its ability to eventually reach the target position. We have performed simulations and real experiments which demonstrate the robustness of the method and show that the algorithms always converge to the target pose.
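
The core geometric step the abstract refers to, estimating the epipolar geometry between the current and target images and extracting the epipoles, can be sketched in plain numpy. This is not the paper's homing algorithm, only an illustrative implementation of the standard normalized 8-point method for the fundamental matrix, with the epipoles recovered as its null vectors; all function names here are illustrative.

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: center points at the origin and scale the
    # mean distance to sqrt(2), which conditions the 8-point system.
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_8point(x1, x2):
    # x1, x2: Nx2 matched image points (current view and target view).
    # Solves x2h^T F x1h = 0 for F by SVD, then enforces rank 2.
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # a fundamental matrix has rank 2
    F = U @ np.diag(S) @ Vt
    return T2.T @ F @ T1            # undo the normalization

def epipoles(F):
    # The epipole in image 1 is the right null vector of F (F e1 = 0);
    # the epipole in image 2 is the left null vector (e2^T F = 0).
    _, _, Vt = np.linalg.svd(F)
    e1 = Vt[-1] / Vt[-1][2]
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1] / Vt[-1][2]
    return e1[:2], e2[:2]
```

In the homing setting, the epipole in the current image indicates the direction toward (or away from) the target camera center, which is what lets the robot steer without a 3D model of the scene.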

Keywords

Robotics; Algorithms; Cameras; Image reconstruction; Mathematical models; Computer simulation; Three dimensional; Motion planning; Online systems; Navigation

Co-authors

Ehud Rivlin, Ilan Shimshoni

Bibtex Entry

@article{BasriRS99a,
  title = {Visual Homing: Surfing on the Epipoles},
  author = {Ronen Basri and Ehud Rivlin and Ilan Shimshoni},
  year = {1999},
  journal = {International Journal of Computer Vision},
  volume = {33},
  number = {2},
  pages = {117--137},
  keywords = {Robotics; Algorithms; Cameras; Image reconstruction; Mathematical models; Computer simulation; Three dimensional; Motion planning; Online systems; Navigation},
  abstract = {We introduce a novel method for visual homing. Using this method a robot can be sent to desired positions and orientations in 3D space specified by single images taken from these positions. Our method is based on recovering the epipolar geometry relating the current image taken by the robot and the target image. Using the epipolar geometry, most of the parameters which specify the differences in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass these missing parameters and resolve the ambiguities that exist. We present two homing algorithms for two standard projection models, weak and full perspective. Our method determines the path of the robot on-line, the starting position of the robot is relatively unconstrained, and a 3D model of the environment is not required. The method is almost entirely memoryless, in the sense that at every step the path to the target position is determined independently of the previous path taken by the robot. Because of this property the robot may be able, while moving toward the target, to perform auxiliary tasks or to avoid obstacles, without this impairing its ability to eventually reach the target position. We have performed simulations and real experiments which demonstrate the robustness of the method and show that the algorithms always converge to the target pose.}
}