Image-based Robot Navigation in Unknown Indoor Environments

Ehud Rivlin, Ilan Shimshoni, and Evgeny Smolyar.
Image-based robot navigation in unknown indoor environments.
In IEEE/RSJ International Conference on Intelligent Robots and Systems, 3:2736--2742, 2003.

Online Version

A PDF version is available for download.

Abstract

This paper presents a method for image-based robot navigation under the full perspective model. The robot navigates through unknown indoor environments. A target image is taken from an unconstrained position in the environment and given to the robot. The robot starts at an arbitrary position and navigates to the position at which the target image was taken. The approach is based on comparing images of the environment, taken by the robot at different positions along the path, with the target image. No extraction of 3D models of the scene is needed. The robot automatically finds an image that shows part of the environment visible in the target image. It then moves on the floor, takes pictures with its camera, finds corresponding features in the current and target images, and uses them to extract the motion parameters to the target location. All these steps are performed automatically. This paper describes experimental results obtained with a Nomad XR4000 mobile robot. These experiments show the feasibility and the significant benefits of our approach.
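
The homing step described in the abstract, matching features between the robot's current view and the target image and extracting the motion parameters under the full perspective model, can be sketched with off-the-shelf tools. The Python snippet below is a minimal illustration using OpenCV rather than the authors' implementation; the intrinsic matrix K and the image file names are assumptions introduced for the example.

import cv2
import numpy as np

# Assumed camera intrinsics for the example (focal lengths, principal point).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical file names: the robot's current view and the given target image.
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

# Detect local features in both views and match them.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(current, None)
kp2, des2 = sift.detectAndCompute(target, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)

# Keep only distinctive correspondences (Lowe's ratio test).
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the essential matrix robustly, then decompose it into the
# rotation R and the translation direction t toward the target pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Rotation to target pose:\n", R)
print("Translation direction (up to scale):\n", t.ravel())

Note that the translation recovered from a monocular essential matrix is defined only up to scale, which is consistent with the iterative scheme in the abstract: the robot re-estimates the motion from new images taken along the path as it approaches the target position.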

Bibtex Entry

@inproceedings{RivlinSS03i,
  title = {Image-based robot navigation in unknown indoor environments},
  author = {Ehud Rivlin and Ilan Shimshoni and Evgeny Smolyar},
  year = {2003},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
  volume = {3},
  pages = {2736--2742},
  abstract = {This paper presents a method for image-based robot navigation under the full perspective model. The robot navigates through unknown indoor environments. A target image is taken from an unconstrained position in the environment and given to the robot. The robot starts at an arbitrary position and navigates to the position at which the target image was taken. The approach is based on comparing images of the environment, taken by the robot at different positions along the path, with the target image. No extraction of 3D models of the scene is needed. The robot automatically finds an image that shows part of the environment visible in the target image. It then moves on the floor, takes pictures with its camera, finds corresponding features in the current and target images, and uses them to extract the motion parameters to the target location. All these steps are performed automatically. This paper describes experimental results obtained with a Nomad XR4000 mobile robot. These experiments show the feasibility and the significant benefits of our approach.}
}