Technical Report PHD-2019-02

Title: Sensory Routines for Indoor Autonomous Quad-Copter
Authors: Amir Geva
Supervisors: Hector Rotstein, Ehud Rivlin
PDF: Currently accessible only within the Technion network
Abstract: Quad-copters are rapidly becoming industrial and military tools that perform a myriad of tasks. Initially, these craft are manually controlled, at least in part, but for the purposes of scalability, autonomous behavior will inevitably become essential. Autonomous operation of a quad-copter requires knowing where the craft is located with respect to its environment. When flying outdoors, the position can be sensed using the Global Positioning System (GPS), but since this system may fail, an alternative is necessary. By combining structure from motion with information about the environment, in the form of a sampled digital terrain map (DTM), one can compute the position of the quad-copter using only a monocular camera as a sensor. This research describes means of integrating DTM information into a state-of-the-art structure-from-motion method called bundle adjustment. When moving indoors, a GPS sensor cannot be used at all, and an alternative method is required. The first part of the research presented in this thesis extends prior work on the outdoor scenario, introducing new constraint types and a new smooth function model for the DTM. The method is compared to a previously available method called C-DTM and is shown to be superior. The thesis also introduces a localization method based on a combination of a LIDAR sensor and a DTM, for cases where poor visibility may render the camera useless. For the indoor environment, a new method based on integrating bundle adjustment with a building floor plan is presented, and its performance is analyzed. Finally, since the processing power available on a quad-copter is limited, methods for reducing the processing load are investigated, including feature filtering and new lightweight methods for computing camera orientation and position from single frames.
Combining these methods yields the means for real-time control and navigation.
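To illustrate the kind of constraint the abstract refers to, the following Python sketch shows how a sampled DTM can contribute an extra residual alongside the usual reprojection error in a bundle-adjustment cost: a reconstructed landmark is penalized for leaving the interpolated terrain surface. The grid values, camera parameters, and function names here are entirely hypothetical illustrations, not taken from the thesis (which uses its own smooth DTM model).

```python
import numpy as np

# Hypothetical 2x2 sampled DTM: heights at (x, y) in {0, 1} x {0, 1}.
# Real DTMs are much larger grids; the values below are illustrative only.
GRID = np.array([[0.0, 1.0],
                 [2.0, 3.0]])

def dtm_height(x, y):
    """Bilinearly interpolated terrain height at (x, y), with x, y in [0, 1]."""
    h00, h01 = GRID[0]
    h10, h11 = GRID[1]
    return (h00 * (1 - x) * (1 - y) + h01 * (1 - x) * y
            + h10 * x * (1 - y) + h11 * x * y)

def terrain_residual(point):
    """Soft DTM constraint: a reconstructed landmark should lie on the terrain."""
    x, y, z = point
    return z - dtm_height(x, y)

def reprojection_residual(point, cam_pos, f, observed_uv):
    """Standard pinhole reprojection error for a camera aligned with the z axis."""
    rel = point - cam_pos
    u = f * rel[0] / rel[2]
    v = f * rel[1] / rel[2]
    return np.array([u - observed_uv[0], v - observed_uv[1]])

def combined_residuals(point, cam_pos, f, observed_uv, w=1.0):
    """Stacked residual vector, as fed to a nonlinear least-squares solver:
    reprojection error plus a weighted terrain term."""
    return np.concatenate([
        reprojection_residual(point, cam_pos, f, observed_uv),
        [w * terrain_residual(point)],
    ])
```

A landmark lying exactly on the interpolated surface, e.g. `(0.5, 0.5, 1.5)`, produces a zero terrain residual; moving it off the surface adds a penalty that the optimizer trades off against reprojection error.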
Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.

Remark: Any link to this technical report should be to this page, rather than to the URL of the PDF files directly. The latter URLs may change without notice.


Computer science department, Technion