The Taub Faculty of Computer Science Events and Talks

Indoor Exploration with a Robotic Vehicle Using a Single Camera and a Floorplan
John Noonan (Ph.D. Thesis Seminar)
Sunday, 14.02.2021, 12:00
Zoom Lecture: 2728213233
For the lecture password, please contact: John Noonan@cs.technion.ac.il
Advisors: Prof. Ehud Rivlin and Dr. Hector Rotstein
Intelligent systems that can explore indoor buildings on a frequent and regular basis are beneficial to personnel operating remotely in security, manufacturing, or warehouse pack-and-ship settings. In this talk, I will present a new minimalistic approach to indoor exploration: minimal sensing, minimal prior map knowledge, and minimal underlying geometry needed to build a full visual scene representation. Our research combines the classical and deep learning worlds, harnessing the strengths of each, and uses a single camera and a floorplan to facilitate both indoor localization and the construction of a full visual scene representation of the explored building, with a small robotic vehicle carrying out the exploration.

We introduce a novel neural scene representation for view synthesis that scales to full indoor buildings, describing the scene with a space of local neural rendering functions across the building, which facilitates infusing meta-knowledge into the learning. Knowledge of rendering from different vantage points in the scene is shared by conditioning on similar building structure, accelerating learning for the full building. We demonstrate learning such a neural scene representation for view synthesis in around 15 minutes on a single commodity GPU, and rendering in real time at 64 Hz, allowing for immersive visual experiences.

Indoor exploration also requires accurate global positioning. We formulate a core methodology for integrating a floorplan with a monocular camera, forming the basis for our positioning systems, which resolve global position, orientation, and scale. We also present a theoretical analysis of planar criteria for the uniqueness of global localization solutions. We develop algorithms for the necessary components of indoor localization: extracting planes from scale-ambiguous monocular 3D point clouds, associating extracted planes with floorplan walls, recovering the scale factor from wall-plane pairs, and integrating soft vehicle and floorplan constraints in an optimization that refines global poses. We introduce multiple modular global positioning systems, both optimization-based and probabilistic, and evaluate them on custom-created synthetic, simulation, and real-world datasets collected with a custom designed-and-built small robotic vehicle.
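
To give a concrete feel for the scale-recovery step mentioned above, the short Python sketch below shows one way a scale factor could be computed from a single wall-plane pair: the ratio of the metric camera-to-wall distance implied by the floorplan to the camera-to-plane distance in the scale-ambiguous monocular reconstruction. This is only an illustrative sketch under simplified assumptions (camera at the origin of the local reconstruction frame, a single matched pair, no noise handling); all function and variable names are hypothetical and this is not the thesis's actual implementation.

import numpy as np

def plane_camera_distance(normal, offset):
    """Distance from the camera (assumed at the origin of the monocular
    reconstruction frame) to the plane {x : normal . x + offset = 0}."""
    return abs(offset) / np.linalg.norm(normal)

def recover_scale(sfm_plane, floorplan_wall, camera_xy):
    """Hypothetical sketch: metric wall distance divided by the
    scale-ambiguous plane distance gives the missing scale factor.

    sfm_plane      : (normal, offset) of a wall plane in the monocular frame
    floorplan_wall : ((x1, y1), (x2, y2)) wall segment in metric floorplan coords
    camera_xy      : current global 2D camera position estimate (metric)
    """
    normal, offset = sfm_plane
    d_sfm = plane_camera_distance(normal, offset)

    # Perpendicular (metric) distance from the camera to the infinite line
    # supporting the matched floorplan wall segment.
    p1, p2 = map(np.asarray, floorplan_wall)
    direction = (p2 - p1) / np.linalg.norm(p2 - p1)
    to_cam = np.asarray(camera_xy, dtype=float) - p1
    d_metric = np.linalg.norm(to_cam - np.dot(to_cam, direction) * direction)

    return d_metric / d_sfm

# Toy example: a wall reconstructed 2.0 "SfM units" away that the floorplan
# places 3.0 metres away implies a scale factor of 1.5.
scale = recover_scale(
    sfm_plane=(np.array([1.0, 0.0, 0.0]), -2.0),
    floorplan_wall=((5.0, -1.0), (5.0, 1.0)),
    camera_xy=(2.0, 0.0),
)
print(scale)  # ~1.5

In practice one would combine many such wall-plane pairs (and the soft vehicle and floorplan constraints described above) in a robust optimization rather than trusting a single ratio.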