Novel geometric solutions for robust and efficient visual odometry

Laurent Kneip (ANU)

COMPUTER VISION AND ROBOTICS SERIES

DATE: 2013-07-25
TIME: 16:00:00 - 17:00:00
LOCATION: NICTA - 7 London Circuit

ABSTRACT:
Good egomotion estimation forms the backbone of any modern high-performance localization and navigation system. In contrast to global positioning systems or laser-based range measurement devices, cameras represent an increasingly attractive alternative, promising applicability in a vast number of scenarios. Two fundamental approaches to vision-based localization have been pursued. The first is purely geometric: the incremental transformation between consecutive images is computed anew for each frame by absolute or relative camera orientation algorithms, based on identified feature correspondences. The second approach additionally takes time information into account, estimating priors on the relative camera displacement by means of a motion model.
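As an illustration of the purely geometric approach, the sketch below (an illustrative assumption, not code from the talk) estimates the relative orientation between two calibrated views with the classical eight-point algorithm: the epipolar constraint x2ᵀ E x1 = 0 is linear in the essential matrix E, so E follows from the nullspace of a stacked linear system, then gets projected onto the essential-matrix manifold.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Estimate the essential matrix from N >= 8 correspondences of
    normalized (calibrated) image points x1, x2, each of shape (N, 2)."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # One row of the linear system A vec(E) = 0 from x2^T E x1 = 0.
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)           # null vector ~ vec(E), up to scale
    # Project onto the essential manifold: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

The recovered E encodes rotation and translation direction up to the usual fourfold ambiguity, which is resolved by cheirality checks in a full pipeline; real implementations also wrap such a solver in RANSAC to handle outlier correspondences.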

This talk focuses on the first class of approaches. Geometric egomotion computation is of major importance for bootstrapping model-based solutions, and for robustifying motion estimation under challenging dynamics, where smoothness assumptions about the camera trajectory are potentially violated. The first part introduces novel minimal and non-minimal solutions to the absolute and relative camera orientation problems. Besides being applicable in a vast number of different applications, they constitute the most basic building blocks for any structure-from-motion-like visual odometry algorithm. The presented algorithms are compared against existing state-of-the-art solutions in terms of efficiency and robustness in degenerate cases, and cover both the perspective and the non-perspective case. The second part of the talk then outlines the application of these algorithms in a versatile, real-time egomotion estimation pipeline. The software is embedded into the Robot Operating System (ROS) and allows a vast number of properties to be adapted, including the feature and descriptor type, the camera calibration model, and the size of the bundle adjustment window. The performance in challenging motion situations is finally increased by directly including relative motion priors from an additional IMU in the geometric computation process.
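To give a flavor of how an IMU prior can simplify the geometric computation, the following sketch (an illustrative assumption under simplified conventions, not the speaker's actual algorithm) fixes the relative rotation R, e.g. integrated from gyroscope readings. The epipolar constraint f2 · (t × R f1) = 0 then becomes linear in the translation t, so its direction follows from the nullspace of a small matrix built from as few as two bearing-vector correspondences:

```python
import numpy as np

def translation_with_rotation_prior(f1, f2, R):
    """Recover the translation direction (up to sign and scale) between two
    views, given unit bearing vectors f1, f2 of shape (N, 3) and a known
    relative rotation prior R. Each correspondence contributes one linear
    constraint row_i . t = 0 with row_i = (R f1_i) x f2_i."""
    rows = np.cross(f1 @ R.T, f2)      # row i = (R f1_i) x f2_i
    _, _, Vt = np.linalg.svd(rows)
    return Vt[-1]                      # unit-norm null vector ~ t direction
```

Reducing the unknowns from five (rotation plus translation direction) to two makes the minimal sample for RANSAC far smaller, which is one way such priors can robustify estimation under challenging dynamics.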
BIO:
Laurent Kneip graduated as a Diplom-Ingenieur (Univ.) in mechatronic engineering from the Friedrich-Alexander University Erlangen-Nürnberg in 2008. He completed his Master's thesis at the Autonomous Systems Lab (ASL) at ETH Zurich, where he stayed on to work on different projects as a project engineer and scientific research assistant. He started his PhD studies at ASL in May 2009 in robotics and computer vision, and was a researcher in the EU projects sFly and V-Charge. He graduated as a Doctor of Sciences from ETH Zurich in January 2013, and currently works as a post-doctoral research fellow at CECS, Australian National University, in Canberra. Laurent Kneip's research interests focus on visual localization, visual SLAM, geometric vision, state estimation and sensor fusion.

Updated: 11 July 2013