Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration

Bibliographic Details
Published in: 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 360-368
Main Authors: Kelly, J., Sukhatme, G.S.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2009
ISBN: 1424448085
9781424448081
DOI: 10.1109/CIRA.2009.5423178

More Information
Summary: Visual and inertial sensors, in combination, are well-suited for many robot navigation and mapping tasks. However, correct data fusion, and hence overall system performance, depends on accurate calibration of the 6-DOF transform between the sensors (one or more camera(s) and an inertial measurement unit). Obtaining this calibration information is typically difficult and time-consuming. In this paper, we describe an algorithm, based on the unscented Kalman filter (UKF), for camera-IMU simultaneous localization, mapping and sensor relative pose self-calibration. We show that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can all be recovered from camera and IMU measurements alone. This is possible without any prior knowledge about the environment in which the robot is operating. We present results from experiments with a monocular camera and a low-cost solid-state IMU, which demonstrate accurate estimation of the calibration parameters and the local scene structure.
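
The summary describes an unscented Kalman filter whose state is augmented with the camera-IMU transform, the IMU biases, the gravity vector, and the landmark positions. As a rough illustration of the sigma-point machinery such a filter relies on, here is a minimal Python sketch; the state layout, dimensions, and all names below are assumptions for illustration only, not the authors' implementation:

    import numpy as np

    # Illustrative augmented state (assumed layout, not the paper's exact
    # parameterization): IMU position (3) + velocity (3) + orientation error (3)
    # + gyro bias (3) + accel bias (3) + camera-IMU translation (3)
    # + camera-IMU rotation error (3) + gravity vector (3) = 24 core states;
    # each mapped landmark would append 3 more.
    STATE_DIM = 24

    def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        """Generate the 2n+1 scaled sigma points and weights (standard UKF rules)."""
        n = mean.size
        lam = alpha ** 2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)          # matrix square root of scaled covariance
        pts = np.vstack([mean, mean + S.T, mean - S.T])  # shape (2n+1, n)
        w_mean = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        w_cov = w_mean.copy()
        w_mean[0] = lam / (n + lam)
        w_cov[0] = w_mean[0] + (1.0 - alpha ** 2 + beta)
        return pts, w_mean, w_cov

    def unscented_predict(mean, cov, f, Q):
        """Propagate the state distribution through a nonlinear process model f."""
        pts, w_mean, w_cov = sigma_points(mean, cov)
        prop = np.array([f(p) for p in pts])             # push each sigma point through f
        new_mean = w_mean @ prop
        diff = prop - new_mean
        new_cov = (w_cov[:, None] * diff).T @ diff + Q   # weighted outer products + process noise
        return new_mean, new_cov

    # Usage: one prediction step with a trivial constant-state model.
    mu = np.zeros(STATE_DIM)
    P = 0.1 * np.eye(STATE_DIM)
    Q = 1e-4 * np.eye(STATE_DIM)
    mu, P = unscented_predict(mu, P, lambda x: x, Q)

In the filter the summary describes, the process model would integrate the IMU readings while the calibration and gravity states remain static, and camera observations of the landmarks would drive the UKF update; those details are omitted from this sketch.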