Mobile robot self-localization in complex indoor environments using monocular vision and 3D model
In this paper, we consider the problem of mobile robot pose estimation using only visual information from a single camera and odometry readings. The focus is on building complex environment models, fast online rendering, and real-time segmentation of complex, noisy images. The 3D model of the robot's environment is built with Blender, a professional freeware computer graphics tool, and pre-stored in the memory of the robot's on-board computer. The mobile robot pose is estimated as a stochastic variable from correspondences between image lines, extracted with the Random Window Randomized Hough Transform line detection algorithm, and model lines predicted from odometry readings and the 3D environment model. The camera model and ray tracing algorithm are also described. The developed algorithms are experimentally tested on a Pioneer 2DX mobile robot.
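To make the prediction step concrete, below is a minimal sketch (not the authors' code) of how a model line could be predicted in the image from odometry: a differential-drive odometry update propagates the planar pose, and a pinhole camera model projects a 3D model edge into image coordinates. The wheel base, camera height, focal length, principal point, and the helper names `predict_pose`, `project_point`, and `predict_model_line` are illustrative assumptions, not values or identifiers from the paper.

```python
# Hedged sketch: predicting where a 3D model line should appear in the image,
# given an odometry-based pose estimate. All numeric parameters are assumptions.
import numpy as np

def predict_pose(x, y, theta, d_left, d_right, wheel_base=0.33):
    """Propagate a planar pose (x, y, theta) with wheel displacements
    d_left, d_right using a differential-drive odometry model."""
    d_center = 0.5 * (d_left + d_right)
    d_theta = (d_right - d_left) / wheel_base
    x_new = x + d_center * np.cos(theta + 0.5 * d_theta)
    y_new = y + d_center * np.sin(theta + 0.5 * d_theta)
    return x_new, y_new, theta + d_theta

def project_point(p_world, robot_pose, cam_height=0.8, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D world point into the image with a simple pinhole model.
    Assumes the camera sits at the robot origin, looking along the robot's
    +x axis, mounted at height cam_height (assumed values)."""
    x, y, theta = robot_pose
    c, s = np.cos(theta), np.sin(theta)
    # World -> robot/camera frame (planar rotation + translation).
    px = c * (p_world[0] - x) + s * (p_world[1] - y)   # forward
    py = -s * (p_world[0] - x) + c * (p_world[1] - y)  # left
    pz = p_world[2] - cam_height                       # up
    if px <= 0.0:
        return None  # point is behind the camera
    # Pinhole projection: u grows to the right, v downwards.
    u = cx - f * py / px
    v = cy - f * pz / px
    return u, v

def predict_model_line(p0, p1, robot_pose):
    """Predicted image segment for a 3D model edge with both endpoints visible."""
    a, b = project_point(p0, robot_pose), project_point(p1, robot_pose)
    return (a, b) if a is not None and b is not None else None

# Example: advance the pose with one odometry reading, then predict a vertical
# door-frame edge of the 3D environment model (illustrative coordinates).
pose = predict_pose(0.0, 0.0, 0.0, d_left=0.10, d_right=0.12)
edge = predict_model_line(np.array([2.0, -0.5, 0.0]), np.array([2.0, -0.5, 2.0]), pose)
print(pose, edge)
```

Predicted segments of this kind would then be matched against the image lines extracted by the Hough-transform-based detector to correct the stochastic pose estimate.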
| Published in | 2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1 - 6 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.09.2007 |
| ISBN | 1424412633, 9781424412631 |
| ISSN | 2159-6247 |
| DOI | 10.1109/AIM.2007.4412566 |