Vision and sonar sensor fusion for mobile robot localization in aliased environments
| Published in | 2006 2nd IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications, pp. 1 - 6 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.08.2006 |
| ISBN | 9780780397217, 0780397215 |
| DOI | 10.1109/MESA.2006.296971 |
| Summary: | Monte Carlo localization (MCL) is a common method for self-localization of a mobile robot under the assumption that a map of the environment is available. Original implementations used range sensors such as laser scanners and sonar sensors. Recently, localization approaches using vision sensors have been developed with good results. In this paper we compare vision-based with sonar-based MCL approaches in terms of localization accuracy. In particular, we show how, in an environment with high perceptual aliasing like our department, both approaches exhibit certain weaknesses, while combining vision and sonar sensors decreases the respective localization errors and improves overall accuracy. |
|---|---|
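The summary above describes fusing sonar and vision observations within MCL. As illustration only, the sketch below shows one common way such a fused measurement update can be written: each sensor contributes a likelihood, and the particle weights are multiplied by both. The motion noise values, the map-based ray-cast `expected_range_fn`, and the appearance score `image_similarity_fn` are placeholder assumptions of mine, not the models used in the paper.

```python
# Minimal MCL sketch with a fused sonar + vision weight update.
# Illustrative only; sensor and motion models are simplified placeholders.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, odom, noise=(0.05, 0.05, 0.02)):
    """Propagate particles (x, y, theta) with a noisy odometry increment."""
    dx, dy, dtheta = odom
    particles[:, 0] += dx + rng.normal(0, noise[0], len(particles))
    particles[:, 1] += dy + rng.normal(0, noise[1], len(particles))
    particles[:, 2] += dtheta + rng.normal(0, noise[2], len(particles))
    return particles

def sonar_likelihood(particles, measured_range, expected_range_fn, sigma=0.2):
    """Gaussian likelihood of a sonar range reading given each particle pose."""
    expected = expected_range_fn(particles)   # map-based ray cast (assumed to exist)
    return np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)

def vision_likelihood(particles, image_similarity_fn):
    """Appearance-based likelihood, e.g. similarity of the current camera image
    to reference views stored in the map; assumed to return values in (0, 1]."""
    return image_similarity_fn(particles)

def update(particles, weights, z_sonar, image_similarity_fn, expected_range_fn):
    """Fused update: treat the sensors as independent and multiply likelihoods."""
    w = weights \
        * sonar_likelihood(particles, z_sonar, expected_range_fn) \
        * vision_likelihood(particles, image_similarity_fn)
    w += 1e-300                               # guard against total weight collapse
    return w / w.sum()

def resample(particles, weights):
    """Systematic (low-variance) resampling."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx].copy(), np.full(n, 1.0 / n)

# Toy usage with fake sensor models (purely illustrative):
particles = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(500, 3))
weights = np.full(len(particles), 1.0 / len(particles))
expected_range = lambda p: np.abs(5.0 - p[:, 0])                      # wall at x = 5
image_score = lambda p: np.exp(-0.5 * ((p[:, 1] - 3.0)) ** 2) + 1e-6  # fake view match
particles = predict(particles, odom=(0.1, 0.0, 0.0))
weights = update(particles, weights, z_sonar=3.2,
                 image_similarity_fn=image_score, expected_range_fn=expected_range)
particles, weights = resample(particles, weights)
```

Multiplying the two likelihoods corresponds to assuming the sonar and vision observations are conditionally independent given the robot pose, which is the usual justification for this kind of fusion: a pose that is ambiguous under one sensor (perceptual aliasing) can still be down-weighted if the other sensor contradicts it.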