Navigation Framework from a Monocular Camera for Autonomous Mobile Robots

Bibliographic Details
Published in: Periodica Polytechnica Transportation Engineering, Vol. 53, No. 4, pp. 389-405
Main Authors: Aulia, Udink; Hasanuddin, Iskandar; Dirhamsyah, Muhammad; Nasaruddin, Nasaruddin
Format: Journal Article
Language: English; German; Russian
Published: Budapest: Periodica Polytechnica, Budapest University of Technology and Economics, 01.10.2025
ISSN: 0303-7800; 1587-3811
DOI: 10.3311/PPtr.37323


More Information
Summary: Ground detection plays a critical role in the perception systems of autonomous mobile robots (AMRs). The ground is typically a smooth, drivable surface with uniform texture, making it distinguishable from its surroundings; however, it may exhibit imperfections such as shadows and varying lighting conditions. This paper presents a framework for detecting vanishing points, drivable road regions, intersections, and obstacles for autonomous mobile robot navigation. The proposed framework leverages Google's DeepLab v3+ for semantic segmentation of the road, employs the Hough line transform to identify vanishing points and drivable areas, uses an intersection analyzer to locate intersections connected to drivable areas, and incorporates a free obstacle detector to identify various objects within drivable regions. The objective is to simplify the perception of ground-related information relative to recent methodologies and to offer a way to understand and harness the capabilities of such frameworks. The primary significance of this study lies in evaluating the performance of these networks in real-world deployment scenarios. The evaluation results demonstrate that the proposed framework achieves high accuracy across diverse and challenging situations; consequently, it holds promise for integration into autonomous mobile robots.
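The abstract mentions estimating a vanishing point from Hough lines. The paper's own implementation is not reproduced here, but the core idea can be sketched as a least-squares intersection of detected line segments: each segment (as returned by a probabilistic Hough transform) defines a line, and the vanishing point is the point minimizing the summed squared distances to all those lines. The function below is an illustrative sketch of that step only; the segment format `(x1, y1, x2, y2)` and the function name are assumptions, not the authors' API.

```python
import math

def vanishing_point(segments):
    """Estimate a vanishing point as the least-squares intersection
    of 2D line segments given as (x1, y1, x2, y2) tuples.

    Each segment defines an infinite line nx*x + ny*y = c with unit
    normal (nx, ny). We accumulate the 2x2 normal equations A v = b
    and solve for v = (vx, vy) in closed form.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x1, y1, x2, y2 in segments:
        dx, dy = x2 - x1, y2 - y1
        norm = math.hypot(dx, dy)
        nx, ny = -dy / norm, dx / norm   # unit normal of the line
        c = nx * x1 + ny * y1            # line equation: nx*x + ny*y = c
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        b1 += nx * c
        b2 += ny * c
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        raise ValueError("segments are (nearly) parallel; no intersection")
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In a full pipeline one would feed this only the segments that survive filtering (e.g. road-boundary lines inside the segmented drivable region), since spurious lines from texture or obstacles would bias the estimate.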