A Reinforcement Learning-Based Double Layer Controller for Mobile Robot in Human-Shared Environments


Bibliographic Details
Published in: Applied Sciences, Vol. 15, No. 14, p. 7812
Main Authors: Mi, Jian; Liu, Jianwen; Xu, Yue; Long, Zhongjie; Wang, Jun; Xu, Wei; Ji, Tao
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.07.2025
ISSN: 2076-3417
DOI: 10.3390/app15147812


More Information
Summary: Various approaches have been explored to address the path planning problem for mobile robots. However, it remains a significant challenge, particularly in environments where a multi-tasking mobile robot operates alongside stochastically moving humans. This paper focuses on path planning for a mobile robot executing multiple pickup and delivery tasks in an environment shared with humans. To plan a safe path and achieve a high task success rate, a Reinforcement Learning (RL)-based double-layer controller is proposed, built on a double-layer learning algorithm. The high-level layer integrates a Finite-State Automaton (FSA) with RL to perform global strategy learning and task-level decision-making. The low-level layer handles local path planning by incorporating a Markov Decision Process (MDP) that accounts for environmental uncertainties. We verify the proposed double-layer algorithm under different configurations and evaluate its performance on several metrics, including task success rate and reward. The proposed method outperforms conventional RL in terms of reward (+63.1%) and task success rate (+113.0%). The simulation results demonstrate the effectiveness of the proposed algorithm in solving the path planning problem under stochastic human uncertainties.