Double Q-PID algorithm for mobile robot control
| Published in | Expert Systems with Applications Vol. 137; pp. 292-307 |
|---|---|
| Main Authors | Carlucho, Ignacio; De Paula, Mariano; Acosta, Gerardo G. |
| Format | Journal Article |
| Language | English |
| Published | New York: Elsevier Ltd, 15.12.2019 |
| ISSN | 0957-4174 1873-6793 |
| DOI | 10.1016/j.eswa.2019.06.066 |
| Summary: | • A model-free reinforcement learning algorithm for adaptive low-level PID control. • Double Q-learning provides excellent performance with low computational cost. • Incremental state and action space discretization for fast reinforcement learning. • The Double Q-PID algorithm is effectively applied to multiple robotic platforms. • The proposed algorithm effectively improves the adaptability of the PID controllers. Many expert systems have been developed for self-adaptive PID controllers of mobile robots. However, the expert-system layers developed for tuning PID controllers are computationally demanding: they still require prior expert knowledge and highly efficient algorithmic and software implementations for real-time applications. To address these problems, in this paper we propose an expert system based on a reinforcement learning agent for self-adapting multiple low-level PID controllers in mobile robots. To formulate the artificial expert agent, we develop an incremental, model-free version of the double Q-learning algorithm for fast on-line adaptation of multiple low-level PID controllers. Fast learning and high on-line adaptability of the artificial expert agent are achieved by means of a proposed incremental active-learning exploration-exploitation procedure for non-uniform state-space exploration, along with an experience replay mechanism for multiple value-function updates in the double Q-learning algorithm. A comprehensive comparative simulation study and experiments on a real mobile robot demonstrate the high performance of the proposed algorithm for real-time simultaneous tuning of multiple adaptive low-level PID controllers of mobile robots under real-world conditions. (An illustrative double Q-learning sketch follows this record.) |
|---|---|
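The abstract describes three mechanisms working together: a double Q-learning update over discretized states and actions, epsilon-greedy exploration, and experience replay for extra value-function updates. The Python sketch below illustrates only the generic versions of these mechanisms in a toy PID-gain-tuning loop. The error discretization (`ERROR_BINS`), the gain-increment action set (`GAIN_STEPS`), the reward, and the first-order plant are all placeholder assumptions for illustration, not the Double Q-PID implementation from the paper.

```python
import numpy as np

# Illustrative sketch only: the discretization, reward, and action set below
# are placeholder assumptions, not the paper's Double Q-PID implementation.

GAIN_STEPS = np.array([-0.05, 0.0, 0.05])   # hypothetical increments applied to Kp
ERROR_BINS = np.linspace(-1.0, 1.0, 9)      # hypothetical tracking-error discretization

class DoubleQAgent:
    """Tabular double Q-learning with epsilon-greedy action selection."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.q_a = np.zeros((n_states, n_actions))
        self.q_b = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions
        self.rng = np.random.default_rng(seed)
        self.replay = []                      # simple experience-replay buffer

    def act(self, s):
        # Epsilon-greedy over the sum of both value tables.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.n_actions))
        return int(np.argmax(self.q_a[s] + self.q_b[s]))

    def update(self, s, a, r, s_next):
        # Core double Q-learning rule: pick the greedy action with one table,
        # evaluate it with the other, and update the first (roles chosen at random).
        if self.rng.random() < 0.5:
            a_star = int(np.argmax(self.q_a[s_next]))
            target = r + self.gamma * self.q_b[s_next, a_star]
            self.q_a[s, a] += self.alpha * (target - self.q_a[s, a])
        else:
            a_star = int(np.argmax(self.q_b[s_next]))
            target = r + self.gamma * self.q_a[s_next, a_star]
            self.q_b[s, a] += self.alpha * (target - self.q_b[s, a])

    def store_and_replay(self, transition, n_replays=4):
        # Experience replay: reuse stored transitions for extra value-function updates.
        self.replay.append(transition)
        for _ in range(min(n_replays, len(self.replay))):
            idx = int(self.rng.integers(len(self.replay)))
            self.update(*self.replay[idx])

def discretize(error):
    # Map a continuous tracking error onto a state index.
    return int(np.digitize(error, ERROR_BINS))

# Toy closed-loop usage: a first-order plant tracks a unit setpoint while the
# agent nudges the proportional gain; the reward penalizes the absolute error.
agent = DoubleQAgent(n_states=len(ERROR_BINS) + 1, n_actions=len(GAIN_STEPS))
kp, y, setpoint, dt = 1.0, 0.0, 1.0, 0.05
s = discretize(setpoint - y)
for step in range(2000):
    a = agent.act(s)
    kp = max(0.0, kp + GAIN_STEPS[a])         # apply the chosen gain increment
    y += dt * (-y + kp * (setpoint - y))      # toy first-order plant under P control
    s_next = discretize(setpoint - y)
    agent.store_and_replay((s, a, -abs(setpoint - y), s_next))
    s = s_next
```

In the paper, the exploration-exploitation schedule and the state/action discretization are themselves incremental and non-uniform; the fixed `ERROR_BINS` and constant epsilon used here stand in for those mechanisms only to keep the sketch short.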