Reinforcement learning-based motion control for snake robots in complex environments

Bibliographic Details
Published in: Robotica, Vol. 42, No. 4, pp. 947-961
Main Authors: Zhang, Dong; Ju, Renjie; Cao, Zhengcai
Format: Journal Article
Language: English
Published: Cambridge, UK: Cambridge University Press, 01.04.2024
ISSN: 0263-5747, 1469-8668
DOI: 10.1017/S0263574723001613

More Information
Summary: Snake robots can move flexibly thanks to their special bodies and gaits. However, it is difficult to plan their motion in multi-obstacle environments because of their complex models. To address this problem, this work investigates a reinforcement learning-based motion planning method. To plan feasible paths, a Floyd-moving average algorithm is proposed together with a modified deep Q-learning algorithm, ensuring that the planned paths are smooth and adaptable enough for snake robots to pass through. An improved path integral algorithm is used to compute the gait parameters that drive the snake robots along the planned paths. To speed up parameter training, a strategy combining serial training, parallel training, and experience replay modules is designed. Moreover, we design a motion planning framework that consists of path planning, path smoothing, and motion planning. Various simulations are conducted to validate the effectiveness of the proposed algorithms.
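
To make the pipeline more concrete, the sketch below illustrates only the moving-average idea behind the path-smoothing step; the paper's actual Floyd-moving average algorithm, its modified deep Q-learning planner, and its path integral gait controller are not reproduced here. The function name moving_average_smooth, the window parameter, and the representation of a planned path as (x, y) waypoints are assumptions made purely for illustration.

    import numpy as np

    def moving_average_smooth(path, window=3):
        # Illustrative sketch only, not the paper's Floyd-moving average algorithm.
        # Smooth a planned 2-D waypoint path with a simple moving average:
        # start and goal waypoints are kept fixed, and each interior waypoint is
        # replaced by the mean of the waypoints inside a sliding window.
        path = np.asarray(path, dtype=float)
        smoothed = path.copy()
        half = window // 2
        for i in range(1, len(path) - 1):
            lo = max(0, i - half)
            hi = min(len(path), i + half + 1)
            smoothed[i] = path[lo:hi].mean(axis=0)
        return smoothed

    # Example: smooth a jagged path like one produced by a grid-based planner.
    raw_path = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3)]
    print(moving_average_smooth(raw_path, window=3))

A larger window yields a smoother path but can pull waypoints toward obstacles, which is presumably why the paper couples smoothing with the planning stage rather than applying it in isolation.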