Simplified reinforcement learning control algorithm for p-norm multiagent systems with full-state constraints

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 551, p. 126504
Main Authors: Wang, Min; Cao, Liang; Liang, Hongjing; Xiao, Wenbin
Format: Journal Article
Language: English
Published: Elsevier B.V., 28.09.2023
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2023.126504

More Information
Summary: This paper studies the bipartite consensus tracking control problem with full-state constraints for p-norm multiagent systems. To handle the full-state constraints of p-norm multiagent systems, a transformed function is utilized to achieve the constraint objective; it has the property of low complexity because it avoids introducing log-type or trigonometric functions into the controllers. Meanwhile, the bipartite control performance of p-norm multiagent systems is also guaranteed. Moreover, under the simplified reinforcement learning framework, a compensation strategy is utilized to compensate for the unknown ideal weights caused by the simplified critic-actor reinforcement learning algorithm, which greatly improves the tracking accuracy for p-norm multiagent systems. Furthermore, the effectiveness of the proposed strategy is illustrated by a practical simulation.