Simplified reinforcement learning control algorithm for p-norm multiagent systems with full-state constraints
| Published in | Neurocomputing (Amsterdam) Vol. 551; p. 126504 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V, 28.09.2023 |
| Subjects | |
| ISSN | 0925-2312, 1872-8286 |
| DOI | 10.1016/j.neucom.2023.126504 |
| Summary: | This paper studies the bipartite consensus tracking control problem with full-state constraints for p-norm multiagent systems. To handle the full-state constraints, a transformed function is employed that achieves the constraint objective with low complexity, since it avoids introducing log-type or trigonometric barrier functions into the controllers, while the bipartite control performance of the p-norm multiagent systems is still guaranteed. Moreover, under a simplified reinforcement learning framework based on the critic-actor method, a compensation strategy is used to compensate for the unknown ideal weights introduced by the simplified algorithm, which greatly improves the tracking accuracy of the p-norm multiagent systems. Finally, the effectiveness of the proposed strategy is illustrated by a practical simulation example. |
|---|---|
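As a rough illustration of the barrier-function-free constraint idea mentioned in the summary, the sketch below maps a state confined to an open interval onto an unbounded coordinate through a simple rational transformation. The specific mapping s = x/(k² − x²), the function names, and the bound k are illustrative assumptions, not the paper's actual transformed function; the point is only that a controller keeping s bounded automatically keeps x strictly inside (−k, k) without log-type or tangent-type terms.

```python
import math


def constrained_to_unconstrained(x: float, k: float) -> float:
    """Map a state x in the open interval (-k, k) to an unbounded coordinate s.

    Illustrative rational mapping (an assumption, not the paper's exact
    construction): s grows without bound as |x| approaches k, so a controller
    that keeps s bounded automatically keeps x strictly inside (-k, k),
    with no log-type or trigonometric barrier terms.
    """
    return x / (k * k - x * x)


def unconstrained_to_constrained(s: float, k: float) -> float:
    """Inverse map: recover the constrained state x in (-k, k) from s."""
    if s == 0.0:
        return 0.0
    # Solve s*x^2 + x - s*k^2 = 0 and take the root lying inside (-k, k).
    return (-1.0 + math.sqrt(1.0 + 4.0 * s * s * k * k)) / (2.0 * s)


if __name__ == "__main__":
    k = 2.0  # hypothetical symmetric state bound
    for x in (-1.99, -0.5, 0.0, 1.5, 1.99):
        s = constrained_to_unconstrained(x, k)
        x_back = unconstrained_to_constrained(s, k)
        print(f"x = {x:6.2f} -> s = {s:10.3f} -> back = {x_back:6.2f}")
```

Because this illustrative mapping is rational, its derivative is also rational in x, which is what keeps the resulting control expressions free of logarithmic or trigonometric terms in this sketch.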