Explainable and Safety Aware Deep Reinforcement Learning-Based Control of Nonlinear Discrete-Time Systems Using Neural Network Gradient Decomposition


Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, Vol. 22, pp. 13556-13568
Main Authors: Farzanegan, Behzad; Jagannathan, Sarangapani
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1545-5955, 1558-3783
DOI: 10.1109/TASE.2025.3554431


More Information
Summary: This paper presents an explainable deep-reinforcement learning (DRL)-based safety-aware optimal adaptive tracking (SOAT) scheme for a class of nonlinear discrete-time (DT) affine systems subject to state inequality constraints. The DRL-based SOAT scheme utilizes a multilayer neural network (MNN)-based actor-critic to estimate the cost function and the optimal policy, while the MNN update laws at each layer are tuned using both the singular value decomposition (SVD) of the activation-function gradient, which mitigates the vanishing-gradient issue, and the safety-aware Bellman error. An approximate safety-aware optimal policy is developed using the Karush-Kuhn-Tucker (KKT) conditions by incorporating a higher-order control barrier function (HOCBF) into the Hamiltonian through a Lagrange multiplier. The resulting safety-aware Bellman error enables safe exploration both during the online learning phase and at steady state, without any explicit changes to the actor-critic MNN update laws. To study explainability and gain insights, we employ the Shapley Additive Explanations (SHAP) method to construct an explainer model for the DRL-based SOAT scheme and identify the features that are most important in determining the optimal policy. The overall stability of the scheme is established. Finally, the effectiveness of the proposed method is demonstrated on a Shipboard Power System (SPS), achieving over a 35% reduction in cumulative cost compared to an existing actor-critic MNN optimal control policy.

Note to Practitioners: In practical control systems, meeting safety constraints is often critical, since ignoring them can lead to degraded performance or damage to equipment. This paper addresses the challenge of designing a safe DRL-based control approach that not only optimizes performance but also provides robust safety assurances. Our DRL-based SOAT scheme specifically targets nonlinear discrete-time systems that must satisfy state inequality constraints. The successful performance of the proposed controller in simulations of a Shipboard Power System demonstrates its potential for practical applications. DRL-based SOAT employs an MNN-based actor-critic framework for continuous learning and policy adaptation. Integrating HOCBFs directly into the optimization ensures safe operation even during online learning, which is critical for real-time applications. The addition of SHAP enhances transparency by identifying the key features that influence control decisions. Future work could adapt this framework to other constrained environments, such as autonomous vehicles, robotics, and industrial automation, where safety, optimality, and explainability are essential.
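To make the mechanisms named in the summary concrete, the Python sketch below (not taken from the paper) illustrates, under simplifying assumptions, two of the ideas: a safety-aware Bellman residual in which a barrier penalty weighted by a Lagrange-style multiplier is folded into the stage cost, and an SVD-based conditioning of a critic-layer gradient intended to counteract vanishing gradients. The barrier function h(x), the first-order (rather than higher-order) barrier condition, the network sizes, the multiplier lam, and the learning rate are all illustrative placeholders and do not reproduce the authors' update laws.

# Hedged sketch, not the authors' implementation: toy safety-aware Bellman
# residual with a CBF-style barrier penalty, plus an SVD-conditioned layer
# update that illustrates the vanishing-gradient mitigation idea.
import numpy as np

rng = np.random.default_rng(0)

def critic(x, W1, W2):
    # Two-layer critic V(x) = W2 @ tanh(W1 @ x); returns value and hidden activation.
    z = np.tanh(W1 @ x)
    return (W2 @ z).item(), z

def barrier_penalty(x, x_next, lam=1.0, alpha=0.5):
    # First-order discrete CBF-style penalty with assumed safe set {x : ||x|| <= 1},
    # i.e. h(x) = 1 - ||x||^2; violations of h(x_next) >= (1 - alpha) * h(x) are
    # penalized and scaled by a multiplier lam (placeholder for the KKT multiplier).
    h, h_next = 1.0 - float(x @ x), 1.0 - float(x_next @ x_next)
    return lam * max(0.0, -(h_next - (1.0 - alpha) * h))

def safety_aware_bellman_error(x, u, x_next, W1, W2, gamma=0.95, Q=1.0, R=0.1):
    # Bellman residual of the critic with the barrier penalty folded into the stage cost.
    cost = Q * float(x @ x) + R * float(u @ u) + barrier_penalty(x, x_next)
    v, _ = critic(x, W1, W2)
    v_next, _ = critic(x_next, W1, W2)
    return cost + gamma * v_next - v

def svd_conditioned_step(grad, lr=1e-2, eps=1e-6):
    # Rescale a layer gradient by the inverse of its singular values (then restore
    # the largest scale) so directions with tiny singular values are not suppressed.
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    return lr * np.max(s) * (U @ np.diag(1.0 / (s + eps)) @ Vt)

# One illustrative critic update on a random transition of a 2-state toy system.
n_state, hidden = 2, 8
W1 = 0.1 * rng.standard_normal((hidden, n_state))
W2 = 0.1 * rng.standard_normal((1, hidden))
x = rng.standard_normal(n_state)
u = rng.standard_normal(1)
x_next = 0.9 * x                      # placeholder dynamics

delta = safety_aware_bellman_error(x, u, x_next, W1, W2)
_, z = critic(x, W1, W2)
grad_W2 = delta * z.reshape(1, -1)    # residual times dV/dW2 for the output layer
W2 = W2 - svd_conditioned_step(grad_W2)
print(f"safety-aware Bellman residual: {delta:.4f}")

In the paper, the HOCBF enters the Hamiltonian through the KKT conditions rather than as a soft penalty, and the SVD-based tuning is applied within the actor-critic MNN update laws at every layer; the sketch only conveys the flavor of those two ideas.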