Safe and Reliable Training of Learning-Based Aerospace Controllers

Bibliographic Details
Published in: IEEE/AIAA Digital Avionics Systems Conference, pp. 1-10
Main Authors: Mandal, Udayan; Amir, Guy; Wu, Haoze; Daukantas, Ieva; Newell, Fletcher Lee; Ravaioli, Umberto; Meng, Baoluo; Durling, Michael; Hobbs, Kerianne; Ganai, Milan; Shim, Tobey; Katz, Guy; Barrett, Clark
Format: Conference Proceeding
Language: English
Published: IEEE, 29.09.2024
ISSN: 2155-7209
DOI: 10.1109/DASC62030.2024.10749499

Summary: In recent years, deep reinforcement learning (DRL) approaches have generated highly successful controllers for a myriad of complex domains. However, the opaque nature of these models limits their applicability in aerospace systems and safety-critical domains, in which a single mistake can have dire consequences. In this paper, we present novel advancements in both the training and verification of DRL controllers, which can help ensure their safe behavior. We showcase a design-for-verification approach utilizing k-induction and demonstrate its use in verifying liveness properties. In addition, we give a brief overview of neural Lyapunov Barrier certificates and summarize their capabilities on a case study. Finally, we describe several other novel reachability-based approaches which, despite failing to provide the guarantees of interest, could be effective for verification of other DRL systems, and could be of further interest to the community.