Optimal control of conditioned processes with feedback controls

Bibliographic Details
Published in: arXiv.org
Main Authors: Achdou, Yves; Laurière, Mathieu; Lions, Pierre-Louis
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.12.2019
ISSN: 2331-8422

Summary: We consider a class of closed-loop stochastic optimal control problems in finite time horizon, in which the cost is an expectation conditional on the event that the process has not exited a given bounded domain. An important difficulty is that the probability of the conditioning event decays as time grows. The optimality conditions consist of a system of partial differential equations, including a Hamilton-Jacobi-Bellman equation (backward with respect to time) and a Fokker-Planck equation (forward with respect to time) for the law of the conditioned process. The two equations are supplemented with Dirichlet conditions. Next, we discuss the asymptotic behavior as the time horizon tends to \(+\infty\). This leads to a new kind of optimal control problem driven by an eigenvalue problem related to a continuity equation with Dirichlet conditions on the boundary. We prove existence for the latter. We also propose numerical methods and supplement the various theoretical aspects with numerical simulations.
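As an illustration only, a generic coupled HJB/Fokker-Planck system with Dirichlet conditions of the kind described above can be sketched as follows; the drift-controlled dynamics on a bounded domain \(\Omega\subset\mathbb{R}^d\), the viscosity \(\nu>0\), the Hamiltonian \(H\), the running cost \(f\), and the data \(g\), \(u_T\), \(m_0\) are all assumed notation for this sketch, not the exact system derived in the paper (which in particular must account for the decay of the conditioning probability):
\[
\begin{aligned}
 &-\partial_t u - \nu \Delta u + H(x, Du) = f(x) && \text{in } \Omega \times (0,T) && \text{(HJB, backward in time),}\\
 &\ \partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\,\partial_p H(x, Du)\bigr) = 0 && \text{in } \Omega \times (0,T) && \text{(Fokker-Planck, forward in time),}\\
 &\ u = g, \quad m = 0 && \text{on } \partial\Omega \times (0,T),\\
 &\ u(\cdot,T) = u_T, \quad m(\cdot,0) = m_0 && \text{in } \Omega.
\end{aligned}
\]
In this sketch, \(u\) plays the role of a value function and \(m\) the density of the process; the homogeneous Dirichlet condition \(m = 0\) on \(\partial\Omega\) encodes absorption at the boundary, and the two equations are coupled through the feedback drift \(\partial_p H(x, Du)\).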
SourceType: Working Papers
ObjectType: Working Paper/Pre-Print