Optimal control of conditioned processes with feedback controls
Published in | arXiv.org
---|---
Main Authors |
Format | Paper
Language | English
Published | Ithaca: Cornell University Library, arXiv.org, 18.12.2019
ISSN | 2331-8422
Summary: We consider a class of closed-loop stochastic optimal control problems on a finite time horizon, in which the cost is an expectation conditioned on the event that the process has not exited a given bounded domain. An important difficulty is that the probability of this conditioning event decays as time grows. The optimality conditions consist of a system of partial differential equations: a Hamilton-Jacobi-Bellman equation (backward in time) and a Fokker-Planck equation (forward in time) for the law of the conditioned process. The two equations are supplemented with Dirichlet boundary conditions. Next, we discuss the asymptotic behavior as the time horizon tends to \(+\infty\). This leads to a new kind of optimal control problem driven by an eigenvalue problem related to a continuity equation with Dirichlet conditions on the boundary. We prove existence for the latter. We also propose numerical methods and supplement the various theoretical aspects with numerical simulations.
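The summary does not give the precise equations; as a purely illustrative sketch (with a generic Hamiltonian \(H\), diffusion coefficient \(\nu\), and bounded domain \(\Omega\), all assumed here rather than taken from the paper), a backward-forward system of the kind described typically reads

\[
\begin{aligned}
-\partial_t u - \nu \Delta u + H(x, \nabla u) &= 0 && \text{in } (0,T)\times\Omega && \text{(HJB, backward)},\\
\partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\,\nabla_p H(x,\nabla u)\bigr) &= 0 && \text{in } (0,T)\times\Omega && \text{(Fokker-Planck, forward)},\\
u = 0,\quad m &= 0 && \text{on } (0,T)\times\partial\Omega, \\
u(T,\cdot) = u_T,\quad m(0,\cdot) &= m_0 && \text{in } \Omega,
\end{aligned}
\]

where \(u\) is the value function, \(m\) the law of the conditioned process, and \(-\nabla_p H(x,\nabla u)\) the optimal feedback drift. In the same spirit, the long-horizon limit mentioned in the summary involves a principal-eigenvalue problem; a generic (again assumed) form is

\[
\nu \Delta m^* - \operatorname{div}\bigl(b(x)\, m^*\bigr) = -\lambda\, m^* \ \text{ in } \Omega,
\qquad m^* = 0 \ \text{ on } \partial\Omega,
\]

with \(b\) a stationary optimal drift and \(\lambda > 0\) the eigenvalue governing the exponential decay of the probability of remaining in \(\Omega\).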