A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization
Format: Journal Article
Language: English
Published: 29.08.2024
DOI: 10.48550/arxiv.2408.16656
Summary: We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method uses two different stepsizes: one exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, while the other scales the entire step. We prove that this stepsize-splitting scheme yields a worst-case complexity bound that improves on the best known result for this class of problems. In terms of approximately satisfying the constraints, this bound matches that of deterministic SQP methods up to constant factors, while matching the known optimal rate for stochastic SQP methods for approximately minimizing the norm of the gradient of the Lagrangian. We also propose and analyze several variants of our algorithm: one builds on popular adaptive gradient methods for unconstrained stochastic optimization, while another incorporates a safeguarded line search along the constraint violation. Preliminary numerical experiments show competitive performance against a state-of-the-art stochastic SQP method and, as predicted by the theory, an improved rate of convergence in the constraint violation.
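The stepsize-splitting idea described in the summary can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name, the projection-based decomposition of the step, and the toy problem are hypothetical, and the paper's actual QP subproblem and stepsize rules differ. The sketch only shows the mechanism of one stepsize (`beta`) damping the noise-corrupted tangential component while another (`alpha`) scales the entire step.

```python
import numpy as np

def two_stepsize_step(x, grad_est, A, b, alpha, beta):
    """One sketch iteration for min f(x) s.t. A x = b (hypothetical form).

    The step splits into a deterministic normal component v (restoring the
    linear constraints) and a tangential component u driven by the noisy
    gradient estimate; beta scales only u, alpha scales the whole step.
    """
    c = A @ x - b                                        # constraint violation
    AAt = A @ A.T
    v = -A.T @ np.linalg.solve(AAt, c)                   # normal (deterministic) part
    P = np.eye(x.size) - A.T @ np.linalg.solve(AAt, A)  # projector onto null(A)
    u = -P @ grad_est                                    # tangential (noisy) part
    return x + alpha * (v + beta * u)

# Toy problem: min 0.5*||x - t||^2  s.t.  x0 + x1 = 1  (solution [1.5, -0.5])
t = np.array([2.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(200):
    grad = (x - t) + 0.01 * rng.standard_normal(2)  # noisy gradient estimate
    x = two_stepsize_step(x, grad, A, b, alpha=0.5, beta=0.5)
```

Because the constraints are deterministic, the normal component needs no noise damping; shrinking only `beta` preserves fast progress toward feasibility, which is the intuition behind the improved convergence rate in the constraint violation.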