A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization

Bibliographic Details
Published in: Optimization Methods & Software, Vol. 34, No. 1, pp. 150-171
Main Authors: Keskar, N.; Wächter, A.
Format: Journal Article
Language: English
Published: Abingdon: Taylor & Francis, 02.01.2019
ISSN: 1055-6788
eISSN: 1029-4937
DOI: 10.1080/10556788.2017.1378652

More Information
Summary: We consider the problem of minimizing a continuous function that may be non-smooth and non-convex, subject to bound constraints. We propose an algorithm that uses the L-BFGS quasi-Newton approximation of the problem's curvature together with a variant of the weak Wolfe line search. The key ingredient of the method is an active-set selection strategy that defines the subspace in which search directions are computed. To overcome the inherent shortsightedness of the gradient for a non-smooth function, we propose two strategies. The first relies on an approximation of the ε-minimum norm subgradient, and the second uses an iterative corrective loop that augments the active set based on the resulting search directions. While theoretical convergence guarantees have been elusive even for the unconstrained case, we present numerical results on a set of standard test problems to illustrate the efficacy of our approach, using an open-source Python implementation of the proposed algorithm.
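The authors' open-source Python implementation is referenced but not reproduced in this record. As a rough illustration of two ingredients the summary names, the sketch below pairs a standard gradient-based active-set guess for bound constraints with a bisection line search for the weak Wolfe conditions (in the spirit of the Lewis-Overton line search commonly used with L-BFGS on non-smooth problems). All names, tolerances, and the selection rule itself are illustrative assumptions, not the paper's code or its exact strategy.

    import numpy as np

    def free_variables(x, g, lower, upper, tol=1e-8):
        # Generic active-set guess for bound constraints: hold a variable at a
        # bound when it sits on that bound and the (sub)gradient pushes outward.
        # This standard rule merely stands in for the paper's selection strategy.
        at_lower = (x - lower <= tol) & (g > 0.0)
        at_upper = (upper - x <= tol) & (g < 0.0)
        return ~(at_lower | at_upper)  # boolean mask of "free" variables

    def weak_wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
        # Bisection search for a step t satisfying the weak Wolfe conditions
        #   f(x + t d) <= f(x) + c1 * t * g0     (sufficient decrease)
        #   grad(x + t d)^T d >= c2 * g0         (weak curvature)
        # where g0 = grad(x)^T d < 0 is the initial directional derivative.
        # Usable when f is non-smooth, since it never asks for more than
        # function values and one (sub)gradient per trial point.
        f0, g0 = f(x), grad(x) @ d
        lo, hi, t = 0.0, np.inf, 1.0
        for _ in range(max_iter):
            if f(x + t * d) > f0 + c1 * t * g0:
                hi = t                     # decrease condition failed: shrink
            elif grad(x + t * d) @ d < c2 * g0:
                lo = t                     # curvature condition failed: lengthen
            else:
                return t                   # both weak Wolfe conditions hold
            t = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        return t                           # best effort after max_iter trials

In an L-BFGS-style outer loop for bound-constrained problems, one would restrict the quasi-Newton direction to the variables flagged free by free_variables, run the line search along that direction, and keep iterates within the bounds; the corrective loop described in the summary would then revisit the active set when the resulting direction points into it.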