A multi-layer line search method to improve the initialization of optimization algorithms

Bibliographic Details
Published in: European Journal of Operational Research, Vol. 247, no. 3, pp. 711–720
Main Authors: Ivorra, Benjamin; Mohammadi, Bijan; Ramos, Angel Manuel
Format: Journal Article
Language: English
Published: Amsterdam, Elsevier B.V., 16.12.2015
ISSN: 0377-2217
eISSN: 1872-6860
DOI: 10.1016/j.ejor.2015.06.044


Summary:
• We describe a metaheuristic method to improve optimization algorithms.
• We apply our approach to five particular optimization methods.
• We check the performance of the resulting algorithms on various benchmark cases.
• We compare our results with other metaheuristic algorithms.
• Our algorithms exhibit good performance.

We introduce a novel metaheuristic methodology to improve the initialization of a given deterministic or stochastic optimization algorithm. Our objective is to improve the performance of the considered algorithm, called the core optimization algorithm, by reducing its number of cost function evaluations, increasing its success rate, and boosting the precision of its results. In our approach, the core optimization is treated as a sub-optimization problem for a multi-layer line search method. The approach is presented and implemented for several particular core optimization algorithms: Steepest Descent, Heavy-Ball, Genetic Algorithm, Differential Evolution, and Controlled Random Search. We validate our methodology on a set of low- and high-dimensional benchmark problems (i.e., problems of dimension between 2 and 1000). The results are compared to those obtained with the core optimization algorithms alone and with two additional global optimization methods (Direct Tabu Search and Continuous Greedy Randomized Adaptive Search), which also aim at improving the initial condition for the core algorithms. The numerical results indicate that our approach improves the performance of the core optimization algorithms and allows us to generate algorithms that are more efficient than the other optimization methods studied here. A Matlab optimization package called “Global Optimization Platform” (GOP), implementing the algorithms presented here, has been developed and can be downloaded at: http://www.mat.ucm.es/momat/software.htm
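The core idea summarized above — treating the result of the core optimizer as a function of its starting point, and running a line search over that starting point — can be illustrated with a minimal Python sketch. This is not the authors' GOP implementation (which is in Matlab); all function names and parameters below are hypothetical, the core algorithm is a simple steepest descent with numerical gradients, and the "multi-layer" structure is reduced to a few line-search passes along random directions with a shrinking search radius.

```python
import numpy as np

def numerical_grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar cost function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def core_optimizer(f, x0, steps=50, lr=0.1):
    """Hypothetical core algorithm: fixed-step steepest descent from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * numerical_grad(f, x)
    return x

def line_search_init(f, x0, n_layers=3, n_trials=10, radius=1.0, rng=None):
    """Search over initial conditions for the core optimizer.

    Each candidate starting point is scored by the cost value reached
    AFTER running the core optimizer from it, so the outer search solves
    a sub-optimization problem whose 'cost' is the core run's outcome.
    """
    rng = np.random.default_rng(rng)
    best_x0 = np.asarray(x0, dtype=float)
    best_val = f(core_optimizer(f, best_x0))
    for _ in range(n_layers):
        # Random search direction for this layer, then a 1-D scan along it.
        d = rng.standard_normal(best_x0.shape)
        d /= np.linalg.norm(d)
        for t in np.linspace(-radius, radius, n_trials):
            cand = best_x0 + t * d
            val = f(core_optimizer(f, cand))
            if val < best_val:
                best_val, best_x0 = val, cand
        radius *= 0.5  # refine the search around the improved start
    # Final core run from the improved initial condition.
    return core_optimizer(f, best_x0)
```

For a well-conditioned cost such as f(x) = ||x − 1||², the wrapper adds little over the core descent alone; the benefit targeted by the paper appears on multimodal problems, where a better starting point changes which basin the core algorithm converges to.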