Benchmarking footprints of continuous black-box optimization algorithms: Explainable insights into algorithm success and failure


Bibliographic Details
Published in: Swarm and Evolutionary Computation, Vol. 94, p. 101895
Main Authors: Nikolikj, Ana; Muñoz, Mario Andrés; Eftimov, Tome
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2025
ISSN: 2210-6502
DOI: 10.1016/j.swevo.2025.101895


More Information
Summary: The practices for comparing black-box optimization algorithms based on performance statistics over a benchmark suite are being increasingly criticized. Critics argue that these practices fail to explain why particular algorithms outperform others. Consequently, there is a growing demand for more robust comparison methods that assess the overall efficiency of the algorithms in terms of performance and also consider the specific landscape properties of the optimization problems on which the algorithms are compared. This study introduces a novel approach for comparing algorithms based on the concept of an algorithm footprint, which aims to identify easy and challenging problem instances for a given algorithm. A unique footprint is assigned to each algorithm; the footprints are then compared to highlight problem instances where an algorithm uniquely succeeds or fails, as well as how the algorithms complement each other across the problem instances. Our solution employs a multi-task regression model (MTR) to simultaneously link the performance of multiple algorithms with the landscape features of the problem instances. By applying an Explainable Machine Learning (XML) technique, we quantify and compare the importance of the landscape features for each algorithm. The methodology is applied to a portfolio of three different black-box optimization (BBO) algorithms, highlighting their success and failure on the Black-Box Optimization Benchmarking (BBOB) suite. The efficacy of our approach is further demonstrated through a comparative analysis with two existing algorithm comparison methods, showcasing the robustness and depth of insights provided by the proposed approach.

Highlights:
• A new method to identify easy and difficult problems for optimization algorithms.
• Each algorithm has a unique footprint revealing easy and difficult problems.
• The footprint enables explanation of performance variation through landscape features.
• Footprint comparison reveals algorithm complementarity across different problems.
• Provides robust insights when compared to SOTA approaches for algorithm benchmarking.
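The summary describes the pipeline only at a high level. As a rough illustration of the idea, the Python sketch below pairs a multi-output regressor with SHAP-based feature attribution: one performance column per algorithm, one feature-importance profile per algorithm. Everything in it is an assumption made for demonstration, not the authors' implementation: the random data, the feature and algorithm counts, the use of scikit-learn's MultiOutputRegressor as a stand-in for a genuine multi-task model, and the median-threshold "footprint" at the end, which is not the paper's definition.

# Minimal sketch, not the authors' implementation. Hypothetical random data
# stand in for landscape features X and per-algorithm performance Y.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_instances, n_features, n_algorithms = 120, 10, 3
X = rng.normal(size=(n_instances, n_features))    # e.g., ELA-style features
Y = rng.normal(size=(n_instances, n_algorithms))  # one column per algorithm

# One regressor per target: a simple stand-in for a true multi-task model (MTR).
mtr = MultiOutputRegressor(RandomForestRegressor(n_estimators=200, random_state=0))
mtr.fit(X, Y)

# Explainability step: mean |SHAP| value per feature, computed per algorithm,
# so that the feature-importance profiles of the algorithms can be compared.
for algo, estimator in enumerate(mtr.estimators_):
    shap_values = shap.TreeExplainer(estimator).shap_values(X)
    importance = np.abs(shap_values).mean(axis=0)
    print(f"algorithm {algo}: most influential feature index = {importance.argmax()}")

# Crude "footprint" stand-in (hypothetical thresholding, not the paper's
# definition): flag instances predicted to be easy for each algorithm.
pred = mtr.predict(X)
easy = pred < np.median(pred, axis=0)  # assumes lower values = better/easier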