Per-run Algorithm Selection with Warm-Starting Using Trajectory-Based Features

Bibliographic Details
Published in: Lecture Notes in Computer Science, Vol. 13398, pp. 46-60
Main Authors: Kostovska, Ana; Jankovic, Anja; Vermetten, Diederick; de Nobel, Jacob; Wang, Hao; Eftimov, Tome; Doerr, Carola
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2022
Series: Lecture Notes in Computer Science
ISBN: 9783031147135, 3031147138
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-031-14714-2_4

Summary: Per-instance algorithm selection seeks to recommend, for a given problem instance and a given performance criterion, one or several suitable algorithms that are expected to perform well for the particular setting. The selection is classically done offline, using openly available information about the problem instance or features that are extracted from the instance during a dedicated feature extraction step. This ignores valuable information that the algorithms accumulate during the optimization process. In this work, we propose an alternative, online algorithm selection scheme which we coin as "per-run" algorithm selection. In our approach, we start the optimization with a default algorithm, and, after a certain number of iterations, extract instance features from the observed trajectory of this initial optimizer to determine whether to switch to another optimizer. We test this approach using the CMA-ES as the default solver, and a portfolio of six different optimizers as potential algorithms to switch to. In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase. We show that our approach outperforms static per-instance algorithm selection. We also compare two different feature extraction principles, based on exploratory landscape analysis and time series analysis of the internal state variables of the CMA-ES, respectively. We show that a combination of both feature sets provides the most accurate recommendations for our test cases, taken from the BBOB function suite from the COCO platform and the YABBOB suite from the Nevergrad platform.
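The per-run scheme summarized above can be illustrated with a minimal, self-contained sketch. Note the heavy simplifications: a toy elitist (1+1)-ES stands in for the CMA-ES default solver, hand-rolled trajectory statistics stand in for the paper's ELA and internal-state features, and a hard-coded stagnation rule stands in for the trained selector choosing among the six-algorithm portfolio. All function names are illustrative, not from the paper's code.

```python
import numpy as np

def sphere(x):
    """Toy test function (BBOB f1-style sphere)."""
    return float(np.sum(x ** 2))

def run_default_solver(f, x0, sigma0, iters, rng):
    """Phase 1: a toy elitist (1+1)-ES standing in for the default CMA-ES.
    Returns the incumbent, its step size, and the trajectory of f-values."""
    x, sigma = np.array(x0, dtype=float), sigma0
    traj = [f(x)]
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.shape)
        if f(cand) <= traj[-1]:
            x, sigma = cand, sigma * 1.1   # 1/5th-success-style adaptation
        else:
            sigma *= 0.9
        traj.append(f(x))
    return x, sigma, np.array(traj)

def trajectory_features(traj):
    """Cheap time-series features of the observed trajectory, standing in
    for the ELA / CMA-ES internal-state features used in the paper."""
    impr = np.diff(traj)
    return {
        "final": traj[-1],
        "mean_improvement": float(impr.mean()),
        "stagnation_ratio": float(np.mean(impr >= 0)),  # share of non-improving steps
    }

def select_and_warm_start(f, x, sigma, feats, iters, rng):
    """Phase 2: a hypothetical selector decides, from the trajectory features,
    whether to 'switch'; either way the next solver is warm-started with the
    state (x, sigma) accumulated in phase 1 instead of restarting from scratch."""
    if feats["stagnation_ratio"] > 0.5:
        sigma = max(sigma, 0.5)  # stand-in for switching to a more explorative solver
    x2, _, traj2 = run_default_solver(f, x, sigma, iters, rng)
    return x2, traj2

rng = np.random.default_rng(0)
x1, s1, traj1 = run_default_solver(sphere, [2.0, -1.5, 0.5], 0.5, 200, rng)
feats = trajectory_features(traj1)
x2, traj2 = select_and_warm_start(sphere, x1, s1, feats, 200, rng)
print(traj2[-1] <= feats["final"])  # prints True: warm-started phase never regresses
```

Because both phases are elitist, the warm-started second phase is guaranteed not to lose the progress of the first, which is the key benefit the paper attributes to warm-starting over a cold restart of the second optimizer.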