Discrete simulation optimization for tuning machine learning method hyperparameters

Bibliographic Details
Published in: Journal of Simulation: JOS, Vol. ahead-of-print, No. ahead-of-print, pp. 1-21
Main Authors: Ramamohan, Varun; Singhal, Shobhit; Raj Gupta, Aditya; Bolia, Nomesh Bhojkumar
Format: Journal Article
Language: English
Published: Taylor & Francis, 02.09.2024
ISSN: 1747-7778, 1747-7786
DOI: 10.1080/17477778.2023.2219401

More Information
Summary: An important aspect of machine learning (ML) involves controlling the learning process of the ML method in question to maximize its performance. Hyperparameter tuning (HPT) involves selecting suitable ML method parameters that control this learning process. Given that HPT can be conceptualized as a black-box optimization problem subject to stochasticity, simulation optimization (SO) methods appear well suited to this purpose. We therefore conceptualize HPT as a discrete SO problem and demonstrate the use of the Kim and Nelson (KN) ranking and selection method and of the stochastic ruler (SR) and adaptive hyperbox (AH) random search methods for HPT. We also construct the theoretical basis for applying the KN method. We demonstrate the application of the KN and SR methods to a wide variety of machine learning models, including deep neural network models, and then successfully benchmark the KN, SR, and AH methods against multiple state-of-the-art HPT methods.
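To illustrate the idea of treating HPT as a discrete SO problem, the sketch below applies a stochastic ruler search to a small hyperparameter grid. This is not the authors' code: the grid, the ruler bounds a and b, and the test-count schedule are illustrative assumptions, and a random forest's cross-validation error stands in for the noisy black-box objective.

```python
import math
import random

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative discrete hyperparameter grid (an assumption, not from the paper).
GRID = {
    "n_estimators": [25, 50, 100, 200],
    "max_depth": [4, 8, 16, None],
    "min_samples_split": [2, 5, 10],
}
KEYS = list(GRID)
X, y = load_digits(return_X_y=True)


def sample_error(point):
    """One noisy draw of the objective: validation error of a freshly trained model."""
    model = RandomForestClassifier(**dict(zip(KEYS, point)))
    # The forest's internal randomness makes repeated calls stochastic,
    # mimicking the noisy black-box objective of a simulation optimization problem.
    return 1.0 - cross_val_score(model, X, y, cv=3).mean()


def neighbor(point):
    """Resample one coordinate of the grid point to a different value."""
    i = random.randrange(len(KEYS))
    new = list(point)
    new[i] = random.choice([v for v in GRID[KEYS[i]] if v != point[i]])
    return tuple(new)


def stochastic_ruler(iters=30, a=0.0, b=0.15):
    """Stochastic ruler search for minimization (after Yan & Mukai, 1992).

    Theta ~ U(a, b) is the "ruler"; a and b are guessed bounds on the
    validation error here, not values taken from the paper.
    """
    x = tuple(random.choice(GRID[k]) for k in KEYS)
    for k in range(1, iters + 1):
        z = neighbor(x)
        m = max(1, int(math.log(k + 1)))  # number of tests grows slowly with k
        # Accept the candidate only if every noisy draw beats a fresh ruler sample.
        if all(sample_error(z) <= random.uniform(a, b) for _ in range(m)):
            x = z
    return dict(zip(KEYS, x))


if __name__ == "__main__":
    print(stochastic_ruler())
```

The same scaffold works for any ML model whose evaluation is stochastic; only sample_error and the grid change, which is what makes SO methods a natural fit for HPT.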