Genetic Algorithm Based Hyper-Parameter Tuning to Improve the Performance of Machine Learning Models

Bibliographic Details
Published in: SN Computer Science, Vol. 4, No. 2, p. 119
Main Authors: Shanthi, D. L.; Chethan, N.
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore; Springer Nature B.V., 22.12.2022
ISSN: 2662-995X; 2661-8907
DOI: 10.1007/s42979-022-01537-8

Summary: Parameter settings have a great impact on the overall behavior of a machine learning model in terms of training time, infrastructure resource requirements, model convergence, and model accuracy. While training machine learning models, it is very difficult to choose optimum values for the various parameters that define the final model architecture. There are two types of parameters in a machine learning model: model parameters, which are estimated by fitting the given data to the model, and model hyperparameters, which control the learning process. Model parameters are determined by the machine itself, ideally through exploration, and the optimum values are picked automatically; for example, the weights of a neural network are updated at each iteration until an optimal value is reached. Hyperparameter tuning aims to determine the combination of hyperparameters that enables the model to perform best, and setting the optimal mix of hyperparameters is the only way to maximize model performance. The designer, however, is responsible for setting the hyperparameters that define the model architecture, such as the value of k in a kNN model, and the process of finding the optimum hyperparameters is referred to as hyperparameter tuning. Currently, this is handled in a variety of ways, including random search over a given solution space and sequential grid search of the solution space. In this article, these methods are compared against a genetic algorithm approach to hyperparameter tuning.
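
As an illustration of the genetic-algorithm approach the abstract describes, the following is a minimal sketch (not the authors' implementation) that evolves the k hyperparameter of a kNN classifier on the Iris dataset with scikit-learn; the population size, number of generations, mutation rate, and search range for k are arbitrary assumptions made for the example.

# Minimal illustrative sketch: a simple genetic algorithm that searches for a
# good value of k for a kNN classifier. All GA settings below are assumptions.
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

def fitness(k):
    # Fitness of an individual (a candidate k): mean cross-validated accuracy.
    model = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(model, X, y, cv=5).mean()

def evolve(pop_size=10, generations=5, k_min=1, k_max=30, mutation_rate=0.3):
    # Initial population: random candidate values of k.
    population = [random.randint(k_min, k_max) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        # Crossover: average two parents' k values; mutation: small random nudge.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) // 2
            if random.random() < mutation_rate:
                child += random.randint(-2, 2)
            children.append(min(max(child, k_min), k_max))
        population = parents + children
    # Return the best individual found across the final population.
    return max(population, key=fitness)

best_k = evolve()
print("Best k found:", best_k, "accuracy:", fitness(best_k))

The same loop structure (selection, crossover, mutation over a population of candidate hyperparameter settings) extends to multi-dimensional search spaces by encoding each individual as a vector of hyperparameters rather than a single integer.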