Evolutionary Optimization of High-Dimensional Multiobjective and Many-Objective Expensive Problems Assisted by a Dropout Neural Network

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 52, No. 4, pp. 2084-2097
Main Authors: Guo, Dan; Wang, Xilu; Gao, Kailai; Jin, Yaochu; Ding, Jinliang; Chai, Tianyou
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2022
ISSN: 2168-2216, 2168-2232
DOI: 10.1109/TSMC.2020.3044418

Summary: Gaussian processes (GPs) are widely used in surrogate-assisted evolutionary optimization of expensive problems, mainly due to their ability to provide a confidence level for their outputs, which makes it possible to adopt principled surrogate management methods such as the acquisition function used in Bayesian optimization. Unfortunately, GPs become less practical for high-dimensional multiobjective and many-objective optimization because their computational complexity is cubic in the number of training samples. In this article, we propose a computationally efficient dropout neural network (EDN) to replace the Gaussian process, together with a new model management strategy that balances convergence and diversity, to assist evolutionary algorithms in solving high-dimensional multiobjective and many-objective expensive optimization problems. Whereas a conventional dropout neural network must save a large number of network models during training to calculate the confidence level, the EDN needs only a single network model to estimate the fitness and its confidence level, by randomly ignoring neurons during both training and testing. Extensive experimental studies on benchmark problems with up to 100 decision variables and 20 objectives demonstrate that, compared with the state of the art, the proposed algorithm is not only highly competitive in performance but also computationally more scalable to high-dimensional many-objective optimization problems. Finally, the proposed algorithm is validated on an operational optimization problem of crude oil distillation units, further confirming its ability to handle expensive problems under a limited computational budget.
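
The core mechanism the summary describes, keeping dropout active at test time so that repeated stochastic forward passes through a single trained network yield both a fitness estimate and a confidence level, is commonly known as Monte Carlo dropout. The following is a minimal sketch of that idea in PyTorch, not the authors' code: the DropoutSurrogate class, the predict_with_confidence helper, and all sizes (hidden width, dropout rate, number of passes, a 100-variable, 3-objective problem) are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

class DropoutSurrogate(nn.Module):
    """A small feedforward surrogate with dropout after each hidden layer."""
    def __init__(self, n_vars: int, n_objs: int, hidden: int = 128, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_objs),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_confidence(model: DropoutSurrogate, x: torch.Tensor, n_passes: int = 30):
    """Estimate objective values and their uncertainty from one trained model.

    Dropout stays active at inference (model.train()), so each forward pass
    samples a different thinned sub-network; the sample mean approximates the
    fitness and the sample standard deviation serves as the confidence level.
    """
    model.train()  # keep dropout layers stochastic during prediction
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_passes)])
    return samples.mean(dim=0), samples.std(dim=0)

# Hypothetical usage on a 100-variable, 3-objective problem:
surrogate = DropoutSurrogate(n_vars=100, n_objs=3)
candidates = torch.rand(50, 100)  # 50 candidate solutions from an EA population
mean, std = predict_with_confidence(surrogate, candidates)
# mean and std can then feed an acquisition-style criterion, e.g. a lower
# confidence bound mean - kappa * std, to pick solutions for true evaluation.

Note that, unlike a GP, the cost of each prediction here does not grow with the number of training samples, which is the scalability argument the summary makes.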