Machine Learning and Deep Learning Optimization Algorithms for Unconstrained Convex Optimization Problem

Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 1817-1833
Main Authors: Naeem, Kainat; Bukhari, Amal; Daud, Ali; Alsahfi, Tariq; Alshemaimri, Bader; Alhajlah, Mousa
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3522361

More Information
Summary: This paper conducts a thorough comparative analysis of optimization algorithms for an unconstrained convex optimization problem. It contrasts traditional methods such as Gradient Descent (GD) and Nesterov Accelerated Gradient (NAG) with modern techniques such as Adaptive Moment Estimation (Adam), Long Short-Term Memory (LSTM), and Multilayer Perceptron (MLP). Through empirical experiments, convergence speed, solution accuracy, and robustness are evaluated, providing insights to aid algorithm selection. The convergence dynamics of convex optimization are explored by analyzing both classical algorithms and contemporary neural network (NN) methodologies. The study concludes with a comparative assessment of these algorithms' performance metrics and their respective strengths and weaknesses.
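
The following is a minimal sketch (not the authors' code) illustrating three of the optimizers the paper compares, GD, NAG, and Adam, on a simple unconstrained convex quadratic f(x) = 0.5 x^T A x - b^T x. The problem instance, step sizes, and iteration counts are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite, so f is strongly convex
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # closed-form minimizer, used as a reference
L = np.linalg.eigvalsh(A).max()    # Lipschitz constant of the gradient

def grad(x):
    return A @ x - b               # gradient of f(x) = 0.5 x^T A x - b^T x

def gd(steps=500, lr=1.0 / L):
    # Plain gradient descent with a fixed step size.
    x = np.zeros(n)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def nag(steps=500, lr=1.0 / L, momentum=0.9):
    # Nesterov Accelerated Gradient: evaluate the gradient at a
    # look-ahead point before applying the momentum update.
    x, v = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        v = momentum * v - lr * grad(x + momentum * v)
        x = x + v
    return x

def adam(steps=500, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-coordinate step sizes from bias-corrected moment estimates.
    x, m, v = np.zeros(n), np.zeros(n), np.zeros(n)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # biased first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # biased second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias corrections
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

for name, solver in [("GD", gd), ("NAG", nag), ("Adam", adam)]:
    print(f"{name:4s} ||x - x*|| = {np.linalg.norm(solver() - x_star):.2e}")

Because the quadratic is strongly convex with a known minimizer, the final distance to x* gives a direct read on convergence accuracy, which is the kind of comparison the paper carries out across its larger set of methods.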