Machine Learning and Deep Learning Optimization Algorithms for Unconstrained Convex Optimization Problem
| Published in | IEEE Access Vol. 13, pp. 1817-1833 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2025 |
| Subjects | |
| ISSN | 2169-3536 |
| DOI | 10.1109/ACCESS.2024.3522361 |
| Summary: | This paper conducts a thorough comparative analysis of optimization algorithms for an unconstrained convex optimization problem. It contrasts traditional methods like Gradient Descent (GD) and Nesterov Accelerated Gradient (NAG) with modern techniques such as Adaptive Moment Estimation (Adam), Long Short-Term Memory (LSTM), and Multilayer Perceptron (MLP). Through empirical experiments, convergence speed, solution accuracy, and robustness are evaluated, providing insights to aid algorithm selection. The convergence dynamics of convex optimization are explored by analyzing classical algorithms and contemporary neural network (NN) methodologies. The study concludes with a comparative assessment of these algorithms' performance metrics and their respective strengths and weaknesses. |
|---|---|
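
The abstract names several first-order methods. As a minimal illustrative sketch (not code from the paper), the snippet below contrasts GD, Nesterov momentum, and Adam on a two-dimensional convex quadratic, a standard unconstrained convex test problem; the matrix, starting point, and hyperparameters are all assumptions chosen for demonstration.

```python
# Illustrative sketch (not the paper's code): comparing GD, Nesterov
# momentum, and Adam on an unconstrained convex quadratic
# f(x) = 0.5 * x^T A x - b^T x, whose unique minimizer solves A x = b.
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 1.0]])  # symmetric positive definite
b = np.array([1.0, -2.0])
x_star = np.linalg.solve(A, b)          # closed-form minimizer

def grad(x):
    return A @ x - b                    # gradient of the quadratic

def gd(x, lr=0.1, steps=200):
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def nesterov(x, lr=0.1, mu=0.9, steps=200):
    v = np.zeros_like(x)
    for _ in range(steps):
        v = mu * v - lr * grad(x + mu * v)  # gradient at look-ahead point
        x = x + v
    return x

def adam(x, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=200):
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g        # first-moment estimate
        v = b2 * v + (1 - b2) * g**2     # second-moment estimate
        m_hat = m / (1 - b1**t)          # bias correction
        v_hat = v / (1 - b2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

x0 = np.array([5.0, 5.0])               # arbitrary starting point
for name, opt in [("GD", gd), ("NAG", nesterov), ("Adam", adam)]:
    err = np.linalg.norm(opt(x0.copy()) - x_star)
    print(f"{name}: distance to minimizer = {err:.2e}")
```

Printing each method's final distance to the closed-form minimizer gives a rough feel for the convergence-speed comparisons the paper reports; the paper's actual experiments and metrics are more extensive.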