Evaluating Performance of Adam Optimization by Proposing Energy Index

Bibliographic Details
Published in: Recent Trends in Image Processing and Pattern Recognition, Vol. 1576, pp. 156-168
Main Authors: Bhandari, Mohan; Parajuli, Pramod; Chapagain, Pralhad; Gaur, Loveleen
Format: Book Chapter
Language: English
Published: Springer International Publishing AG, Switzerland, 2022
Series: Communications in Computer and Information Science
ISBN: 3031070046; 9783031070044
ISSN: 1865-0929; 1865-0937
DOI: 10.1007/978-3-031-07005-1_15


More Information
Summary: The adjustment of the learning rate (η), bias, and additional parameters throughout backpropagation is crucial for the performance of machine learning algorithms. The Adam optimization technique tunes the learning parameters by utilising the exponential decay of past gradients and their squares. However, the optimizer requires the engagement of frequently occurring features from the datasets, which play a significant role in improving the performance of machine learning algorithms. In this paper, an energy model of a neuron is designed to calculate an energy index from frequently occurring features, and this index is introduced into the Adam optimizer. The classification performance of the proposed energy-modelled Adam optimizer is evaluated on Logistic Regression (single-layered) and Support Vector Machine (hyperplane-based) machine learning algorithms using the CIFAR10, MNIST and Fashion MNIST datasets. Optimized with the proposed optimizer, Logistic Regression achieved training accuracies of 90.79%, 99.02% and 95.87%, whereas Support Vector Machine achieved training accuracies of 39.04%, 80.80% and 82.29% on CIFAR10, MNIST and Fashion MNIST, respectively.
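The abstract refers to the standard Adam update, which maintains exponentially decaying averages of past gradients and of their squares. As a point of reference, below is a minimal Python sketch of that baseline update; the energy-index modification is the chapter's own contribution and is not reproduced here. The function name adam_step and the default hyperparameter values are illustrative, following the common Adam formulation.

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # First moment: exponential decay of past gradients.
        m = beta1 * m + (1 - beta1) * grad
        # Second moment: exponential decay of past squared gradients.
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Bias correction for the zero-initialised moments at step t.
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Parameter update scaled by the learning rate (eta in the abstract).
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Illustrative usage on a toy quadratic loss (hypothetical values):
    theta = np.zeros(4)
    m, v = np.zeros(4), np.zeros(4)
    for t in range(1, 101):
        grad = 2 * (theta - 1.0)  # gradient of sum((theta - 1)^2)
        theta, m, v = adam_step(theta, grad, m, v, t)

The chapter's proposed variant would additionally weight this update using an energy index computed from frequently occurring features, as described in the summary above.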