Differentiable Programming based Step Size Optimization for LMS and NLMS Algorithms

Bibliographic Details
Published in: Proceedings ... Asia-Pacific Signal and Information Processing Association Annual Summit and Conference APSIPA ASC ... (Online), pp. 1721-1727
Main Authors: Hayashi, Kazunori; Shiohara, Kaede; Sasaki, Tetsuya
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019
ISSN: 2640-0103
DOI: 10.1109/APSIPAASC47483.2019.9023175

Summary: We propose TLMS (Trainable Least Mean Squares) and TNLMS (Trainable Normalized LMS) algorithms, which use a different step size parameter at each iteration, determined by a machine learning approach. It is known that the LMS algorithm can achieve fast convergence and a small steady-state error simultaneously by dynamically controlling the step size, as compared with a fixed step size; however, in conventional variable step size approaches, the step size parameter has been controlled in rather heuristic manners. In this study, based on the concept of differentiable programming, we unfold the iterative process of the LMS or NLMS algorithm and obtain a multilayer signal-flow graph similar to a neural network, where each layer has the step size of the corresponding iteration of the LMS or NLMS algorithm as an independent learnable parameter. We then optimize the step size parameters of all iterations by a machine learning approach, such as stochastic gradient descent. Numerical experiments demonstrate the performance of the proposed TLMS and TNLMS algorithms under various conditions.
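To make the unfolding idea concrete, the following is a minimal PyTorch sketch, not taken from the paper: it unrolls a fixed number of LMS updates into layers whose step sizes mu[k] are independent learnable parameters, and trains them by stochastic gradient descent on random system-identification trials. All names (TrainableLMS, train, w_true) and the training setup are illustrative assumptions; the authors' actual loss, data, and exact formulation may differ.

import torch


class TrainableLMS(torch.nn.Module):
    """Sketch of a TLMS-style unfolded filter: num_iters LMS updates,
    where layer k carries its own learnable step size mu[k]."""

    def __init__(self, num_taps: int, num_iters: int):
        super().__init__()
        self.num_taps = num_taps
        # One independent, trainable step size per unfolded iteration.
        self.mu = torch.nn.Parameter(0.05 * torch.ones(num_iters))

    def regressor(self, x: torch.Tensor, k: int) -> torch.Tensor:
        # u_k = [x_k, x_{k-1}, ..., x_{k-M+1}], zero-padded at the start.
        u = torch.flip(x[max(0, k - self.num_taps + 1): k + 1], dims=(0,))
        return torch.nn.functional.pad(u, (0, self.num_taps - u.numel()))

    def forward(self, x: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        """x: (T,) input, d: (T,) desired signal, T >= number of layers.
        Returns the a-priori errors of all unfolded iterations."""
        w = torch.zeros(self.num_taps, dtype=x.dtype)
        errors = []
        for k in range(self.mu.numel()):
            u = self.regressor(x, k)
            e = d[k] - torch.dot(w, u)     # a-priori estimation error
            w = w + self.mu[k] * e * u     # LMS update, learnable mu[k]
            # A TNLMS-style layer would normalize the update instead:
            # w = w + self.mu[k] * e * u / (torch.dot(u, u) + 1e-8)
            errors.append(e)
        return torch.stack(errors)


def train(model: TrainableLMS, num_taps: int, steps: int = 500) -> None:
    """Learn the step-size schedule by SGD over random
    system-identification trials with an unknown FIR system w_true."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    T = model.mu.numel()
    for _ in range(steps):
        w_true = torch.randn(num_taps)          # random unknown system
        x = torch.randn(T)                      # white input signal
        X = torch.stack([model.regressor(x, k) for k in range(T)])
        d = X @ w_true + 0.01 * torch.randn(T)  # noisy desired signal
        loss = model(x, d).pow(2).mean()        # mean-squared error
        opt.zero_grad()
        loss.backward()
        opt.step()


model = TrainableLMS(num_taps=8, num_iters=64)
train(model, num_taps=8)

The commented-out line sketches how an NLMS-style layer would normalize each update by the regressor energy. After training, the learned schedule model.mu can be read out and applied as a fixed variable step size sequence at run time, which is the sense in which the unfolded network replaces a heuristic step size rule.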