Differentiable Programming based Step Size Optimization for LMS and NLMS Algorithms

Bibliographic Details
Published in: Proceedings ... Asia-Pacific Signal and Information Processing Association Annual Summit and Conference APSIPA ASC ... (Online), pp. 1721-1727
Main Authors: Hayashi, Kazunori; Shiohara, Kaede; Sasaki, Tetsuya
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019
Subjects: Convergence; Machine learning algorithms; Optimization; Signal processing algorithms; Time-domain analysis; Time-varying systems; Training
ISSN: 2640-0103
DOI: 10.1109/APSIPAASC47483.2019.9023175

Abstract: We propose the TLMS (Trainable Least Mean Squares) and TNLMS (Trainable Normalized LMS) algorithms, which use a different step size parameter at each iteration, determined by a machine learning approach. It is known that the LMS algorithm can achieve fast convergence and a small steady-state error simultaneously by dynamically controlling the step size, as compared with a fixed step size; however, in conventional variable step size approaches, the step size parameter has been controlled in rather heuristic manners. In this study, based on the concept of differentiable programming, we unfold the iterative process of the LMS or NLMS algorithm and obtain a multilayer signal-flow graph similar to a neural network, where each layer holds the step size of the corresponding LMS or NLMS iteration as an independent learnable parameter. We then optimize the step size parameters of all iterations using a machine learning approach such as stochastic gradient descent. Numerical experiments demonstrate the performance of the proposed TLMS and TNLMS algorithms under various conditions.
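
To make the unfolded structure concrete, the following is a minimal sketch (not the authors' code) of the TLMS idea in PyTorch: T iterations of the LMS update are unrolled into a feed-forward computation whose per-iteration step sizes mu[t] are learnable parameters, trained by stochastic gradient descent on randomly generated system-identification problems. All names and dimensions (T, n_taps, batch, the learning rate) are illustrative assumptions, not values taken from the paper.

import torch

# Illustrative sketch of deep-unfolded LMS (TLMS); all constants are assumptions.
T = 20        # number of unfolded LMS iterations (network layers)
n_taps = 8    # length of the unknown FIR system to identify
batch = 64    # training problems per SGD step

# One learnable step size per unfolded iteration (the TLMS parameters).
mu = torch.nn.Parameter(0.1 * torch.ones(T))

def unfolded_tlms(x, d, mu):
    # x: (batch, T, n_taps) regressor vectors, d: (batch, T) desired outputs.
    # Runs T LMS updates with layer-wise step sizes and returns the
    # final weight estimates, shape (batch, n_taps).
    w = torch.zeros(x.shape[0], x.shape[2])
    for t in range(T):
        e = d[:, t] - (x[:, t] * w).sum(dim=1)      # a priori error e(t)
        w = w + mu[t] * e.unsqueeze(1) * x[:, t]    # LMS update with learnable mu[t]
    return w

opt = torch.optim.SGD([mu], lr=1e-2)
for step in range(1000):
    w_true = torch.randn(batch, n_taps)             # random unknown systems
    x = torch.randn(batch, T, n_taps)               # random regressor vectors
    d = (x * w_true.unsqueeze(1)).sum(dim=2)        # noiseless desired signals
    loss = ((unfolded_tlms(x, d, mu) - w_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

For the TNLMS variant, the update inside the loop would additionally divide by the regressor energy, e.g. w = w + mu[t] * e.unsqueeze(1) * x[:, t] / (x[:, t] ** 2).sum(dim=1, keepdim=True), so that each learnable mu[t] plays the role of a normalized step size.
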
Authors:
– Hayashi, Kazunori (RIKEN Center for Advanced Intelligence Project)
– Shiohara, Kaede (Osaka City University, Osaka, Japan)
– Sasaki, Tetsuya (Osaka City University, Osaka, Japan)
Discipline: Engineering
EISBN: 9781728132488, 1728132487
EISSN: 2640-0103
Pages: 1721-1727 (7 pages)
Genre: Original research
URI: https://ieeexplore.ieee.org/document/9023175