Analysis of the data-reusing LMS algorithm

Bibliographic Details
Published in: Proceedings of the 32nd Midwest Symposium on Circuits and Systems, pp. 1127-1130 vol. 2
Main Authors: Roy, S.; Shynk, J.J.
Format: Conference Proceeding
Language: English
Published: IEEE, 1989
DOI: 10.1109/MWSCAS.1989.102053

More Information
Summary: A variant of the popular LMS (least mean square) algorithm, termed the data-reusing LMS (DR-LMS) algorithm, is analyzed. This family of algorithms is parametrized by the number of reuses (L) of the weight update per data sample and can be considered to have properties intermediate between those of the LMS and the normalized LMS algorithms. Analysis and experiments indicate faster convergence at the cost of reduced stability regions and additional computational complexity that grows linearly with the number of reuses.
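The update described in the summary amounts to applying the ordinary LMS correction L times to each data pair before advancing to the next sample, so L = 1 recovers standard LMS and larger L pushes the behavior toward the normalized LMS algorithm. The Python sketch below is only an illustration of that idea under assumed settings: the step size mu, filter length, and toy system-identification signals are not from the paper.

```python
import numpy as np


def dr_lms(x, d, num_taps, mu, L):
    """Data-reusing LMS: apply the LMS correction L times per data pair."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        # Regressor of the most recent inputs: [x[n], x[n-1], ..., x[n-num_taps+1]]
        u = x[n - num_taps + 1 : n + 1][::-1]
        for _ in range(L):          # L = 1 reduces to the ordinary LMS update
            e = d[n] - w @ u        # a priori error with the current weights
            w = w + mu * e * u      # repeated (data-reusing) correction
    return w


# Toy system-identification run with an illustrative "unknown" FIR channel.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1])                     # assumed unknown system
d = np.convolve(x, h)[: len(x)] + 1e-3 * rng.standard_normal(len(x))
w = dr_lms(x, d, num_taps=3, mu=0.05, L=3)
print(np.round(w, 3))                              # should approach h
```

In this sketch a larger L speeds up convergence toward h at the price of a smaller stable range of mu and L times more multiply-accumulate work per sample, matching the trade-off stated in the summary.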