Analysis of the data-reusing LMS algorithm
| Published in | Proceedings of the 32nd Midwest Symposium on Circuits and Systems, pp. 1127-1130, vol. 2 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 1989 |
| DOI | 10.1109/MWSCAS.1989.102053 |
| Summary: | A variant of the popular LMS (least mean square) algorithm, termed the data-reusing LMS (DR-LMS) algorithm, is analyzed. This family of algorithms is parametrized by the number of reuses (L) of the weight update per data sample and can be considered to have properties intermediate between the LMS and the normalized LMS algorithms. Analysis and experiments indicate faster convergence at the cost of reduced stability regions and additional computational complexity that grows linearly in the number of reuses. |
|---|---|
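
The summary describes the algorithm family only at a high level. The sketch below shows one common reading of a data-reusing LMS update, in which the standard LMS weight correction is repeated L times on the same input/desired pair before moving to the next sample; L = 1 reduces to ordinary LMS, and larger L pushes the behavior toward normalized LMS. The function name, interface, and step-size handling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dr_lms(x, d, num_taps, mu, L):
    """Sketch of a data-reusing LMS adaptive filter (assumed interface).

    x : input signal, d : desired signal, num_taps : filter length,
    mu : step size, L : number of reuses of each data pair per sample.
    Per-sample cost is linear in L.
    """
    w = np.zeros(num_taps)            # adaptive filter weights
    e_out = np.zeros(len(x))          # a priori error, for inspection
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]   # current regressor (most recent sample first)
        e_out[n] = d[n] - w @ u       # error before updating on this sample
        for _ in range(L):            # reuse the same (u, d[n]) pair L times
            e = d[n] - w @ u          # error with the current weights
            w = w + mu * e * u        # standard LMS correction
    return w, e_out
```

As a quick check of the L = 1 case against larger L, one can identify a short FIR system from noisy data and compare how fast the error decays; the stability caveat from the summary applies, so mu typically has to shrink as L grows.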