A tensor LMS algorithm
| Published in | 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3347-3351 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.04.2015 |
| ISSN | 1520-6149 |
| DOI | 10.1109/ICASSP.2015.7178591 |
| Summary: | Although the LMS algorithm is often preferred in practice due to its many favorable implementation properties, it suffers from slow learning once the parameter space to be estimated becomes large. Many ideas have been proposed to introduce a priori knowledge into the algorithm to speed up its learning rate; recently, sparsity concepts have also become of interest for such algorithms. In this contribution we follow a different path by focusing on the separability of linear operators, a typical property of interest when dealing with tensors. Once such a separability property is given, a gradient-type algorithm can be derived with a significant increase in learning rate. Even if separability holds only to a certain extent, we show that the algorithm can still provide gains. We derive quality and quantity measures to describe the algorithmic behavior in such contexts and evaluate its properties by Monte Carlo simulations. |
|---|---|
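The core idea in the summary can be sketched in a few lines: if the impulse response to identify is separable, h = kron(a, b), an LMS-style stochastic-gradient update can act on the two small factors instead of the full length-MN vector. The following is a minimal illustration under that assumption, not the paper's exact algorithm; dimensions, step size, and initialization are chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 8
a_true = rng.standard_normal(M)
b_true = rng.standard_normal(N)
h_true = np.kron(a_true, b_true)   # separable "true" system of length M*N = 32

mu = 0.005                         # small step size, kept well inside the stable range
a = np.ones(M)                     # nonzero initial factor estimates (illustrative)
b = np.ones(N)

for _ in range(40000):
    x = rng.standard_normal(M * N)  # white input regressor
    d = h_true @ x                  # noiseless desired signal
    # Reshape x so that b @ X @ a equals np.kron(a, b) @ x:
    X = x.reshape(M, N).T           # X[j, i] = x[i*N + j]
    e = d - b @ X @ a               # a-priori error
    # Alternating LMS-type gradient updates of the two factors:
    a = a + mu * e * (X.T @ b)
    b = b + mu * e * (X @ a)

# Relative misalignment of the reconstructed full-length filter:
rel_err = np.linalg.norm(np.kron(a, b) - h_true) / np.linalg.norm(h_true)
```

Each iteration still costs O(MN) like full-length LMS, but the effective number of adapted parameters drops from MN to M + N, which is one way to see the faster learning the summary refers to. The scale ambiguity kron(c*a, b/c) = kron(a, b) is harmless here, since only the reconstructed product matters.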