Feature LMS Algorithms
| Published in | 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4144 - 4148 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.04.2018 |
| ISSN | 2379-190X | 
| DOI | 10.1109/ICASSP.2018.8461674 | 
| Summary: | In recent years, there has been a growing effort in the learning-algorithms area to propose new strategies that detect and exploit sparsity in the model parameters. In many situations, the sparsity is hidden in the relations among these coefficients, so suitable tools are required to reveal it. This work proposes a set of LMS-type algorithms, collectively called Feature LMS (F-LMS) algorithms, that exploit a hidden feature of the unknown parameters in order to improve convergence speed and steady-state mean-squared error. The key idea is to apply linear transformations, by means of so-called feature matrices, to reveal the sparsity hidden in the coefficient vector, followed by a sparsity-promoting penalty function to exploit that sparsity. F-LMS algorithms for lowpass and highpass systems are also introduced, using simple feature matrices that require only trivial operations. Simulation results demonstrate that the proposed F-LMS algorithms bring about several performance improvements whenever the hidden sparsity of the parameters is exposed. |
|---|---|
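
The summary describes the F-LMS idea only in words: a linear feature transformation exposes hidden sparsity in the coefficient vector, and a sparsity-promoting penalty then exploits it. The sketch below illustrates that idea, not the paper's exact algorithm: the first-difference and adjacent-sum feature matrices for lowpass and highpass systems, the l1 penalty, and the step-size and penalty-weight values (`mu`, `alpha`) are all assumptions made for this example.

```python
import numpy as np

def feature_matrix_lowpass(N):
    """First-difference feature matrix: each row computes w[i] - w[i+1].

    For a smooth (lowpass-like) unknown system, adjacent coefficients are
    similar, so F @ w is approximately sparse.  This simple form is an
    assumption consistent with the summary's "trivial operations" remark.
    """
    F = np.zeros((N - 1, N))
    idx = np.arange(N - 1)
    F[idx, idx] = 1.0
    F[idx, idx + 1] = -1.0
    return F

def feature_matrix_highpass(N):
    """Adjacent-sum feature matrix: each row computes w[i] + w[i+1].

    For a highpass-like system whose coefficients alternate in sign,
    adjacent sums are small, so F @ w is again approximately sparse
    (assumed form, for illustration only).
    """
    F = np.zeros((N - 1, N))
    idx = np.arange(N - 1)
    F[idx, idx] = 1.0
    F[idx, idx + 1] = 1.0
    return F

def f_lms(x, d, N, F, mu=0.01, alpha=1e-3):
    """F-LMS-style adaptation sketch.

    Standard LMS gradient step plus a subgradient step on an assumed
    l1 penalty alpha * ||F @ w||_1 applied to the transformed coefficients.
    Returns the final coefficient estimate and the a-priori errors.
    """
    w = np.zeros(N)
    errors = np.zeros(len(x))
    for k in range(N, len(x)):
        xk = x[k - N + 1:k + 1][::-1]      # regressor, most recent sample first
        e = d[k] - w @ xk                  # a-priori output error
        # LMS term plus sparsity-promoting correction in the feature domain
        w = w + mu * e * xk - mu * alpha * F.T @ np.sign(F @ w)
        errors[k] = e
    return w, errors

# Usage: identify a smooth (lowpass-like) unknown system from noisy data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 16
    w_true = np.hanning(N)                 # smooth impulse response
    x = rng.standard_normal(20000)
    d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat, e = f_lms(x, d, N, feature_matrix_lowpass(N), mu=0.01, alpha=1e-3)
    print("squared coefficient error:", np.sum((w_hat - w_true) ** 2))
```

Under these assumptions, the extra term `mu * alpha * F.T @ np.sign(F @ w)` nudges the transformed vector `F @ w` toward zero, which is the "exploit the hidden sparsity" step the summary refers to; the paper's own penalty function and feature matrices may differ.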