Comparing measures of sparsity
| Published in | 2008 IEEE Workshop on Machine Learning for Signal Processing, pp. 55-60 |
|---|---|
| Main Authors | Niall Hurley, Scott Rickard |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.10.2008 |
| ISBN | 9781424423750, 1424423759 |
| ISSN | 1551-2541 |
| DOI | 10.1109/MLSP.2008.4685455 |
| Summary: | Sparsity is a recurrent theme in machine learning and is used to improve the performance of algorithms such as non-negative matrix factorization and the LOST algorithm. Our aim in this paper is to compare several commonly used sparsity measures according to intuitive attributes that a sparsity measure should have. Sparsity of representations of signals in fields such as blind source separation, compression, sampling and signal analysis has proved to be not just useful but a key factor in the success of algorithms in these areas. Intuitively, a sparse representation is one in which a small number of coefficients contain a large proportion of the energy. In this paper we discuss six properties (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates and Babies) that we believe a sparsity measure should have. The main contribution of this paper is a table which classifies commonly used sparsity measures according to whether or not they satisfy these six properties. Only one of these measures satisfies all six: the Gini index. |
|---|---|
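For a concrete feel for the measure the abstract singles out, the sketch below computes the Gini index of a coefficient vector from its sorted magnitudes. This is an illustration rather than code from the paper: the function name `gini_index`, the NumPy dependency, and the convention for the all-zero vector are assumptions; the formula is the standard Gini index applied to sorted coefficient magnitudes.

```python
import numpy as np

def gini_index(c):
    """Gini index of a coefficient vector: 0 for a perfectly flat vector,
    approaching 1 as the energy concentrates in a few coefficients."""
    c = np.sort(np.abs(np.asarray(c, dtype=float)))  # magnitudes, ascending
    n = c.size
    total = c.sum()
    if total == 0:
        return 0.0  # convention for the all-zero vector (an assumption here)
    k = np.arange(1, n + 1)
    # Gini = 1 - 2 * sum_k (c_(k) / ||c||_1) * ((n - k + 1/2) / n)
    return 1.0 - 2.0 * np.sum((c / total) * ((n - k + 0.5) / n))

# A concentrated vector scores higher (sparser) than a flat one:
print(gini_index([1, 0, 0, 0]))  # 0.75
print(gini_index([1, 1, 1, 1]))  # 0.0
```

A flat vector scores 0 and a vector with all of its energy in one coefficient scores 1 - 1/N, approaching 1 for large N, matching the intuition in the abstract that a sparse representation concentrates most of the energy in a small number of coefficients.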