A Greedy EM Algorithm for Gaussian Mixture Learning
| Published in | Neural Processing Letters, Vol. 15, No. 1, pp. 77-87 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Dordrecht: Springer, 01.02.2002 (Springer Nature B.V.) |
| ISSN | 1370-4621, 1573-773X |
| DOI | 10.1023/A:1013844811137 |
| Summary: | Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture which tries to overcome these limitations. In particular, starting with a single component and adding components sequentially until a maximum number k, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture. |
|---|---|
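The greedy scheme the summary describes — start from a single component, then repeatedly add a component via a global candidate search followed by local EM refinement — can be sketched as follows. This is a minimal 1-D illustration under simplifying assumptions, not the authors' exact procedure: here candidate locations for the new component are random data points, its initial weight is fixed at 1/(k+1), and full EM serves as the local search.

```python
import numpy as np

def log_gauss(X, mu, var):
    # Elementwise log N(x; mu, var) for 1-D data (broadcasts over components).
    return -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)

def em_step(X, w, mu, var):
    # E-step: posterior responsibilities r[i, j] = P(component j | x_i).
    logp = np.log(w) + log_gauss(X[:, None], mu, var)          # shape (n, k)
    logp -= logp.max(axis=1, keepdims=True)                    # numerical stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: reestimate weights, means, variances.
    nk = r.sum(axis=0)
    w = nk / len(X)
    mu = (r * X[:, None]).sum(axis=0) / nk
    var = (r * (X[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6  # avoid collapse
    return w, mu, var

def loglik(X, w, mu, var):
    # Total log-likelihood of the data under the mixture.
    p = (w * np.exp(log_gauss(X[:, None], mu, var))).sum(axis=1)
    return np.log(p).sum()

def greedy_em(X, k_max, em_iters=50, n_cand=10, rng=None):
    rng = np.random.default_rng(rng)
    # Start with a single component: its ML estimate is the sample mean/variance.
    w, mu, var = np.array([1.0]), np.array([X.mean()]), np.array([X.var()])
    for _ in range(k_max - 1):
        cands = []
        # Global search: several candidate placements for the new component
        # (random data points here; the paper uses a more refined search).
        for _ in range(n_cand):
            a = 1.0 / (len(w) + 1)                 # assumed initial weight
            w_c = np.append((1 - a) * w, a)
            mu_c = np.append(mu, rng.choice(X))
            var_c = np.append(var, X.var() / (len(w) + 1))
            # Local search: refine the enlarged mixture with EM.
            for _ in range(em_iters):
                w_c, mu_c, var_c = em_step(X, w_c, mu_c, var_c)
            cands.append((loglik(X, w_c, mu_c, var_c), w_c, mu_c, var_c))
        # Keep the candidate mixture with the highest likelihood.
        _, w, mu, var = max(cands, key=lambda t: t[0])
    return w, mu, var
```

On clearly bimodal data, the greedy step reliably places the second component on the unmodelled mode, which is the behaviour that lets the method avoid the poor local maxima plain EM can fall into from a bad joint initialization.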