A Greedy EM Algorithm for Gaussian Mixture Learning

Bibliographic Details
Published in: Neural Processing Letters, Vol. 15, No. 1, pp. 77-87
Main Authors: Vlassis, Nikos; Likas, Aristidis
Format: Journal Article
Language: English
Published: Dordrecht: Springer, 01.02.2002 (Springer Nature B.V.)
ISSN: 1370-4621, 1573-773X
DOI: 10.1023/A:1013844811137

Summary: Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture which tries to overcome these limitations. In particular, starting with a single component and adding components sequentially until a maximum number k, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture.
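
The summary describes the greedy scheme only at a high level. The following is a minimal illustrative sketch of that idea in Python, assuming NumPy and scikit-learn; it is not the authors' exact procedure. The random-restart candidate search below is a crude stand-in for the paper's global search over insertion locations, scikit-learn's GaussianMixture supplies the EM refits (the local search), and the function name greedy_gmm and its parameters are hypothetical.

import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, k_max, n_candidates=10, seed=0):
    """Sketch of greedy mixture learning: grow from 1 to k_max components.

    Illustrative only: each step seeds the new component at several
    randomly chosen data points (a stand-in for the paper's global
    search) and keeps the candidate whose EM refit (the local search)
    attains the best likelihood.
    """
    rng = np.random.default_rng(seed)
    # Start with a single Gaussian fitted to all of the data.
    best = GaussianMixture(n_components=1).fit(X)
    for k in range(2, k_max + 1):
        best_next, best_ll = None, -np.inf
        for _ in range(n_candidates):
            # Keep the current means; place the new component's mean
            # at a randomly chosen data point.
            means = np.vstack([best.means_, X[rng.integers(len(X))]])
            gm = GaussianMixture(n_components=k, means_init=means).fit(X)
            ll = gm.score(X)  # mean per-sample log-likelihood
            if ll > best_ll:
                best_next, best_ll = gm, ll
        best = best_next
    return best

For instance, greedy_gmm(X, k_max=5) grows a five-component mixture one component at a time; scoring the candidates on a held-out set instead of X would mirror the test-set likelihood criterion mentioned in the summary.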