Bayesian k-Means as a “Maximization-Expectation” Algorithm

Bibliographic Details
Published in: Neural Computation, Vol. 21, No. 4, pp. 1145-1172
Main Authors: Kurihara, Kenichi; Welling, Max
Format: Journal Article
Language: English
Published: Cambridge, MA, USA: MIT Press, 01.04.2009
ISSN: 0899-7667; 1530-888X
DOI: 10.1162/neco.2008.12-06-421

Summary: We introduce a new class of “maximization-expectation” (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., the number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.
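
Illustrative sketch: the summary's central idea, maximizing over hidden variables while marginalizing over parameters, can be made concrete with a toy collapsed hard-assignment loop. The Python sketch below is an assumption-laden illustration, not the paper's algorithm: it fixes the number of clusters K, assumes isotropic Gaussian clusters with known noise variance sigma2 and a zero-mean conjugate prior with variance tau2 on each cluster mean, and omits the kd-tree and conga-line accelerations and the model-structure inference the paper describes. All names (me_kmeans, sigma2, tau2) are hypothetical.

    import numpy as np

    def me_kmeans(X, K, sigma2=1.0, tau2=10.0, n_iters=50, seed=0):
        # Toy maximization-expectation loop (hypothetical implementation):
        # maximize over the hard assignments z while each cluster mean is
        # integrated out under a conjugate N(0, tau2*I) prior, with known
        # isotropic noise variance sigma2.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        z = rng.integers(K, size=n)               # random initial hard assignments
        idx = np.arange(n)
        for _ in range(n_iters):
            changed = False
            for i in range(n):
                scores = np.empty(K)
                for k in range(K):
                    members = (z == k) & (idx != i)   # cluster k without point i
                    m = members.sum()
                    s = X[members].sum(axis=0)        # zeros(d) if the cluster is empty
                    # Posterior over the cluster mean given the remaining members.
                    post_var = 1.0 / (1.0 / tau2 + m / sigma2)
                    post_mean = (post_var / sigma2) * s
                    # Log predictive density of x_i with the mean marginalized:
                    # x_i ~ N(post_mean, (sigma2 + post_var) I); constants dropped.
                    pred_var = sigma2 + post_var
                    diff = X[i] - post_mean
                    scores[k] = -0.5 * (diff @ diff) / pred_var - 0.5 * d * np.log(pred_var)
                k_best = int(np.argmax(scores))   # "M-step" over the hidden variable
                if k_best != z[i]:
                    z[i], changed = k_best, True
            if not changed:                       # no reassignments: converged
                break
        return z

Because every decision is an argmax over closed-form predictive scores rather than a soft responsibility update, the assignments stay hard, which is the property the summary credits with enabling fast kd-tree and conga-line implementations; marginalizing the parameters is what would let a fuller version compare models with different numbers of clusters.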