Identifying the number of components in Gaussian mixture models using numerical algebraic geometry

Bibliographic Details
Published in: Journal of Algebra and Its Applications, Vol. 19, No. 11, p. 2050204
Main Authors: Shirinkam, Sara; Alaeddini, Adel; Gross, Elizabeth
Format: Journal Article
Language: English
Published: Singapore, World Scientific Publishing Company (World Scientific Publishing Co. Pte. Ltd.), 01.11.2020
ISSN: 0219-4988
eISSN: 1793-6829
DOI: 10.1142/S0219498820502047

Summary: Using Gaussian mixture models for clustering is a statistically mature method in data science, with numerous successful applications in science and engineering. The parameters of a Gaussian mixture model (GMM) are typically estimated from training data using the iterative expectation-maximization algorithm, which requires the number of Gaussian components a priori. In this study, we propose two algorithms rooted in numerical algebraic geometry (NAG), namely an area-based algorithm and a local maxima algorithm, to identify the optimal number of components. The area-based algorithm transforms several GMMs with varying numbers of components into sets of equivalent polynomial regression splines. It then uses homotopy continuation methods to evaluate the resulting splines and identify the number of components most compatible with the gradient data. The local maxima algorithm forms a set of polynomials by fitting a smoothing spline to a dataset. It then uses NAG to solve the system of first derivatives and find the local maxima of the resulting smoothing spline, which correspond to the mixture components; the algorithm also identifies the locations of the centers of the Gaussian components. Using a real-world case study in automotive manufacturing and extensive simulations, we demonstrate that the performance of the proposed algorithms is comparable with that of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), two popular methods in the literature. We also show that the proposed algorithms are more robust than AIC and BIC when the Gaussian assumption is violated.
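For intuition only, the following is a minimal Python sketch of the idea behind the local maxima algorithm, under simplifying assumptions: it fits a spline to a kernel density estimate of one-dimensional data and counts the local maxima of that spline. The paper instead solves the derivative system exactly with homotopy continuation from numerical algebraic geometry; SciPy's spline root finding is used here purely as an illustration, and the function and parameter names below are hypothetical, not taken from the article.

import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import gaussian_kde

def count_density_modes(samples, grid_size=400):
    # Evaluate a Gaussian kernel density estimate of the data on a regular grid.
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = gaussian_kde(samples)(grid)

    # Fit a degree-4 spline so that its first derivative is cubic and
    # UnivariateSpline.roots() (supported for cubic splines) can be applied.
    spline = UnivariateSpline(grid, density, k=4, s=0)

    # Stationary points of the fitted spline.
    critical_points = spline.derivative().roots()

    # Keep the stationary points with negative curvature (local maxima);
    # their count estimates the number of mixture components and their
    # locations approximate the component centers.
    second_derivative = spline.derivative(n=2)
    maxima = [x for x in critical_points if second_derivative(x) < 0]
    return len(maxima), maxima

# Example with two well-separated Gaussian components.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(2.0, 0.8, 500)])
k, centers = count_density_modes(data)
print(k, centers)  # expect 2 components, with centers near -3 and 2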