Parameter-Free Multiview K-Means Clustering With Coordinate Descent Method

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 3, pp. 4879-4892
Main Authors: Nie, Feiping; Liu, Han; Wang, Rong; Li, Xuelong
Format: Journal Article
Language: English
Published: IEEE, United States, 01.03.2025
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2024.3373532

Summary: Recently, more and more real-world datasets are composed of heterogeneous but related features drawn from diverse views. Multiview clustering offers a promising way to partition such data according to this heterogeneous information. However, most existing methods suffer from troublesome hyper-parameter tuning and high computational cost, and there is still room for improvement in clustering performance. To this end, a novel multiview framework, called parameter-free multiview k-means clustering with coordinate descent method (PFMVKM), is presented to address these problems. Specifically, PFMVKM is completely parameter-free and learns the view weights via a self-weighted scheme, which avoids the intractable process of hyper-parameter tuning. Moreover, the model directly calculates the cluster indicator matrix, with no need to learn the cluster centroid matrix and the indicator matrix simultaneously as previous multiview methods must. Furthermore, an efficient optimization algorithm based on the idea of coordinate descent is proposed, which not only reduces the computational complexity but also improves the clustering performance. Extensive experiments on various types of real datasets show that the proposed method outperforms existing state-of-the-art competitors and behaves consistently with practical expectations.
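The abstract only sketches the approach, so the following is a minimal Python sketch of the two general ideas it names: a self-weighted scheme for the view weights and coordinate-descent-style updates of the cluster assignments. The function name, the 1/(2*sqrt(loss)) weighting rule, and the alternating update order are illustrative assumptions, not the authors' PFMVKM algorithm (which, per the abstract, avoids learning the centroid matrix explicitly).

```python
import numpy as np

def self_weighted_multiview_kmeans(views, n_clusters, n_iter=50, seed=0):
    # Hypothetical sketch, not the paper's PFMVKM: alternate (1) per-view
    # centroids, (2) cluster reassignment, (3) self-weighted view weights.
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    labels = rng.integers(0, n_clusters, size=n)
    weights = np.ones(len(views)) / len(views)

    for _ in range(n_iter):
        # per-view centroids for the current assignment
        # (fall back to a random sample if a cluster empties)
        centroids = [np.stack([X[labels == k].mean(axis=0) if np.any(labels == k)
                               else X[rng.integers(n)]
                               for k in range(n_clusters)])
                     for X in views]
        # each sample moves to the cluster minimizing the weighted squared
        # distance summed over all views (a coordinate-wise reassignment)
        dist = sum(w * ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
                   for w, X, C in zip(weights, views, centroids))
        labels = dist.argmin(axis=1)
        # self-weighted scheme: w_v = 1 / (2 * sqrt(loss_v)), then normalize,
        # so no hyper-parameter controls the view weights
        losses = np.array([((X - C[labels]) ** 2).sum()
                           for X, C in zip(views, centroids)])
        weights = 1.0 / (2.0 * np.sqrt(losses) + 1e-12)
        weights /= weights.sum()
    return labels, weights

# Example usage on two synthetic views of the same 100 samples
rng = np.random.default_rng(1)
views = [rng.normal(size=(100, 5)), rng.normal(size=(100, 8))]
labels, weights = self_weighted_multiview_kmeans(views, n_clusters=3)
```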