Federated Matrix Factorization: Algorithm Design and Application to Data Clustering
| Published in | IEEE Transactions on Signal Processing, Vol. 70, pp. 1625-1640 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE, 2022 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects | |
| ISSN | 1053-587X, 1941-0476 |
| DOI | 10.1109/TSP.2022.3151505 |
| Summary: | Recent demands on data privacy have called for federated learning (FL) as a new distributed learning paradigm in massive and heterogeneous networks. Although many FL algorithms have been proposed, few of them have considered the matrix factorization (MF) model, which is known to have a vast number of signal processing and machine learning applications. Since the MF problem involves two blocks of variables that are usually subject to constraints reflecting a specific solution structure, new FL algorithm designs are required to achieve communication-efficient MF in heterogeneous data networks. In this paper, we address this challenge by proposing two new federated MF (FedMF) algorithms, namely FedMAvg and FedMGS, based on the model averaging and gradient sharing principles, respectively. Both FedMAvg and FedMGS adopt multiple steps of local updates per communication round to speed up convergence, and allow only a randomly sampled subset of clients to communicate with the server in order to reduce the communication cost. The convergence properties of the two algorithms are thoroughly analyzed, delineating the impacts of heterogeneous data distribution, the number of local updates, and partial client communication on algorithm performance, and guiding the design of the proposed algorithms. Focusing on a data clustering task, extensive experimental results are presented to examine the practical performance of the proposed algorithms and to demonstrate their efficacy over existing distributed clustering algorithms. |
|---|---|
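To make the model-averaging principle described in the summary concrete, the following is a minimal, illustrative sketch of a FedMAvg-style federated matrix factorization loop in Python/NumPy. It is not the authors' implementation: the factorization X_i ≈ W H_i (a shared factor W synchronized by server averaging, private factors H_i kept on the clients), the function names, and all hyperparameters (`local_steps`, `lr`, `sample_size`, `rounds`) are assumptions made for illustration, based only on the abstract's description of multiple local updates per communication round and randomly sampled partial client participation.

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): federated matrix
# factorization with model averaging. Each client i holds a private data
# matrix X_i and fits X_i ~= W @ H_i, where W is the shared factor averaged
# at the server and H_i never leaves the client. All names and
# hyperparameters below are hypothetical.

def local_update(W, X_i, H_i, local_steps=5, lr=0.01):
    """Run several local gradient steps on (W, H_i) for one client's data."""
    W = W.copy()  # start from the current global factor
    for _ in range(local_steps):
        R = W @ H_i - X_i          # residual of the local fit
        grad_W = R @ H_i.T         # gradient w.r.t. the shared factor
        grad_H = W.T @ R           # gradient w.r.t. the private factor
        W -= lr * grad_W
        H_i -= lr * grad_H
    return W, H_i

def fedmf_model_averaging(client_data, k=5, rounds=50, sample_size=3, seed=0):
    """Server loop: sample a subset of clients, collect their locally
    updated copies of W, and average them into the new global factor."""
    rng = np.random.default_rng(seed)
    d = client_data[0].shape[0]
    W = rng.standard_normal((d, k))                                   # shared factor
    H = [rng.standard_normal((k, X.shape[1])) for X in client_data]   # private factors
    for _ in range(rounds):
        chosen = rng.choice(len(client_data), size=sample_size, replace=False)
        updates = []
        for i in chosen:
            W_i, H[i] = local_update(W, client_data[i], H[i])
            updates.append(W_i)
        W = np.mean(updates, axis=0)                                  # model averaging step
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clients = [rng.standard_normal((20, 30)) for _ in range(10)]      # synthetic client data
    W, H = fedmf_model_averaging(clients)
    print("shared factor shape:", W.shape)
```

In this sketch only the locally updated copies of W cross the network, mirroring the communication-saving idea described in the summary; a gradient-sharing variant in the spirit of FedMGS would instead have the sampled clients send gradients with respect to W for the server to aggregate, which is not shown here.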