Latency-Efficient Wireless Federated Learning With Sparsification and Quantization for Heterogeneous Devices

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 12, No. 1, p. 488
Main Authors: Chen, Xuechen; Wang, Aixiang; Deng, Xiaoheng; Gui, Jinsong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2025
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3462722

More Information
Summary: Recently, federated learning (FL) has attracted much attention as a promising decentralized machine learning method that provides privacy and low latency. However, the communication bottleneck remains a problem that must be solved to deploy FL effectively on wireless networks. In this article, we aim to minimize the total convergence time of FL by sparsifying and quantizing local model parameters before uplink transmission. More specifically, we first present a convergence analysis of the FL algorithm with random sparsification and quantization, revealing the impact of compression error on the convergence speed. Then, we jointly optimize the computation and communication resources, the number of quantization bits, and the sparsity level to minimize the total convergence time, subject to the energy and compression error requirements derived from the convergence analysis. By simulating the impact of different compression errors on model accuracy, we show that low-precision updates do not inherently yield a better balance between efficiency and accuracy than high-precision updates. Furthermore, compared with equal resource allocation schemes and unilateral compression optimization schemes on four different data distributions, the proposed scheme converges faster and achieves a shorter total convergence time.
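The compression pipeline described in the summary can be illustrated with a short sketch. The Python snippet below (the function name sparsify_and_quantize, the unbiased d/k rescaling, and the stochastic uniform quantizer are illustrative assumptions, not necessarily the paper's exact scheme) randomly sparsifies a local model update and then quantizes the surviving entries to a given number of bits before uplink transmission.

```python
import numpy as np

def sparsify_and_quantize(update, sparsity, num_bits, rng=None):
    """Compress a 1-D local model update before uplink transmission.

    Hypothetical sketch: random sparsification keeps a `sparsity`
    fraction of coordinates (with unbiased d/k rescaling), then
    stochastic uniform quantization maps each surviving magnitude to
    one of 2**num_bits - 1 levels. The paper's exact scheme may differ.
    """
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    d = update.size
    k = max(1, int(round(sparsity * d)))

    # Random sparsification: keep k coordinates chosen uniformly at
    # random; scaling by d/k makes the compressed update unbiased.
    kept = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(update)
    out[kept] = update[kept] * (d / k)

    # Stochastic uniform quantization of the surviving magnitudes.
    levels = 2 ** num_bits - 1
    mags = np.abs(out[kept])
    scale = mags.max()
    if scale == 0.0:
        return out  # all kept entries are zero; nothing to quantize
    norm = mags / scale * levels         # map magnitudes to [0, levels]
    low = np.floor(norm)
    low += rng.random(k) < (norm - low)  # round up with prob = fraction
    out[kept] = np.sign(out[kept]) * low / levels * scale
    return out

# Example: compress a 10-dimensional update, keeping 30% of the
# coordinates and quantizing each survivor to 4 bits.
compressed = sparsify_and_quantize(np.random.randn(10),
                                   sparsity=0.3, num_bits=4)
```

The d/k rescaling and stochastic rounding keep the compressed update unbiased in expectation, which is the property convergence analyses of such compression schemes typically rely on.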