Joint Sparsification and Quantization for Heterogeneous Devices in Energy Constrained Federated Learning
| Published in | 2024 IEEE/CIC International Conference on Communications in China (ICCC), pp. 879-884 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 07.08.2024 |
| DOI | 10.1109/ICCC62479.2024.10681900 |
| Summary: | Recently, federated learning (FL) has attracted much attention as a promising decentralized machine learning method that offers privacy and low latency. However, the communication bottleneck remains an obstacle to deploying FL effectively over wireless networks. In this paper, we aim to minimize the total convergence time of FL by sparsifying and quantizing local model parameters before uplink transmission. Specifically, we first present a convergence analysis of the FL algorithm with random sparsification and quantization, revealing the impact of compression error on convergence speed. We then jointly optimize the computation and communication resources, the number of quantization bits, and the sparsity level to minimize the total convergence time, subject to energy constraints and the compression-error requirement derived from the convergence analysis. Simulations of the impact of compression error show a trade-off between model accuracy and convergence time, and the proposed method converges faster than baseline schemes. |
|---|---|
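The summary describes compressing each local update by random sparsification followed by quantization before uplink transmission. The sketch below is a minimal, generic illustration of that idea (unbiased random masking plus stochastic uniform quantization), not the authors' exact scheme; the function name `compress_update` and its parameters are assumptions for illustration.

```python
import numpy as np

def compress_update(update, sparsity, num_bits, rng=None):
    """Randomly sparsify a local model update, then uniformly quantize
    the surviving entries. A generic sketch, not the paper's scheme.

    update:   1-D float array of local model parameters/gradients
    sparsity: fraction of entries to drop (0 <= sparsity < 1)
    num_bits: quantization bits per surviving entry
    """
    rng = np.random.default_rng() if rng is None else rng
    d = update.size

    # Random sparsification: keep a uniformly random subset of entries,
    # rescaled by 1/keep_prob so the compressed update stays unbiased.
    keep_prob = 1.0 - sparsity
    mask = rng.random(d) < keep_prob
    sparse = np.where(mask, update / keep_prob, 0.0)

    # Stochastic uniform quantization of the kept entries onto
    # 2^num_bits - 1 levels; randomized rounding keeps the quantizer
    # unbiased in expectation.
    levels = 2 ** num_bits - 1
    scale = np.max(np.abs(sparse)) or 1.0
    normalized = np.abs(sparse) / scale * levels
    quantized = np.floor(normalized + rng.random(d))
    return np.sign(sparse) * quantized / levels * scale, mask

# Example: 90% sparsity and 4-bit quantization on a synthetic update.
update = np.random.randn(10_000).astype(np.float32)
compressed, mask = compress_update(update, sparsity=0.9, num_bits=4)
err = np.linalg.norm(compressed - update) / np.linalg.norm(update)
print(f"kept {mask.mean():.1%} of entries, relative error {err:.3f}")
```

Raising `sparsity` or lowering `num_bits` shrinks the uplink payload but increases the compression error, which is exactly the accuracy/convergence-time trade-off the summary reports.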