Linear Coded Federated Learning
| Published in | Algorithms and Architectures for Parallel Processing Vol. 13155; pp. 627-644 |
|---|---|
| Main Authors | , , , , |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland: Springer International Publishing AG, 2022 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| ISBN | 3030953831 9783030953836 |
| ISSN | 0302-9743 1611-3349 |
| DOI | 10.1007/978-3-030-95384-3_39 |
| Summary: | In recent years, federated learning (FL) has attracted considerable attention as a new edge computing paradigm for artificial intelligence (AI). FL enables multiple edge devices to collaboratively train a global model without leaking the local data of any participant. In a typical edge computing scenario, the participants in FL are heterogeneous and can be composed of personal computers, smartphones, Internet of Things devices, network devices, etc. In this heterogeneous setting, the slowest client in each training round becomes the bottleneck, which may limit the overall convergence speed and accuracy of the global model. To address this issue, one possible solution is to outsource the computing task of the slowest client to faster devices, which requires data transmission from the slowest client to other selected clients. Sending the original data set, however, is not an option due to privacy requirements. Therefore, in this paper, we propose an efficient linear coded federated learning (LCFL) framework to (1) accelerate the convergence of heterogeneous FL and (2) protect the data privacy of the participants. Within the proposed framework, we design a collaborative client selection (CCS) algorithm that selects appropriate clients and assigns the computation task of the slowest client to those selected devices. Finally, we build a practical experimental platform and conduct extensive experiments to evaluate the proposed LCFL framework from different aspects. The experimental results demonstrate that the proposed LCFL scheme can reduce the training time by up to 93.73% when the participants differ greatly in computing capability. |
|---|---|