A Stochastic Asynchronous Gradient Descent Algorithm with Delay Compensation Mechanism
| Published in | 2022 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), pp. 167 - 171 |
|---|---|
| Main Authors | , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 28.07.2022 |
| DOI | 10.1109/IAICT55358.2022.9887513 |
| Summary: | A large amount of idle computing power exists in mobile devices and can be harnessed for large-scale machine learning applications. One of the key problems is how to reduce the communication overhead between different nodes. In recent years, gradient sparsification has been introduced to reduce this overhead. However, in the federated learning scenario, the traditional synchronous gradient optimization algorithm cannot adapt to complex network environments and high communication costs. In this paper, we propose a stochastic gradient descent algorithm with a delay compensation mechanism (FedDgd) for asynchronous distributed training and further optimize it for federated asynchronous training. It is proved theoretically that FedDgd converges at the same rate as ASGD for non-convex neural networks. Moreover, FedDgd converges quickly and tolerates staleness in various applications. |
|---|---|
| DOI: | 10.1109/IAICT55358.2022.9887513 |
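The record above only summarizes FedDgd; the paper's exact update rule is not reproduced here. As a rough illustration of what a delay compensation mechanism for asynchronous SGD can look like, the sketch below applies the common first-order correction g + λ·g⊙g⊙(w_now − w_stale) to a stale gradient before the server applies it. The function names, the λ value, and the toy objective are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch of delay-compensated asynchronous SGD. This is NOT the
# FedDgd algorithm from the paper; it uses the generic diagonal-Hessian
# approximation g ⊙ g to correct a stale gradient, which is one standard way
# to realize a delay compensation mechanism.
import numpy as np

def compensated_update(w_now, w_stale, g_stale, lr=0.01, lam=0.04):
    """Apply one delay-compensated SGD step on the server.

    w_now   -- current global parameters when the stale gradient arrives
    w_stale -- the (possibly stale) parameters the worker computed on
    g_stale -- gradient the worker sent back
    lam     -- strength of the compensation term (assumed hyperparameter)
    """
    # First-order correction: estimate the gradient at w_now from the stale
    # gradient using an elementwise curvature proxy g ⊙ g.
    g_compensated = g_stale + lam * g_stale * g_stale * (w_now - w_stale)
    return w_now - lr * g_compensated

# Toy usage with f(w) = 0.5 * ||w||^2, so grad f(w) = w.
w_server = np.array([1.0, -2.0, 3.0])
w_worker_snapshot = w_server.copy()        # worker pulls parameters
g_from_worker = w_worker_snapshot.copy()   # gradient computed on the snapshot
w_server = w_server - 0.01 * w_server      # another worker's update lands first (staleness)
w_server = compensated_update(w_server, w_worker_snapshot, g_from_worker)
print(w_server)
```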