Deep reinforcement learning-based optimal deployment of IoT machine learning jobs in fog computing architecture
| Published in | Computing Vol. 107; no. 1; p. 15 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Vienna: Springer Vienna, 01.01.2025 (Springer Nature B.V.) |
| Subjects | |
| ISSN | 0010-485X; 1436-5057 |
| DOI | 10.1007/s00607-024-01353-3 |
Summary: With the increasing number and variety of areas where IoT technology is applied, the challenges of designing and deploying IoT applications and services have recently become the subject of many studies. Many IoT applications are machine learning jobs that collect and analyze sensor measurements in smart cities, farms, or industrial areas to meet end-user requirements. These machine learning jobs consist of distributed tasks that work collaboratively to build models in a federated manner. Although some challenges regarding the deployment and scheduling of IoT applications have been studied before, the problem of determining the optimal number and sensor coverage of the distributed tasks of an IoT machine learning job has not been addressed previously. This paper proposes a two-phased method for adaptive task creation and deployment of IoT machine learning jobs over a heterogeneous multi-layer fog computing architecture. In the first phase, the optimal number of tasks and their respective sensor coverage are determined using a Deep Reinforcement Learning (DRL) based method; in the second phase, the tasks are deployed over the heterogeneous multi-layer fog computing architecture using a greedy deployment method. The task creation and deployment problem is formulated as a three-objective optimization problem: (1) minimizing the deployment latency, (2) minimizing the deployment cost, and (3) minimizing the evaluation loss of the machine learning job when trained in a federated manner over the edge/fog/cloud nodes. A Deep Deterministic Policy Gradient (DDPG) algorithm is used to solve the online IoT machine learning job deployment optimization problem adaptively and efficiently. The experimental results obtained by deploying several IoT machine learning jobs with disparate profiles over a heterogeneous fog test-bed showed that the proposed two-phased DRL-based method outperforms the Edge-IoT and Cloud-IoT baseline methods, improving the total deployment score by up to 32%.
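The second phase summarized in the abstract (greedy deployment of tasks over edge/fog/cloud nodes, evaluated against the three objectives) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the node attributes, objective weights, and the weighted-sum scoring of latency, cost, and evaluation loss are all hypothetical assumptions.

```python
# Hypothetical sketch of greedy task deployment over fog nodes, scoring each
# placement by a weighted sum of the three objectives named in the abstract
# (deployment latency, deployment cost, federated evaluation loss).
# All node attributes and weights below are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency: float   # assumed deployment latency of a task on this node
    cost: float      # assumed monetary cost of hosting a task here
    loss: float      # assumed contribution to federated evaluation loss
    capacity: int    # assumed number of tasks this node can host

def score(node: Node, w_lat: float = 1.0, w_cost: float = 1.0,
          w_loss: float = 1.0) -> float:
    """Lower is better: weighted-sum scalarization of the three objectives."""
    return w_lat * node.latency + w_cost * node.cost + w_loss * node.loss

def greedy_deploy(num_tasks: int, nodes: list[Node]) -> list[str]:
    """Assign each task to the best-scoring node that still has capacity."""
    remaining = {n.name: n.capacity for n in nodes}
    placement = []
    for _ in range(num_tasks):
        candidates = [n for n in nodes if remaining[n.name] > 0]
        if not candidates:
            raise RuntimeError("no capacity left for remaining tasks")
        best = min(candidates, key=score)   # greedy: pick cheapest placement
        remaining[best.name] -= 1
        placement.append(best.name)
    return placement

# Illustrative three-layer topology (edge/fog/cloud), values assumed.
nodes = [
    Node("edge-1", latency=0.2, cost=0.2, loss=0.3, capacity=2),
    Node("fog-1",  latency=0.5, cost=0.3, loss=0.2, capacity=3),
    Node("cloud",  latency=1.0, cost=0.1, loss=0.1, capacity=10),
]
print(greedy_deploy(4, nodes))  # edge-1 fills up first, then fog-1
```

In the paper's full method, the number of tasks passed to this phase would itself come from the DDPG-based first phase rather than being fixed by hand.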