A DRL Strategy for Optimal Resource Allocation along with 3D Trajectory Dynamics in UAV-MEC Network

Bibliographic Details
Published in: IEEE Access, Vol. 11, p. 1
Main Authors: Khurshid, Tayyaba; Ahmed, Waqas; Rehan, Muhammad; Ahmad, Rizwan; Alam, Muhammad Mahtab; Radwan, Ayman
Format: Journal Article
Language: English
Published: Piscataway, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3278591

Summary: Advances in Unmanned Air Vehicle (UAV) technology have paved the way for numerous configurations and applications in communication systems. However, UAV dynamics play an important role in determining its effective use. In this article, while considering UAV dynamics, we evaluate the performance of a UAV equipped with a Mobile-Edge Computing (MEC) server that provides services to End-user Devices (EuDs). Due to their limited energy resources, the EuDs offload a portion of their computational task to the nearby MEC-based UAV. To this end, we jointly optimize the computational cost and 3D UAV placement along with resource allocation, subject to network, communication, and environment constraints. A Deep Reinforcement Learning (DRL) technique based on a continuous action space approach, namely Deep Deterministic Policy Gradient (DDPG), is utilized. By exploiting DDPG, we propose an optimization strategy to obtain an optimal offloading policy in the presence of UAV dynamics, which is not considered in earlier studies. The proposed strategy can be classified into three cases, namely: training through an ideal scenario, training through error dynamics, and training through extreme values. We compared the performance of these individual cases based on cost percentage and concluded that Case II (training through error dynamics) achieves the minimum cost, i.e., 37.75%, whereas Case I and Case III settle at 67.25% and 67.50%, respectively. Numerical simulations are performed, and extensive results are obtained, which show that the advanced DDPG-based algorithm along with the error-dynamics protocol is able to converge to a near-optimal solution. To validate the efficacy of the proposed algorithm, a comparison with the state-of-the-art Deep Q-Network (DQN) is carried out, which shows that our algorithm achieves significant improvements.
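The record does not include the authors' implementation details. As an illustration only, the following minimal DDPG sketch (PyTorch, with assumed state/action dimensions, network sizes, and hyperparameters) shows the continuous-action actor-critic structure the summary refers to; in such a setting the state could encode EuD task loads and channel conditions, and the continuous action could jointly encode the 3D UAV position update and the offloading/resource-allocation fractions.

# Minimal DDPG update sketch; all dimensions and hyperparameters are illustrative
# assumptions, not taken from the article.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 5   # assumed sizes of state and continuous action
GAMMA, TAU = 0.99, 0.005        # typical discount and soft-update factors

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())   # bounded continuous action
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))                       # Q(s, a)
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next):
    """One DDPG step from a replay-buffer batch (tensors of shape [B, dim])."""
    with torch.no_grad():
        q_next = critic_tgt(s_next, actor_tgt(s_next))
        target = r + GAMMA * q_next                 # bootstrapped Q target
    critic_loss = nn.functional.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()        # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak (soft) target-network updates
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

In practice the reward would reflect the negative computational cost under the network, communication, and environment constraints described in the summary, and exploration noise would be added to the actor's output during training; the three training cases (ideal scenario, error dynamics, extreme values) would differ only in how the UAV-dynamics errors are injected into the simulated environment.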