Scheduling for Mobile Edge Computing With Random User Arrivals: An Approximate MDP and Reinforcement Learning Approach

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 69, No. 7, pp. 7735-7750
Main Authors: Huang, Shanfeng; Lv, Bojie; Wang, Rui; Huang, Kaibin
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2020
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2020.2990482

More Information
Summary: In this paper, we investigate the scheduling design of a mobile edge computing (MEC) system in which active mobile devices with computation tasks randomly appear in a cell. Every task can be computed at either the mobile device or the MEC server. We jointly optimize the task offloading decision, uplink transmission device selection, and power allocation by formulating the problem as an infinite-horizon Markov decision process (MDP). To the best of our knowledge, this is the first attempt to address the joint optimization of transmission and computation with random device arrivals over an infinite time horizon. Due to the uncertainty in the number and locations of devices, conventional approximate MDP approaches for addressing the curse of dimensionality cannot be applied, so an alternative low-complexity solution framework is proposed in this work. We first introduce a baseline scheduling policy whose value function can be derived analytically from the statistics of random mobile device arrivals. One-step policy iteration is then adopted to obtain a sub-optimal scheduling policy whose performance can be bounded analytically. By eliminating the complicated value iteration, the complexity of deriving the sub-optimal policy is reduced dramatically compared with conventional MDP solutions. To address the more general scenario in which the statistics of random mobile device arrivals are unknown, a novel and efficient algorithm integrating reinforcement learning and stochastic gradient descent (SGD) is proposed to improve the system performance in an online manner. Simulation results show that the gain of the sub-optimal policy over various benchmarks is significant.
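As a rough illustration of the one-step policy iteration idea described in the summary, the sketch below applies a single greedy improvement to a baseline policy in a toy finite MDP. Everything here is a hedged stand-in: the states, actions, transition kernel, and costs are randomly generated placeholders rather than the paper's scheduling model, and the baseline's value function is obtained by solving the standard policy-evaluation linear system instead of the closed-form expression the authors derive from the arrival statistics.

```python
import numpy as np

# Toy finite MDP standing in for the paper's scheduling problem.
# P[a, s, s'] is the transition kernel, c[s, a] the per-stage cost;
# both are hypothetical placeholders.
rng = np.random.default_rng(0)
S, A, gamma = 6, 3, 0.9

P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)      # make each row a probability distribution
c = rng.random((S, A))                 # per-stage cost

baseline = np.zeros(S, dtype=int)      # baseline policy: always take action 0

def evaluate(policy):
    """Exact value function of a stationary policy: V = (I - gamma*P_pi)^-1 c_pi."""
    P_pi = P[policy, np.arange(S), :]  # (S, S) transitions under the policy
    c_pi = c[np.arange(S), policy]     # (S,) costs under the policy
    return np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)

V_base = evaluate(baseline)

# One-step policy iteration: greedy improvement w.r.t. the baseline's value
# function, i.e. pi'(s) = argmin_a [ c(s,a) + gamma * sum_s' P(s'|s,a) V_base(s') ].
Q = c + gamma * np.einsum('ast,t->sa', P, V_base)
improved = Q.argmin(axis=1)

print("baseline cost-to-go:", V_base.round(3))
print("one-step improved policy:", improved)
print("improved cost-to-go: ", evaluate(improved).round(3))
```

By the policy improvement theorem, the improved policy's cost-to-go is no worse than the baseline's at every state, which is the mechanism behind the analytical performance bound the summary refers to; the single improvement step also avoids the repeated sweeps of full value iteration.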