Deep Reinforcement Learning Based Resource Allocation for V2V Communications

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 68, No. 4, pp. 3163-3173
Main Authors: Ye, Hao; Li, Geoffrey Ye; Juang, Biing-Hwang Fred
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2019
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2019.2897134

More Information
Summary: In this paper, we develop a novel decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning, which can be applied to both unicast and broadcast scenarios. Under this decentralized mechanism, an autonomous "agent," a V2V link or a vehicle, makes its own decisions to find the optimal sub-band and power level for transmission without requiring or having to wait for global information. Since the proposed method is decentralized, it incurs only limited transmission overhead. Simulation results show that each agent can effectively learn to satisfy the stringent latency constraints on V2V links while minimizing the interference to vehicle-to-infrastructure communications.
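
The summary describes each V2V agent choosing a sub-band and power level from local observations via deep reinforcement learning. The sketch below is only an illustration of that idea, not the authors' implementation: the Q-network architecture, the observation size, the number of sub-bands, and the discrete power levels are all assumed for the example.

```python
# Minimal sketch (assumptions, not the paper's code) of a decentralized V2V agent
# that maps a local observation to a (sub-band, power level) decision with a
# learned Q-network and an epsilon-greedy policy.

import torch
import torch.nn as nn

NUM_SUBBANDS = 4             # assumed number of resource sub-bands
POWER_LEVELS = [23, 10, 5]   # assumed discrete transmit power levels (dBm)
OBS_DIM = 16                 # assumed size of the local observation vector


class V2VQNetwork(nn.Module):
    """Maps a local observation to Q-values over all (sub-band, power) actions."""

    def __init__(self, obs_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def select_action(q_net: V2VQNetwork, obs: torch.Tensor, epsilon: float = 0.05):
    """Epsilon-greedy choice of a joint (sub-band, power level) action."""
    num_actions = NUM_SUBBANDS * len(POWER_LEVELS)
    if torch.rand(()) < epsilon:
        action = torch.randint(num_actions, ()).item()   # explore
    else:
        with torch.no_grad():
            action = q_net(obs).argmax().item()          # exploit learned Q-values
    subband = action // len(POWER_LEVELS)
    power_dbm = POWER_LEVELS[action % len(POWER_LEVELS)]
    return subband, power_dbm


# Example: one decision step from a (random) local observation.
q_net = V2VQNetwork(OBS_DIM, NUM_SUBBANDS * len(POWER_LEVELS))
subband, power_dbm = select_action(q_net, torch.randn(OBS_DIM))
print(f"chosen sub-band: {subband}, transmit power: {power_dbm} dBm")
```

Because the agent acts on local observations only, no global channel state needs to be exchanged at decision time, which is what keeps the transmission overhead limited in the decentralized setting the paper describes.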