Momentum-Based Policy Gradient with Second-Order Information

Bibliographic Details
Main Authors: Salehkaleybar, Saber; Khorasani, Sadegh; Kiyavash, Negar; He, Niao; Thiran, Patrick
Format: Journal Article
Language: English
Published: 17.05.2022
DOI: 10.48550/arxiv.2205.08253

More Information
Summary: Variance-reduced gradient estimators for policy gradient methods have been a main focus of research in reinforcement learning in recent years, as they accelerate the estimation process. We propose a variance-reduced policy-gradient method, called SHARP, which incorporates second-order information into stochastic gradient descent (SGD) using momentum with a time-varying learning rate. The SHARP algorithm is parameter-free, achieving an $\epsilon$-approximate first-order stationary point with $O(\epsilon^{-3})$ trajectories, while using a batch size of $O(1)$ at each iteration. Unlike most previous work, our proposed algorithm does not require importance sampling, which can compromise the advantage of the variance-reduction process. Moreover, the variance of the estimation error decays at the fast rate of $O(1/t^{2/3})$, where $t$ is the number of iterations. Our extensive experimental evaluations show the effectiveness of the proposed algorithm on various control tasks and its advantage over the state of the art in practice.
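
Since the summary only sketches the estimator in prose, the following is a minimal, hedged illustration of the general idea: a momentum-corrected gradient direction whose stale component is updated with a Hessian-vector product along the actual parameter step, rather than with importance weights. The toy Gaussian policy, the simulated trajectory data, and the $t^{-1/3}$ and $t^{-2/3}$ schedules below are assumptions for illustration only and do not reproduce the authors' exact SHARP update.

```python
# Hedged sketch of a momentum-based policy-gradient estimator with a
# second-order (Hessian-vector product) correction, in the spirit of the
# abstract above. The toy policy, simulated "trajectories", and the
# step-size / momentum schedules are illustrative assumptions, not the
# paper's exact construction.
import torch

torch.manual_seed(0)

obs_dim, act_dim = 4, 2
n_params = obs_dim * act_dim + act_dim   # weights + biases of a linear Gaussian-mean policy


def trajectory_loss(theta):
    """Surrogate loss whose gradient acts as a (toy) policy-gradient estimate.

    A real implementation would roll out the environment and form
    REINFORCE-style log-prob * return terms; here one "trajectory" is
    simulated with random data so the sketch stays self-contained.
    """
    W = theta[: obs_dim * act_dim].view(act_dim, obs_dim)
    b = theta[obs_dim * act_dim:]
    obs = torch.randn(10, obs_dim)                 # fake states
    acts = torch.randn(10, act_dim)                # fake sampled actions
    rets = torch.randn(10)                         # fake returns
    mean = obs @ W.T + b
    logp = -0.5 * ((acts - mean) ** 2).sum(dim=1)  # unit-variance Gaussian log-prob (up to a constant)
    return -(logp * rets).mean()


theta = (0.1 * torch.randn(n_params)).requires_grad_(True)
prev_theta = theta.detach().clone()
d = torch.zeros(n_params)                          # momentum direction (gradient estimate)
eta0 = 0.1

for t in range(1, 101):
    eta = eta0 / t ** (1 / 3)                      # time-varying learning rate (assumed schedule)
    alpha = min(1.0, 2.0 / t ** (2 / 3))           # momentum weight (assumed schedule)

    g = torch.autograd.grad(trajectory_loss(theta), theta, create_graph=True)[0]
    delta = (theta - prev_theta).detach()          # displacement since the previous iterate
    # Hessian-vector product H(theta) @ delta via double backprop: it corrects
    # the old direction for the parameter change, replacing importance weights.
    hvp = torch.autograd.grad(g @ delta, theta)[0]

    d = ((1 - alpha) * (d + hvp) + alpha * g).detach()
    prev_theta = theta.detach().clone()
    theta = (theta - eta * d).detach().requires_grad_(True)

print("final gradient-estimate norm:", d.norm().item())
```

The design point mirrored here is that the second-order correction lets consecutive single-trajectory gradient estimates be combined without reweighting old samples, which is where importance sampling would otherwise enter.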