Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes

Bibliographic Details
Main Authors: Cassel, Asaf; Rosenberg, Aviv
Format: Journal Article
Language: English
Published: 03.07.2024
DOI: 10.48550/arxiv.2407.03065

Summary: Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure exploration warm-up phase that is hard to implement in practice. This paper eliminates this undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the other parameters of the problem (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback.
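For context on the terminology in the summary, the following LaTeX sketch states the standard episodic linear MDP model and regret criterion that work in this line of research typically uses; the notation (feature map \phi, dimension d, horizon H, number of episodes K) is conventional background and is not taken from this record itself.

% Standard background definitions (assumed, not quoted from the paper):
% linear MDP structure and regret over K episodes of horizon H.
\[
P_h(s' \mid s,a) = \langle \phi(s,a), \mu_h(s') \rangle,
\qquad
\ell_h^k(s,a) = \langle \phi(s,a), \theta_h^k \rangle,
\]
where $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ is a known feature map and $\mu_h$, $\theta_h^k$ are unknown. A learner playing policies $\pi_1, \dots, \pi_K$ is evaluated by
\[
\mathrm{Regret}(K) = \sum_{k=1}^{K} V_1^{\pi_k}(s_1; \ell^k) \;-\; \min_{\pi} \sum_{k=1}^{K} V_1^{\pi}(s_1; \ell^k),
\qquad
V_1^{\pi}(s_1; \ell^k) = \mathbb{E}\Big[ \sum_{h=1}^{H} \ell_h^k(s_h, a_h) \,\Big|\, \pi \Big].
\]
Here "rate-optimal" conventionally refers to the $\widetilde{O}(\sqrt{K})$ dependence on the number of episodes, up to polynomial factors in $d$ and $H$; the specific bounds of the paper are not reproduced in this record.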