Full text
Open access
  • Novelli, Pietro; Pratticò, Marco; Pontil, Massimiliano; Ciliberto, Carlo

    arXiv.org, 06/2024
    Paper, Journal Article

    Policy Mirror Descent (PMD) is a powerful and theoretically sound methodology for sequential decision-making. However, it is not directly applicable to Reinforcement Learning (RL) due to the inaccessibility of explicit action-value functions. We address this challenge by introducing a novel approach based on learning a world model of the environment using conditional mean embeddings. We then leverage the operatorial formulation of RL to express the action-value function in terms of these embeddings in closed form via matrix operations. Combining these estimators with PMD leads to POWR, a new RL algorithm for which we prove convergence rates to the global optimum. Preliminary experiments in finite and infinite state settings support the effectiveness of our method.
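
    The abstract only sketches the pipeline at a high level. The snippet below is a minimal, hypothetical illustration of the same ideas in the tabular (finite-state) case, where the conditional mean embedding of the transition kernel reduces to an empirical transition matrix, policy evaluation has a closed form via matrix operations, and the policy is improved with a KL-regularized mirror-descent (exponentiated) step. All function names, parameters, and the overall structure are assumptions for illustration; this is not the authors' POWR implementation.

    ```python
    import numpy as np

    def estimate_transition_model(transitions, n_states, n_actions):
        """Empirical world model P[s, a, s'] from observed (s, a, s') triples.

        In the finite-state case this empirical transition matrix plays the role of the
        conditional mean embedding of the environment's transition kernel.
        """
        counts = np.zeros((n_states, n_actions, n_states))
        for s, a, s_next in transitions:
            counts[s, a, s_next] += 1
        totals = counts.sum(axis=2, keepdims=True)
        # Fall back to a uniform row for state-action pairs that were never visited.
        return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)

    def closed_form_q(P, r, policy, gamma=0.95):
        """Action-value function of `policy`, solved exactly via matrix operations."""
        n_states, _ = r.shape
        P_pi = np.einsum('sap,sa->sp', P, policy)        # state-to-state kernel under the policy
        r_pi = (policy * r).sum(axis=1)                  # expected per-state reward under the policy
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        return r + gamma * np.einsum('sap,p->sa', P, v)  # Q(s, a) = r(s, a) + gamma * E[V(s')]

    def pmd_step(policy, Q, step_size=1.0):
        """One Policy Mirror Descent update with the KL mirror map (exponentiated weights)."""
        logits = np.log(policy + 1e-12) + step_size * Q
        logits -= logits.max(axis=1, keepdims=True)      # subtract max for numerical stability
        new_policy = np.exp(logits)
        return new_policy / new_policy.sum(axis=1, keepdims=True)

    # Example: fit the model on random transitions of a 5-state, 2-action MDP,
    # then run a few PMD iterations against the closed-form action values.
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2
    transitions = [(rng.integers(n_states), rng.integers(n_actions), rng.integers(n_states))
                   for _ in range(2000)]
    r = rng.random((n_states, n_actions))
    P = estimate_transition_model(transitions, n_states, n_actions)
    policy = np.full((n_states, n_actions), 1.0 / n_actions)
    for _ in range(50):
        policy = pmd_step(policy, closed_form_q(P, r, policy))
    ```

    In the infinite-state setting described in the abstract, the empirical transition matrix would be replaced by a learned conditional mean embedding in a reproducing kernel Hilbert space, but the structure of the loop (evaluate in closed form, then take a mirror-descent policy step) stays the same.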