Peer-reviewed · Open access
  • Explainability in deep rein...
    Heuillet, Alexandre; Couthouis, Fabien; Díaz-Rodríguez, Natalia

    Knowledge-Based Systems, 02/2021, Volume: 214
    Journal Article

    A large body of the explainable Artificial Intelligence (XAI) literature is emerging on feature relevance techniques that explain a deep neural network (DNN) output, or on explaining models that ingest image source data. However, how XAI techniques can help understand models beyond classification tasks, e.g. in reinforcement learning (RL), has not been extensively studied. We review recent work toward Explainable Reinforcement Learning (XRL), a relatively new subfield of Explainable Artificial Intelligence intended for general public applications with diverse audiences, which require ethical, responsible and trustable algorithms. In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box. We mainly evaluate studies directly linking explainability to RL, and split them into two categories according to how the explanations are generated: transparent algorithms and post-hoc explainability. We also review the most prominent XAI works through the lens of how they could inform the further deployment of the latest advances in RL to the demanding everyday problems of the present and future.

    • We review concepts related to the explainability of Deep Reinforcement Learning models.
    • We provide a comprehensive analysis of the Explainable Reinforcement Learning literature.
    • We propose a categorization of existing Explainable Reinforcement Learning methods.
    • We discuss ideas emerging from the literature and provide insights for future work.
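    The post-hoc category the abstract mentions includes feature-relevance methods such as gradient saliency applied to a trained policy. A minimal sketch, assuming a hypothetical toy linear policy; all names, shapes, and numbers below are illustrative and not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical toy policy: one linear layer producing action logits.
    # (Illustrative only; real XRL methods target deep policy networks.)
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))   # 3 actions, 4 state features
    b = np.zeros(3)

    def policy_logits(state):
        return W @ state + b

    def saliency(state):
        """Post-hoc feature relevance: gradient of the chosen action's
        logit with respect to the input state. For a linear policy this
        gradient is exactly the corresponding row of W."""
        logits = policy_logits(state)
        action = int(np.argmax(logits))          # greedy action
        grad = W[action]                         # d(logit_action)/d(state)
        return action, np.abs(grad)              # magnitude = relevance

    state = np.array([0.5, -1.0, 2.0, 0.1])
    action, relevance = saliency(state)
    print("greedy action:", action)
    print("per-feature relevance:", relevance)
    ```

    For a deep network the same idea applies, with the gradient obtained by backpropagation; the relevance vector then indicates which state features most influenced the agent's chosen action.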