Peer-reviewed Open Access
  • Scalar reward is not enough...
    Vamplew, Peter; Smith, Benjamin J.; Källström, Johan; Ramos, Gabriel; Rădulescu, Roxana; Roijers, Diederik M.; Hayes, Conor F.; Heintz, Fredrik; Mannion, Patrick; Libin, Pieter J. K.; Dazeley, Richard; Foale, Cameron

    Autonomous agents and multi-agent systems, 10/2022, Volume: 36, Issue: 2
    Journal Article

    The recent paper “Reward is Enough” by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial, and provides a suitable basis for the creation of artificial general intelligence. We contest the underlying assumption of Silver et al. that such reward can be scalar-valued. In this paper we explain why scalar rewards are insufficient to account for some aspects of both biological and computational intelligence, and argue in favour of explicitly multi-objective models of reward maximisation. Furthermore, we contend that even if scalar reward functions can trigger intelligent behaviour in specific cases, this type of reward is insufficient for the development of human-aligned artificial general intelligence due to unacceptable risks of unsafe or unethical behaviour.
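    The abstract's central contrast, scalar versus multi-objective reward, can be sketched in code. The example below is a minimal illustration (all policy names and reward values are hypothetical, not from the paper): each policy yields a vector of returns over two objectives, e.g. task performance and safety. A fixed linear scalarisation always picks a single "best" policy, but under a Pareto comparison the two policies are incomparable, so the trade-off information the authors argue for is lost when reward is collapsed to a scalar.

    ```python
    # Hypothetical policies with vector-valued returns:
    # (task performance, safety), both in [0, 1].
    policies = {
        "aggressive": (0.9, 0.2),
        "cautious":   (0.5, 0.9),
    }

    def scalarise(reward_vec, weights):
        """Collapse a reward vector to a scalar via a linear weighting."""
        return sum(r * w for r, w in zip(reward_vec, weights))

    def pareto_dominates(a, b):
        """True if a is at least as good as b in every objective
        and strictly better in at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # Under one fixed scalar weighting, a single policy looks "best"...
    best = max(policies, key=lambda p: scalarise(policies[p], (0.5, 0.5)))

    # ...yet neither policy Pareto-dominates the other: the scalar view
    # discards the performance/safety trade-off between them.
    a, c = policies["aggressive"], policies["cautious"]
    incomparable = (not pareto_dominates(a, c)
                    and not pareto_dominates(c, a))
    ```

    With equal weights the cautious policy scores higher (0.70 vs. 0.55), but a different weighting would reverse that choice; the Pareto check shows the two policies are genuinely incomparable, which is the information a scalar reward cannot represent.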