Peer reviewed · Open access
  • Humans adaptively select di...
    Verbeke, Pieter; Verguts, Tom

    Psychological review, 2024-04-15
    Journal Article

    The Rescorla-Wagner rule remains the most popular tool for describing human behavior in reinforcement learning tasks. Nevertheless, it cannot fit human learning in complex environments. Previous work has proposed several hierarchical extensions of this learning rule. However, it remains unclear when a flat (nonhierarchical) versus a hierarchical strategy is adaptive, or when each is implemented by humans. To address this question, the current work applies a nested modeling approach to evaluate multiple models in multiple reinforcement learning environments, both computationally (which approach performs best) and empirically (which approach fits human data best). We consider 10 empirical data sets (N = 407) divided over three reinforcement learning environments. Our results demonstrate that different environments are best solved with different learning strategies, and that humans adaptively select the learning strategy that allows the best performance. Specifically, while flat learning fitted best in less complex, stable learning environments, humans employed more hierarchically complex models in more complex environments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
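
    For readers unfamiliar with the flat (nonhierarchical) learning rule discussed in the abstract, a minimal sketch of a Rescorla-Wagner delta-rule update follows; the function name and parameter values are illustrative and not taken from the paper:

    ```python
    def rescorla_wagner_update(value, reward, alpha=0.1):
        """One flat Rescorla-Wagner update: V <- V + alpha * (R - V).

        alpha is the learning rate; (reward - value) is the prediction error.
        """
        return value + alpha * (reward - value)

    # Example: a value estimate converging toward a constant reward of 1.0
    v = 0.0
    for _ in range(50):
        v = rescorla_wagner_update(v, reward=1.0, alpha=0.1)
    ```

    A single learning rate applied uniformly is what makes this strategy "flat"; the hierarchical extensions the abstract refers to add further levels that modulate such parameters across contexts.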