  • Explainable Learning Analyt...
    Tiukhova, Elena; Vemuri, Pavani; Flores, Nidia López; Islind, Anna Sigridur; Óskarsdóttir, María; Poelmans, Stephan; Baesens, Bart; Snoeck, Monique

    Decision Support Systems, July 2024, Volume 182
    Journal Article

    Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process and keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and decide how they want to learn. The best-performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift detection, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive Explanations (SHAP) in a novel way to explore the stability of the feature importance rankings extracted from these models; and (3) generates new insights that emerge from stable features across cohorts, enabling teachers to formulate study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.

    Highlights:
    • SHAP exhibits two-fold utility in checking model stability and aiding study advice.
    • Success prediction models must be updated to ensure stable performance.
    • Changing learning contexts lead to distributional drift of LA indicators.
    • As the learning context changes, the importance of learning indicators shifts.
    • General activity and regularity indicators show the highest stability.
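
    As an illustration of point (2) in the abstract, below is a minimal, hypothetical sketch of how SHAP-based feature importance rankings could be compared across two student cohorts. The cohort file names, the "success" label column, and the gradient-boosting model are placeholders and not the paper's actual data or pipeline; the sketch assumes the scikit-learn, shap, scipy, pandas, and numpy libraries.

# Minimal sketch: compare SHAP importance rankings across two cohorts.
# Assumptions (not from the paper): cohort CSV files, a binary "success"
# label, identical LA indicator columns, and a gradient-boosting model.
import numpy as np
import pandas as pd
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier

def shap_importance_ranking(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Fit a black-box model on one cohort and rank its features by
    mean absolute SHAP value (global importance)."""
    model = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    mean_abs_shap = np.abs(shap_values).mean(axis=0)
    return pd.Series(mean_abs_shap, index=X.columns).rank(ascending=False)

# Hypothetical cohort data: LA indicators plus a "success" label per student.
cohort_a = pd.read_csv("cohort_a.csv")  # placeholder file names
cohort_b = pd.read_csv("cohort_b.csv")

features = [c for c in cohort_a.columns if c != "success"]
rank_a = shap_importance_ranking(cohort_a[features], cohort_a["success"])
rank_b = shap_importance_ranking(cohort_b[features], cohort_b["success"])

# A rank correlation near 1 suggests the importance ranking is stable
# across cohorts; lower values indicate a shift in which indicators matter.
rho, _ = spearmanr(rank_a, rank_b)
print(f"Spearman correlation of SHAP importance ranks: {rho:.2f}")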