Peer reviewed · Open access
  • How to explain AI systems t...
    Laato, Samuli; Tiainen, Miika; Najmul Islam, A.K.M.; Mäntymäki, Matti

    Internet research, 12/2022, Volume: 32, Issue: 7
    Journal Article

    Purpose: Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.

    Design/methodology/approach: The authors investigate how AI systems and their decisions ought to be explained to end users through a systematic literature review.

    Findings: The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. There exist multiple trade-offs in AI system explanations, and there is no single best solution that fits all cases.

    Research limitations/implications: Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.

    Originality/value: This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.