Full text
Peer reviewed
  • XFlag: Explainable Fake News Detection Model on Social Media
    Chien, Shih-Yi; Yang, Cheng-Jun; Yu, Fang

    International Journal of Human-Computer Interaction, 12/2022, Volume: 38, Issue: 18-20
    Journal Article

    Social media allows any individual to disseminate information without third-party restrictions, making it difficult to verify the authenticity of a source. The proliferation of fake news has severely affected people's intentions and behaviors in trusting online sources. Applying AI approaches to fake news detection on social media has been the focus of recent research; most of this work, however, concentrates on enhancing AI performance. This study proposes XFlag, an innovative explainable AI (XAI) framework that uses a long short-term memory (LSTM) model to identify fake news articles, the layer-wise relevance propagation (LRP) algorithm to explain the LSTM-based detection model, and the situation awareness-based agent transparency (SAT) model to increase transparency in human-AI interaction. The XFlag framework has been empirically validated. The findings suggest that using XFlag supports users in understanding system goals (perception), justifying system decisions (comprehension), and predicting system uncertainty (projection), at little cost in perceived cognitive workload.
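
    To make the detection-and-explanation pipeline concrete, the following is a minimal, hypothetical sketch in PyTorch (not the authors' code): an LSTM classifier that scores an article as fake or real, plus the epsilon rule of layer-wise relevance propagation applied to the final linear layer so the decision can be attributed back to the hidden features. The vocabulary size, layer dimensions, and toy input are assumptions for illustration; a full LRP pass would also propagate relevance through the LSTM and embedding layers, which is omitted here.

        import torch
        import torch.nn as nn

        class LSTMFakeNewsClassifier(nn.Module):
            def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, embed_dim)
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.classifier = nn.Linear(hidden_dim, 2)  # logits for fake vs. real

            def forward(self, token_ids):
                embedded = self.embedding(token_ids)     # (batch, seq_len, embed_dim)
                _, (hidden, _) = self.lstm(embedded)     # final hidden state
                return self.classifier(hidden[-1])       # (batch, 2)

        def lrp_linear_epsilon(layer, activations, relevance_out, eps=1e-6):
            # Epsilon rule of layer-wise relevance propagation for one linear layer:
            # redistribute each output neuron's relevance to the inputs in
            # proportion to their contributions z_ij = a_i * w_ij.
            w = layer.weight                              # (out_features, in_features)
            z = activations @ w.t() + layer.bias          # forward pre-activations
            s = relevance_out / (z + eps * torch.sign(z)) # stabilised ratio
            return activations * (s @ w)                  # relevance on the inputs

        model = LSTMFakeNewsClassifier()
        article = torch.randint(0, 20000, (1, 50))        # 50 toy token ids
        with torch.no_grad():
            embedded = model.embedding(article)
            _, (hidden, _) = model.lstm(embedded)
            logits = model.classifier(hidden[-1])
            # Start LRP from the "fake" logit and push relevance back one layer.
            relevance = torch.zeros_like(logits)
            relevance[:, 0] = logits[:, 0]
            hidden_relevance = lrp_linear_epsilon(model.classifier, hidden[-1], relevance)
        print(logits.softmax(dim=-1), hidden_relevance.shape)

    In a framework of the kind described above, relevance scores propagated back to the input tokens would supply the evidence that an SAT-style interface presents to users at the perception, comprehension, and projection levels.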