Explainable Artificial Intelligence in education. Khosravi, Hassan; Shum, Simon Buckingham; Chen, Guanliang ...
Computers and Education: Artificial Intelligence, 2022, Volume 3
Journal Article
Peer reviewed
Open access
There are emerging concerns about the Fairness, Accountability, Transparency, and Ethics (FATE) of educational interventions supported by the use of Artificial Intelligence (AI) algorithms. One of the emerging methods for increasing trust in AI systems is to use eXplainable AI (XAI), which promotes the use of methods that produce transparent explanations and reasons for decisions AI systems make. Considering the existing literature on XAI, this paper argues that XAI in education has commonalities with the broader use of AI but also has distinctive needs. Accordingly, we first present a framework, referred to as XAI-ED, that considers six key aspects in relation to explainability for studying, designing and developing educational AI tools. These key aspects focus on the stakeholders, benefits, approaches for presenting explanations, widely used classes of AI models, human-centred designs of the AI interfaces and potential pitfalls of providing explanations within education. We then present four comprehensive case studies that illustrate the application of XAI-ED in four different educational AI tools. The paper concludes by discussing opportunities, challenges and future research needs for the effective incorporation of XAI in education.
•The paper explores the role and need for explainable AI (XAI) in education.
•We argue that XAI in education has commonalities with the broader use of AI but also has distinctive needs.
•A framework based on six key aspects for studying and developing educational AI tools is proposed.
•The application of the proposed framework is illustrated with four comprehensive case studies.
•The paper concludes with an agenda for future research in XAI in education.
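The six XAI-ED aspects enumerated in this abstract can be made concrete as a simple checklist structure. The sketch below is a minimal, hypothetical encoding in Python; the field names paraphrase the abstract's aspects and the example values are invented for illustration, not drawn from the paper's case studies.

```python
from dataclasses import dataclass, field

# Toy checklist mirroring the six XAI-ED aspects named in the abstract.
# Field names paraphrase the paper; all values below are illustrative.
@dataclass
class XAIEDChecklist:
    stakeholders: list            # who receives explanations (learners, teachers, ...)
    benefits: list                # what the explanations are meant to achieve
    presentation: str             # how explanations are surfaced to users
    model_class: str              # the underlying AI model family
    human_centred_design: str     # how the interface was designed with users
    pitfalls: list = field(default_factory=list)  # known risks of explaining

tool = XAIEDChecklist(
    stakeholders=["learners", "teachers"],
    benefits=["trust", "actionable feedback"],
    presentation="natural-language rationale attached to each recommendation",
    model_class="open learner model",
    human_centred_design="explanations co-designed with instructors",
    pitfalls=["over-trust in incorrect explanations"],
)
print(tool)
```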
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a ‘good’ explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
In this work we provide an algorithm for the detection of cardiac diseases that operates directly on ECG data without any preprocessing, and we interpret its results.
To this end, two neural network architectures were trained and compared, and the attribution methods Gradient*Input, Integrated Gradients, DeepLIFT, and LRP were used to explain the ECG classification.
As a result, our classifier reached 74% accuracy across 6 classes, and by using attribution methods for interpretation we achieved six times the performance of the popular SHAP method.
Our results demonstrate the prospects of ECG analysis for future applications, given the qualitative interpretation of the results and the comparative performance of attribution methods.
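Of the attribution methods this abstract lists, Gradient*Input is the simplest to state: each input sample's relevance is its value multiplied by the gradient of the target-class logit with respect to it. A minimal PyTorch sketch, assuming a stand-in 1-D CNN and a fake single-lead signal (the paper's actual architectures are not specified here):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in ECG classifier: a tiny 1-D CNN over one lead, with 6 output
# classes as in the abstract. Not the paper's real architectures.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 6),
)

ecg = torch.randn(1, 1, 1000, requires_grad=True)  # fake 1000-sample signal

logits = model(ecg)
target = logits.argmax(dim=1).item()

# Gradient*Input: relevance = input * d(logit_target)/d(input).
logits[0, target].backward()
attribution = (ecg * ecg.grad).detach().squeeze()

print(attribution.shape)  # per-sample relevance along the signal
```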
Explainability techniques are rapidly being developed to improve human-AI decision-making across various cooperative work settings. Consequently, previous research has evaluated how decision-makers collaborate with imperfect AI by investigating appropriate reliance and task performance with the aim of designing more human-centered computer-supported collaborative tools. Several human-centered explainable AI (XAI) techniques have been proposed in hopes of improving decision-makers' collaboration with AI; however, these techniques are grounded in findings from previous studies that primarily focus on the impact of incorrect AI advice. Few studies acknowledge the possibility of the explanations being incorrect even if the AI advice is correct. Thus, it is crucial to understand how imperfect XAI affects human-AI decision-making. In this work, we contribute a robust, mixed-methods user study with 136 participants to evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task, taking into account their level of expertise and an explanation's level of assertiveness. Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance. We also discuss how explanations can deceive decision-makers during human-AI collaboration. Hence, we shed light on the impacts of imperfect XAI in the field of computer-supported cooperative work and provide guidelines for designers of human-AI collaboration systems.
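The "appropriate reliance" such studies measure is commonly operationalised as agreement rates conditioned on whether the AI's advice was correct: over-reliance is following incorrect advice, under-reliance is rejecting correct advice. A sketch with invented trial data, not the study's:

```python
# Toy trials (hypothetical): (ai_advice_correct, participant_followed_ai).
trials = [(True, True), (True, True), (False, True), (False, False),
          (True, False), (False, True), (True, True), (False, False)]

wrong = [followed for correct, followed in trials if not correct]
right = [followed for correct, followed in trials if correct]

over_reliance = sum(wrong) / len(wrong)        # followed incorrect advice
under_reliance = 1 - sum(right) / len(right)   # rejected correct advice
print(f"over-reliance={over_reliance:.2f}, under-reliance={under_reliance:.2f}")
```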
River habitats are fragmented by barriers which impede the movement and dispersal of aquatic organisms. Restoring habitat connectivity is a primary objective of nature conservation plans, with multiple efforts to strategically restore connectivity at local, regional, and global scales. However, current approaches to prioritize connectivity restoration do not typically consider how barriers spatially fragment species' populations. Additionally, we lack knowledge on biodiversity baselines to predict which species would find suitable habitat after restoring connectivity. In this paper, we asked how neglecting these biodiversity baselines in river barrier removals impacts priority setting for conservation planning. We applied a novel modelling approach combining predictions of species distributions with network connectivity models to prioritize conservation actions in rivers of the Rhine-Aare system in Switzerland. Our results show that the high number and density of barriers has reduced structural and functional connectivity across representative catchments within the system. We show that fragmentation decreases habitat suitability for species and that using expected distributions as biodiversity baselines significantly affects priority settings for connectivity restorations compared to species-agnostic metrics based on river length. This indicates that priorities for barrier removals are ranked higher within the expected distributions of species to maximize functional connectivity while barriers in unsuitable regions are given lower importance scores. Our work highlights that the joint consideration of existing barriers and species' past and current distributions is critical for restoration plans to ensure the recovery and persistence of riverine fish diversity.
•The high number and density of barriers in river networks has substantially reduced structural and functional habitat connectivity for freshwater fish
•Incorporating the expected spatial distribution of freshwater fish species under natural conditions as biodiversity baselines in connectivity restoration planning
•Novel combination of explainable artificial intelligence tools in species distribution models and river connectivity modelling to prioritize barrier removal
•Habitat fragmentation reduces species habitat suitability and impacts priority settings for connectivity restoration
•Priorities for barrier removals are ranked higher within the expected distributions of species to maximise functional connectivity while barriers in unsuitable regions are given lower importance scores
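The prioritisation idea, ranking each barrier by how much connected suitable habitat its removal would restore, can be illustrated on a toy river graph. The sketch below scores removals with a DCI-style index weighted by suitable-habitat lengths; the segments, suitability values, and barrier names are invented, and the paper's coupled species-distribution and connectivity models are far richer:

```python
import networkx as nx

# Toy river network: nodes are stream segments, edges are junctions that
# may hold a barrier. 'segments' holds km of habitat a (hypothetical)
# species distribution model predicts as suitable in each segment.
segments = {"a": 4.0, "b": 2.0, "c": 5.0, "d": 1.0}
G = nx.Graph()
G.add_nodes_from(segments)
G.add_edge("a", "b", barrier=None)
G.add_edge("b", "c", barrier="weir_1")
G.add_edge("c", "d", barrier="culvert_2")

def connectivity_index(g):
    """DCI-style index weighted by suitable habitat: the probability that
    two random points of suitable habitat are connected (higher is better)."""
    total = sum(segments.values())
    passable = nx.Graph((u, v) for u, v, d in g.edges(data=True)
                        if d["barrier"] is None)
    passable.add_nodes_from(g)
    return sum((sum(segments[n] for n in comp) / total) ** 2
               for comp in nx.connected_components(passable))

baseline = connectivity_index(G)
gains = {}
for u, v, data in G.edges(data=True):
    name = data["barrier"]
    if name is None:
        continue
    data["barrier"] = None                       # simulate removal
    gains[name] = connectivity_index(G) - baseline
    data["barrier"] = name                       # restore

# weir_1 outranks culvert_2: it reconnects more suitable habitat.
for name, gain in sorted(gains.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{gain:.3f}")
```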
The inevitable rise and development of artificial intelligence (AI) was not a sudden occurrence. The greater the effect that AI has on humans, the more pressing the need is for us to understand it. This paper addresses research on the use of AI to evaluate new design methods and tools that can be leveraged to advance AI research, education, policy, and practice to improve the human condition. AI has the potential to educate, train, and improve the performance of humans, making them better at their tasks and activities. The use of AI can enhance human welfare in numerous respects, such as through improving the productivity of food, health, water, education, and energy services. However, the misuse of AI due to algorithm bias and a lack of governance could inhibit human rights and result in employment, gender, and racial inequality. We envision that AI can evolve into human-centered AI (HAI), which refers to approaching AI from a human perspective by considering human conditions and contexts. Most current discussions on AI technology focus on how AI can enable human performance. However, we explore how AI can also inhibit the human condition, and we advocate for an in-depth dialog between technology- and humanity-based researchers to improve understanding of HAI from various perspectives.
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, CEs found by existing methods often become invalid when slight changes occur in the parameters of the model they were generated for. The literature lacks a way to provide exhaustive robustness guarantees for CEs under model changes, in that existing methods to improve CEs' robustness are mostly heuristic, and the robustness performances are evaluated empirically using only a limited number of retrained models. To bridge this gap, we propose a novel interval abstraction technique for parametric machine learning models, which allows us to obtain provable robustness guarantees for CEs under a possibly infinite set of plausible model changes Δ. Based on this idea, we formalise a robustness notion for CEs, which we call Δ-robustness, in both binary and multi-class classification settings. We present procedures to verify Δ-robustness based on Mixed Integer Linear Programming, using which we further propose algorithms to generate CEs that are Δ-robust. In an extensive empirical study involving neural networks and logistic regression models, we demonstrate the practical applicability of our approach. We discuss two strategies for determining the appropriate hyperparameters in our method, and we quantitatively benchmark CEs generated by eleven methods, highlighting the effectiveness of our algorithms in finding robust CEs.
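The paper verifies Δ-robustness with Mixed Integer Linear Programming for neural networks, but the core idea is visible in closed form for a linear classifier: over an interval of plausible parameters, the worst-case margin of a counterfactual can be bounded exactly. A minimal sketch, assuming an L∞ parameter interval of radius delta (this simplified setting and the names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)      # trained logistic-regression weights (stand-ins)
b = 0.1
x_cf = rng.normal(size=5)   # candidate counterfactual aiming for class 1

def is_delta_robust(x, w, b, delta):
    """True iff w' @ x + b' > 0 for every (w', b') with max|w' - w| <= delta
    and |b' - b| <= delta. For a linear model the worst case over this
    interval abstraction has a closed form: subtract delta * (||x||_1 + 1)."""
    worst_margin = w @ x + b - delta * (np.abs(x).sum() + 1.0)
    return worst_margin > 0.0

print(is_delta_robust(x_cf, w, b, delta=0.05))
```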