Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPCLJ, UPUK, ZAGLJ, ZRSKP
The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.
• AI in Medicine becomes increasingly ubiquitous, raising new concerns and questions:
• How does an AI algorithm work — what is it doing?
• Does an AI system work as well as an expert?
• Does an AI system do what a user would do, were she in the same situation?
• Why can the system not tell a user how it arrived at a conclusion or made a decision?
• Here, we deal with the need to address gaps in the explainability of AI in Medicine.
Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.
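The gap the abstract describes, an accurate but opaque model probed by a separate explanation step, can be sketched with a toy permutation-importance example. The model, data, and routine below are illustrative assumptions for this sketch, not taken from the article:

```python
import random

# Illustrative "black-box" model: a fixed linear scorer whose internals
# we pretend are opaque. (Toy assumption, not from the article.)
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]  # feature 0 dominates feature 1

# Small synthetic dataset on which the model is (trivially) exact.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [black_box(x) for x in X]

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature):
    """Post-hoc explanation: how much does the error grow when one
    feature's values are shuffled, breaking its link to the target?"""
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse(model, shuffled, y) - mse(model, X, y)

importances = [permutation_importance(black_box, X, y, f) for f in (0, 1)]
# Feature 0 (coefficient 3.0) should matter far more than feature 1 (0.5).
```

The explanation here is produced without looking inside the model, which is the sense in which such methods add transparency to a black box; whether a person can act on that explanation is, in the article's terms, a question of causability rather than explainability.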
This article is categorized under:
Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction
Explainable AI.
Explainable artificial intelligence: an analytical review
Angelov, Plamen P.; Soares, Eduardo A.; Jiang, Richard ...
Wiley interdisciplinary reviews. Data mining and knowledge discovery,
September/October 2021, Volume: 11, Issue: 5
Journal Article
Peer reviewed
Open access
This paper provides a brief analytical review of the current state‐of‐the‐art in relation to the explainability of artificial intelligence in the context of recent advances in machine learning and deep learning. The paper starts with a brief historical introduction and a taxonomy, and formulates the main challenges in terms of explainability, building on the recently formulated National Institute of Standards and Technology (NIST) four principles of explainability. Recently published methods related to the topic are then critically reviewed and analyzed. Finally, future directions for research are suggested.
This article is categorized under:
Technologies > Artificial Intelligence
Fundamental Concepts of Data and Knowledge > Explainable AI
Accuracy versus interpretability for different machine learning models.
An "intriguing, insightful" look at how algorithms and robots could lead to social unrest—and how to avoid it (The Economist, Books of the Year). After decades of effort, researchers are finally cracking the code on artificial intelligence. Society stands on the cusp of unprecedented change, driven by advances in robotics, machine learning, and perception powering systems that rival or exceed human capabilities. Driverless cars, robotic helpers, and intelligent agents that promote our interests have the potential to usher in a new age of affluence and leisure—but as AI expert and Silicon Valley entrepreneur Jerry Kaplan warns, the transition may be protracted and brutal unless we address the two great scourges of the modern developed world: volatile labor markets and income inequality. In Humans Need Not Apply, he proposes innovative, free-market adjustments to our economic system and social policies to avoid an extended period of social turmoil. His timely and accessible analysis of the promises and perils of AI is a must-read for business leaders and policy makers on both sides of the aisle. "A reminder that AI systems don't need red laser eyes to be dangerous." (Times Higher Education Supplement) "Kaplan…sidesteps the usual arguments of techno-optimism and dystopia, preferring to go for pragmatic solutions to a shrinking pool of jobs." (Financial Times)
The ABC of AI
Stalmans, Ingeborg
Acta ophthalmologica (Oxford, England),
December 2022, Volume: 100, Issue: S275
Journal Article
Peer reviewed
Artificial Intelligence (AI) is a booming field and is extensively present in our daily life. At the same time, AI is often perceived as mysterious and therefore even threatening. A better understanding of the principles of AI may unveil the mysteries of this technology and reveal its potential to aid our lives, and in particular to facilitate and move forward the medical field. In this presentation, the principles of AI, the different methods, and the applications in various fields (from daily life to ophthalmology) will be discussed.