Machine learning (ML) is a growing field in medicine. This narrative review describes the current body of literature on ML for clinical decision support in infectious diseases (ID).
We aim to inform clinicians about the use of ML for diagnosis, classification, outcome prediction and antimicrobial management in ID.
References for this review were identified through searches of MEDLINE/PubMed, EMBASE, Google Scholar, bioRxiv, ACM Digital Library, arXiv and IEEE Xplore Digital Library up to July 2019.
We found 60 unique ML-clinical decision support systems (ML-CDSS) aiming to assist ID clinicians. Overall, 37 (62%) focused on bacterial infections, 10 (17%) on viral infections, nine (15%) on tuberculosis and four (7%) on any kind of infection. Among them, 20 (33%) addressed the diagnosis of infection, 18 (30%) the prediction, early detection or stratification of sepsis, 13 (22%) the prediction of treatment response, four (7%) the prediction of antibiotic resistance, three (5%) the choice of antibiotic regimen and two (3%) the choice of a combination antiretroviral therapy. The ML-CDSS were developed for intensive care units (n = 24, 40%), ID consultation (n = 15, 25%), medical or surgical wards (n = 13, 22%), emergency department (n = 4, 7%), primary care (n = 3, 5%) and antimicrobial stewardship (n = 1, 2%). Fifty-three ML-CDSS (88%) were developed using data from high-income countries and seven (12%) with data from low- and middle-income countries (LMIC). The evaluation of ML-CDSS was limited to measures of performance (e.g. sensitivity, specificity) for 57 ML-CDSS (95%); only three (5%) were evaluated with data from clinical practice.
Considering comprehensive patient data from socioeconomically diverse healthcare settings, including primary care and LMICs, may improve the ability of ML-CDSS to suggest decisions adapted to various clinical contexts. Current gaps identified in the evaluation of ML-CDSS must also be addressed to establish the potential impact of such tools for clinicians and patients.
Summary
Objectives: This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance.
Method: A narrative review of existing research and evaluation approaches along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems.
Results: There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts, and build on best practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI that dynamically harnesses vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective about the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators to monitor AI, are also discussed.
Conclusion: Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements that are required for the new generation of AI-enabled clinical decision support will emerge through practical application.
Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Although alert fatigue is blamed for high override rates in contemporary clinical decision support systems, the concept of alert fatigue is poorly defined. We tested hypotheses arising from two possible alert fatigue mechanisms: (A) cognitive overload associated with amount of work, complexity of work, and effort distinguishing informative from uninformative alerts, and (B) desensitization from repeated exposure to the same alert over time.
Retrospective cohort study using electronic health record data (both drug alerts and clinical practice reminders) from January 2010 through June 2013 from 112 ambulatory primary care clinicians. The cognitive overload hypotheses were that alert acceptance would be lower with higher workload (number of encounters, number of patients), higher work complexity (patient comorbidity, alerts per encounter), and more alerts low in informational value (repeated alerts for the same patient in the same year). The desensitization hypothesis was that, for newly deployed alerts, acceptance rates would decline after an initial peak.
On average, one-quarter of drug alerts received by a primary care clinician, and one-third of clinical reminders, were repeats for the same patient within the same year. Alert acceptance was associated with work complexity and repeated alerts, but not with the amount of work. Likelihood of reminder acceptance dropped by 30% for each additional reminder received per encounter, and by 10% for each five percentage point increase in proportion of repeated reminders. The newly deployed reminders did not show a pattern of declining response rates over time, which would have been consistent with desensitization. Interestingly, nurse practitioners were 4 times as likely to accept drug alerts as physicians.
Clinicians became less likely to accept alerts as they received more of them, particularly more repeated alerts. There was no evidence of an effect of workload per se, or of desensitization over time for a newly deployed alert. Reducing within-patient repeats may be a promising target for reducing alert overrides and alert fatigue.
Computerised clinical decision support (CDS) can potentially better inform decisions, and it can help with the management of information overload. It is perceived to be a key component of a learning health care system. Despite its increasing implementation worldwide, it remains uncertain why the effect of CDS varies and which factors make CDS more effective.
To examine which factors make CDS strategies more effective on a number of outcomes, including adherence to recommended practice, patient outcome measures, economic measures, provider or patient satisfaction, and medical decision quality.
We identified randomised controlled trials, non-randomised trials, and controlled before-and-after studies that directly compared CDS implementation with a given factor to CDS without that factor by searching CENTRAL, MEDLINE, EMBASE, and CINAHL and checking reference lists of relevant studies. We considered CDS with any objective for any condition in any healthcare setting. We included CDS interventions that were either displayed on screen or provided on paper and that were directed at healthcare professionals or targeted at both professionals and patients. The reviewers screened the potentially relevant studies in duplicate. They extracted data and assessed risk of bias in independent pairs or individually followed by a double check by another reviewer. We summarised results using medians and interquartile ranges and rated our certainty in the evidence using the GRADE system.
We identified 66 head-to-head trials that we synthesised across 14 comparisons of CDS intervention factors. Providing CDS automatically versus on demand led to large improvements in adherence. Displaying CDS on-screen versus on paper led to moderate improvements and making CDS more versus less patient-specific improved adherence modestly. When CDS interventions were combined with professional-oriented strategies, combined with patient-oriented strategies, or combined with staff-oriented strategies, then adherence improved slightly. Providing CDS to patients slightly increased adherence versus CDS aimed at the healthcare provider only. Making CDS advice more explicit and requiring users to respond to the advice made little or no difference. The CDS intervention factors made little or no difference to patient outcomes. The results for economic outcomes and satisfaction outcomes were sparse.
Multiple factors may affect the success of CDS interventions. CDS may be more effective when the advice is provided automatically and displayed on-screen and when the suggestions are more patient-specific. CDS interventions combined with other strategies probably also improve adherence. Providing CDS directly to patients may also positively affect adherence. The certainty of the evidence was low to moderate for all factors.
PROSPERO, CRD42016033738.
Providing comprehensive and individualized diabetes care remains a significant challenge in the face of the increasing complexity of diabetes management and a lack of specialized endocrinologists to support diabetes care. Clinical decision support systems (CDSSs) are progressively being used to improve diabetes care, yet many health care providers lack awareness and knowledge about CDSSs in diabetes care. A comprehensive analysis of the applications of CDSSs in diabetes care is still lacking.
This review aimed to summarize the research landscape, clinical applications, and impact on both patients and physicians of CDSSs in diabetes care.
We conducted a scoping review following the Arksey and O'Malley framework. A search was conducted in 7 electronic databases to identify the clinical applications of CDSSs in diabetes care up to June 30, 2022. Additional searches were conducted for conference abstracts from the period of 2021-2022. Two researchers independently performed the screening and data charting processes.
Of 11,569 retrieved studies, 85 (0.7%) were included for analysis. Research interest is growing in this field, with 45 (53%) of the 85 studies published in the past 5 years. Among the 58 (68%) out of 85 studies disclosing the underlying decision-making mechanism, most CDSSs (44/58, 76%) were knowledge based, while the number of non-knowledge-based systems has been increasing in recent years. Among the 81 (95%) out of 85 studies disclosing application scenarios, the majority of CDSSs were used for treatment recommendation (63/81, 78%). Among the 39 (46%) out of 85 studies disclosing physician user types, primary care physicians (20/39, 51%) were the most common, followed by endocrinologists (15/39, 39%) and nonendocrinology specialists (8/39, 21%). CDSSs significantly improved patients' blood glucose, blood pressure, and lipid profiles in 71% (45/63), 67% (12/18), and 38% (8/21) of the studies, respectively, with no increase in the risk of hypoglycemia.
CDSSs are both effective and safe in improving diabetes care, implying that they could be a potentially reliable assistant in diabetes care, especially for physicians with limited experience and patients with limited access to medical resources.
RR2-10.37766/inplasy2022.9.0061.
Decision support systems based on reinforcement learning (RL) have been implemented to facilitate the delivery of personalized care. This paper aimed to provide a comprehensive review of RL applications in the critical care setting.
This review aimed to survey the literature on RL applications for clinical decision support in critical care and to provide insight into the challenges of applying various RL models.
We performed an extensive search of the following databases: PubMed, Google Scholar, Institute of Electrical and Electronics Engineers (IEEE), ScienceDirect, Web of Science, Medical Literature Analysis and Retrieval System Online (MEDLINE), and Excerpta Medica Database (EMBASE). Studies published over the past 10 years (2010-2019) that have applied RL for critical care were included.
We included 21 papers and found that RL has been used to optimize the choice of medications, drug dosing, and timing of interventions and to target personalized laboratory values. We further compared and contrasted the design of the RL models and the evaluation metrics for each application.
RL has great potential for enhancing decision making in critical care. Challenges regarding RL system design, evaluation metrics, and model choice exist. More importantly, further work is required to validate RL in authentic clinical environments.
Objective: Thus far, most applications in precision mental health have not been evaluated prospectively. This article presents the results of a prospective randomized-controlled trial investigating the effects of a digital decision support and feedback system, which includes two components of patient-specific recommendations: (a) a clinical strategy recommendation and (b) adaptive recommendations for patients at risk for treatment failure. Method: Therapist-patient dyads (N = 538) in a cognitive behavioral therapy outpatient clinic were randomized to either having access to a decision support system (intervention group; n = 335) or not (treatment as usual; n = 203). First, treatment strategy recommendations (problem-solving, motivation-oriented, or a mix of both strategies) for the first 10 sessions were evaluated. Second, the effect of psychometric feedback enhanced with clinical problem-solving tools on treatment outcome was investigated. Results: The prospective evaluation showed a differential effect size of about 0.3 when therapists followed the recommended treatment strategy in the first 10 sessions. Moreover, the linear mixed models revealed therapist symptom awareness and therapist attitude and confidence as significant predictors of outcome, as well as therapist-rated usefulness of feedback as a significant moderator of the feedback-outcome and the not-on-track-outcome associations. However, no main effects were found for feedback. Conclusions: The results demonstrate the importance of prospective studies and the high-quality implementation of digital decision support tools in clinical practice. Therapists seem to be able to learn from such systems and incorporate them into their clinical practice to enhance patient outcomes, but only when implementation is successful.
What is the public health significance of this article?
This randomized clinical implementation trial provides insight into the evaluation of a clinical decision support and feedback system including personalized pretherapy recommendations and enhanced psychometric feedback during treatment. As it is one of the first decision systems to have been implemented and evaluated prospectively in mental health, this study helps to improve such systems designed to support and change the way psychotherapy is conducted. The results underscore the importance of high-quality implementation of digital decision support tools in clinical practice.
The implementation of clinical decision support systems (CDSSs) as an intervention to foster clinical practice change is affected by many factors. Key factors include those associated with behavioral change and those associated with technology acceptance. However, the literature regarding these subjects is fragmented and originates from two traditionally separate disciplines: implementation science and technology acceptance.
Our objective is to propose an integrated framework that bridges the gap between the behavioral change and technology acceptance aspects of the implementation of CDSSs.
We employed an iterative process to map constructs from four contributing frameworks-the Theoretical Domains Framework (TDF); the Consolidated Framework for Implementation Research (CFIR); the Human, Organization, and Technology-fit framework (HOT-fit); and the Unified Theory of Acceptance and Use of Technology (UTAUT)-and the findings of 10 literature reviews, identified through a systematic review of reviews approach.
The resulting framework comprises 22 domains: agreement with the decision algorithm; attitudes; behavioral regulation; beliefs about capabilities; beliefs about consequences; contingencies; demographic characteristics; effort expectancy; emotions; environmental context and resources; goals; intentions; intervention characteristics; knowledge; memory, attention, and decision processes; patient-health professional relationship; patient's preferences; performance expectancy; role and identity; skills, ability, and competence; social influences; and system quality. We demonstrate the use of the framework providing examples from two research projects.
We proposed BEAR (BEhavior and Acceptance fRamework), an integrated framework that bridges the gap between behavioral change and technology acceptance, thereby widening the view established by current models.