Objective To illustrate ways in which clinical decision support systems (CDSSs) malfunction and identify patterns of such malfunctions.
Materials and Methods We identified and investigated several CDSS malfunctions at Brigham and Women’s Hospital and present them as a case series. We also conducted a preliminary survey of Chief Medical Information Officers to assess the frequency of such malfunctions.
Results We identified four CDSS malfunctions at Brigham and Women’s Hospital: (1) an alert for monitoring thyroid function in patients receiving amiodarone stopped working when an internal identifier for amiodarone was changed in another system; (2) an alert for lead screening for children stopped working when the rule was inadvertently edited; (3) a software upgrade of the electronic health record software caused numerous spurious alerts to fire; and (4) a malfunction in an external drug classification system caused an alert to inappropriately suggest antiplatelet drugs, such as aspirin, for patients already taking one. We found that 93% of the Chief Medical Information Officers who responded to our survey had experienced at least one CDSS malfunction, and two-thirds experienced malfunctions at least annually.
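The first malfunction illustrates a general failure mode: a rule keyed to an internal drug identifier silently stops matching when that identifier is changed in an upstream system. A minimal sketch of this mechanism, with invented identifiers and simplified rule logic (not the actual BWH rule):

```python
# Hypothetical sketch of a CDS rule keyed to an internal drug ID.
# All identifiers and field names here are invented for illustration.

def tsh_alert_fires(med_list, amiodarone_id):
    """Fire a thyroid-monitoring alert if the patient is on amiodarone."""
    return any(med["internal_id"] == amiodarone_id for med in med_list)

meds = [{"name": "amiodarone 200 mg tablet", "internal_id": "D-4821"}]

# Before the upstream change: the rule matches and the alert fires.
assert tsh_alert_fires(meds, amiodarone_id="D-4821") is True

# After another system renumbers the drug to "D-9907", the rule's
# hard-coded ID no longer matches anything. The alert silently stops
# firing -- no error is raised to signal the malfunction.
meds_after = [{"name": "amiodarone 200 mg tablet", "internal_id": "D-9907"}]
assert tsh_alert_fires(meds_after, amiodarone_id="D-4821") is False
```

Keying rules to a standard terminology (e.g., RxNorm codes) rather than internal identifiers reduces this risk, although terminology mapping changes can still break rules in the same silent way.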
Discussion CDSS malfunctions are widespread and often persist for long periods. The failure of alerts to fire is particularly difficult to detect. A range of causes, including changes in codes and fields, software upgrades, inadvertent disabling or editing of rules, and malfunctions of external systems commonly contribute to CDSS malfunctions, and current approaches for preventing and detecting such malfunctions are inadequate.
Conclusion CDSS malfunctions occur commonly and often go undetected. Better methods are needed to prevent and detect these malfunctions.
Highlights
• We studied problem list completeness for diabetes at ten sites using mixed methods.
• Problem list completeness across the sites varied substantially, from 60.2% to 99.4%.
• Six success factors for problem list completeness were identified from four top-performing sites.
• All ten sites were surveyed about use of these success factors.
Abstract
Background
Rule-based clinical decision support alerts are known to malfunction, but tools for discovering malfunctions are limited.
Objective
To investigate whether user override comments can be used to discover malfunctions.
Methods
We manually classified all rules in our database with at least 10 override comments into 3 categories based on a sample of override comments: “broken,” “not broken, but could be improved,” and “not broken.” We used 3 methods (frequency of comments, cranky word list heuristic, and a Naïve Bayes classifier trained on a sample of comments) to automatically rank rules based on features of their override comments. We evaluated each ranking using the manual classification as truth.
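The "cranky word list" heuristic described above can be sketched in a few lines: score each rule by the share of its override comments containing words that suggest user frustration, then rank rules by that score. The word list and comments below are invented examples, not the study's actual data:

```python
# Illustrative sketch of the cranky-word-list heuristic: rank CDS
# rules by how often their override comments contain "cranky" words.
# The word list and override comments are hypothetical examples.

CRANKY_WORDS = {"stupid", "wrong", "broken", "useless", "annoying", "incorrect"}

def cranky_score(comments):
    """Fraction of override comments containing at least one cranky word."""
    if not comments:
        return 0.0
    hits = sum(
        any(word in comment.lower() for word in CRANKY_WORDS)
        for comment in comments
    )
    return hits / len(comments)

overrides = {
    "rule_A": ["patient not on amiodarone, alert is wrong", "broken alert"],
    "rule_B": ["will order later", "already addressed"],
}

# Rules with crankier comments rise to the top of the review queue.
ranked = sorted(overrides, key=lambda r: cranky_score(overrides[r]), reverse=True)
# ranked == ["rule_A", "rule_B"]: rule_A has 2/2 cranky comments, rule_B 0/2.
```

A reviewer would then read the top-ranked rules' comments manually; the Naïve Bayes variant replaces the fixed word list with per-word weights learned from labeled comments.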
Results
Of the rules investigated, 62 were broken, 13 could be improved, and the remaining 45 were not broken. Frequency of comments performed worse than a random ranking, with precision at 20 of 8 and AUC = 0.487. The cranky comments heuristic performed better with precision at 20 of 16 and AUC = 0.723. The Naïve Bayes classifier had precision at 20 of 17 and AUC = 0.738.
Discussion
Override comments uncovered malfunctions in 26% of all rules active in our system. This is a lower bound on total malfunctions and much higher than expected. Even for low-resource organizations, reviewing comments identified by the cranky word list heuristic may be an effective and feasible way of finding broken alerts.
Conclusion
Override comments are a rich data source for finding alerts that are broken or could be improved. If possible, we recommend monitoring all override comments on a regular basis.
To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions.
We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions.
We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common.
Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS.
CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
Malfunctions in clinical decision support (CDS) systems occur for a multitude of reasons and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems.
We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model, and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults, and thyroid testing in patients taking amiodarone.
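The shared idea behind these models is to treat daily alert-firing counts as a time series and flag points where the series' behavior shifts abruptly. A minimal least-squares changepoint sketch conveys the intuition; it is an illustrative toy, not any of the specific packaged models compared in the study:

```python
# Toy changepoint detector on daily alert-firing counts: find the
# split that best explains the series as two segments with different
# means. Illustrative only -- not the Poisson, ARIMA, or EDM models.

def best_changepoint(counts):
    """Return (index, cost) of the split minimizing within-segment SSE."""
    def sse(segment):
        if not segment:
            return 0.0
        mean = sum(segment) / len(segment)
        return sum((x - mean) ** 2 for x in segment)

    return min(
        ((k, sse(counts[:k]) + sse(counts[k:])) for k in range(1, len(counts))),
        key=lambda kc: kc[1],
    )

# An alert firing ~50 times/day suddenly drops to ~0 after day 5 --
# the kind of abrupt silence that signals a broken rule.
daily = [52, 49, 51, 48, 50, 0, 1, 0, 0, 0]
split_day, _ = best_changepoint(daily)
# split_day == 5: the day the alert went quiet.
```

The production models add statistical machinery on top of this intuition, such as Poisson likelihoods for count data or seasonal decomposition to avoid flagging ordinary weekly cycles as malfunctions.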
Poisson changepoint, ARIMA, HDC, Bayesian changepoint and the SHESD model were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone.
Malfunctions/anomalies occur frequently in CDS alert systems. It is important to be able to detect such anomalies promptly. Anomaly detection models are useful tools to aid such detections.
Objective: To examine medication errors potentially related to computerized prescriber order entry (CPOE) and refine a previously published taxonomy to classify them.
Materials and Methods: We reviewed all patient safety medication reports that occurred in the medication ordering phase from 6 sites participating in a United States Food and Drug Administration–sponsored project examining CPOE safety. Two pharmacists independently reviewed each report to confirm whether the error occurred in the ordering/prescribing phase and was related to CPOE. For those related to CPOE, we assessed whether CPOE facilitated (actively contributed to) the error or failed to prevent the error (did not directly cause it, but optimal systems could have potentially prevented it). A previously developed taxonomy was iteratively refined to classify the reports.
Results: Of 2522 medication error reports, 1308 (51.9%) were related to CPOE. Of these, CPOE facilitated the error in 171 (13.1%) and potentially could have prevented the error in 1137 (86.9%). The most frequent categories of “what happened to the patient” were delays in medication reaching the patient, potentially receiving duplicate drugs, or receiving a higher dose than indicated. The most frequent categories for “what happened in CPOE” included orders not routed to or received at the intended location, wrong dose ordered, and duplicate orders. Variations were seen in the format, categorization, and quality of reports, resulting in error causation being assignable in only 403 instances (31%).
Discussion and Conclusion: Errors related to CPOE commonly involved transmission errors, erroneous dosing, and duplicate orders. More standardized safety reporting using a common taxonomy could help health care systems and vendors learn and implement prevention strategies.
Fuelled by compelling evidence that computerised provider order entry (CPOE) improves medication safety and the infusion of tens of billions of federal electronic medical record (EMR) stimulus dollars, electronic medication prescribing in the USA has gone from <10% to >70% of prescriptions being written electronically in just the past six years.1-4 Most medications are now ordered electronically both inside and outside the hospital, and they are transmitted electronically to pharmacies. A series of studies by the U.S. Institute of Medicine and Office of the National Coordinator for Health Information Technology (HIT) have recently spotlighted a number of potential safety risks.5-7 To better understand these risks and the opportunities for improvement, particularly as they relate to drug names and drug ordering, the U.S. Food and Drug Administration Center for Drug Evaluation and Research's Division of Medication Error Prevention and Analysis contracted the Brigham and Women's Hospital (BWH) Center for Patient Safety Research and Practice to study CPOE and risks that could potentially lead to medication errors.
• CDS malfunctions are common and may go undetected for long periods of time, putting patients at risk.
• Institutions have inadequate tools and processes to detect and prevent malfunctions.
• Experts identified, evaluated, and prioritized 47 best practices for preventing CDS malfunctions in 7 categories.
• Experts pointed to the issue of shared responsibility between healthcare organizations and electronic health record vendors.
Developing effective and reliable rule-based clinical decision support (CDS) alerts and reminders is challenging. Using a previously developed taxonomy for alert malfunctions, we identified best practices for developing, testing, implementing, and maintaining alerts and avoiding malfunctions.
We identified 72 initial practices from the literature, interviews with subject matter experts, and prior research. To refine, enrich, and prioritize the list of practices, we used the Delphi method with two rounds of consensus-building and refinement. We used a larger than normal panel of experts to include a wide representation of CDS subject matter experts from various disciplines.
28 experts completed Round 1 and 25 completed Round 2. Round 1 narrowed the list to 47 best practices in 7 categories: knowledge management, designing and specifying, building, testing, deployment, monitoring and feedback, and people and governance. Round 2 developed consensus on the importance and feasibility of each best practice.
The Delphi panel identified a range of best practices that may help to improve implementation of rule-based CDS and avert malfunctions. Due to limitations on resources and personnel, not everyone can implement all best practices. The most robust processes require investing in a data warehouse. Experts also pointed to the issue of shared responsibility between the healthcare organization and the electronic health record vendor.
These 47 best practices represent an ideal situation. The research identifies the balance between importance and difficulty, highlights the challenges faced by organizations seeking to implement CDS, and describes several opportunities for future research to reduce alert malfunctions.
Abstract
Objective
To improve problem list documentation and care quality.
Materials and methods
We developed algorithms to infer clinical problems a patient has that are not recorded on the coded problem list using structured data in the electronic health record (EHR) for 12 clinically significant heart, lung, and blood diseases. We also developed a clinical decision support (CDS) intervention which suggests adding missing problems to the problem list. We evaluated the intervention at 4 diverse healthcare systems using 3 different EHRs in a randomized trial using 3 predetermined outcome measures: alert acceptance, problem addition, and National Committee for Quality Assurance Healthcare Effectiveness Data and Information Set (NCQA HEDIS) clinical quality measures.
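The inference step described above can be sketched as a rule that combines structured EHR data (labs, medications) to infer a likely problem and suggests it only when it is missing from the coded list. The thresholds, field names, and rule below are hypothetical illustrations, not the study's actual algorithms:

```python
# Hypothetical sketch of problem-list inference from structured EHR
# data. Field names and the diabetes rule's thresholds are invented
# for illustration; the study's 12 disease algorithms are not shown.

def suggest_missing_problems(patient):
    """Return problems inferred from structured data but absent from the list."""
    suggestions = []
    coded = set(patient["problem_list"])
    # Example rule: laboratory or medication evidence of diabetes
    # without a corresponding coded problem-list entry.
    diabetes_evidence = (
        patient.get("latest_hba1c", 0) >= 6.5
        or "insulin" in patient.get("medications", [])
    )
    if diabetes_evidence and "diabetes mellitus" not in coded:
        suggestions.append("diabetes mellitus")
    return suggestions

patient = {
    "problem_list": ["hypertension"],
    "latest_hba1c": 8.1,
    "medications": ["metformin"],
}
# The CDS intervention surfaces the suggestion to the clinician, who
# can accept it (adding the problem to the list) or override it.
print(suggest_missing_problems(patient))  # ['diabetes mellitus']
```

The accept/override decision is what the trial's alert-acceptance outcome measure counts.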
Results
There were 288 832 opportunities to add a problem in the intervention arm and the problem was added 63 777 times (acceptance rate 22.1%). The intervention arm had 4.6 times as many problems added as the control arm. There were no significant differences in any of the clinical quality measures.
Discussion
The CDS intervention was highly effective at improving problem list completeness. However, the improvement in problem list utilization was not associated with improvement in the quality measures. The lack of effect suggests that problem list documentation is not directly associated with improvements in quality as measured by NCQA HEDIS quality measures. However, improved problem list accuracy has other benefits, including better clinical care, improved patient comprehension of health conditions, more accurate CDS and population health management, and support for research.
Conclusion
An EHR-embedded CDS intervention was effective at improving problem list completeness but was not associated with improvement in quality measures.
Microbiology laboratory results are complex and cumbersome to review. We sought to develop a new review tool to improve the ease and accuracy of microbiology results review.
We observed and informally interviewed clinicians to determine areas in which existing microbiology review tools were lacking. We developed a new tool that reorganizes microbiology results by time and organism. We conducted a scenario-based usability evaluation to compare the new tool to existing legacy tools, using a balanced block design.
The average time-on-task decreased from 45.3 min for the legacy tools to 27.1 min for the new tool (P < .0001). Total errors decreased from 41 with the legacy tools to 19 with the new tool (P = .0068). The average Single Ease Question score was 5.65 (out of 7) for the new tool, compared to 3.78 for the legacy tools (P < .0001). The new tool scored 88 ("Excellent") on the System Usability Scale.
The new tool substantially improved efficiency, accuracy, and usability. It was subsequently integrated into the electronic health record and rolled out system-wide. This project provides an example of how clinical and informatics teams can innovate alongside a commercial electronic health record (EHR).