Until we have a meaningful alternative, lockdown is the only thing we can do to prevent further catastrophic spread of the virus, says Edward R Melnick. But John PA Ioannidis argues that any benefits of lockdown depend on its effectiveness and the covid-19 burden, and that the harms are multifarious.
To describe and benchmark physician-perceived electronic health record (EHR) usability as defined by a standardized metric of technology usability, and to evaluate its association with professional burnout among physicians.
This cross-sectional survey of US physicians from all specialty disciplines was conducted between October 12, 2017, and March 15, 2018, using the American Medical Association Physician Masterfile. Among the 30,456 invited physicians, 5197 (17.1%) completed surveys. A random 25% (n=1250) of respondents in the primary survey received a subsurvey evaluating EHR usability, and 870 (69.6%) completed it. EHR usability was assessed using the System Usability Scale (SUS; range 0-100). SUS scores were normalized to percentile rankings across more than 1300 previous studies from other industries. Burnout was measured using the Maslach Burnout Inventory.
Mean ± SD SUS score was 45.9±21.9. A score of 45.9 is in the bottom 9% of scores across previous studies and falls in the "not acceptable" range, corresponding to a grade of F. On multivariate analysis adjusting for age, sex, medical specialty, practice setting, hours worked, and number of nights on call weekly, physician-rated EHR usability was independently associated with burnout: each 1-point more favorable SUS score was associated with 3% lower odds of burnout (odds ratio, 0.97; 95% CI, 0.97-0.98; P<.001).
The usability of current EHR systems received a grade of F from physician users when evaluated with a standardized metric of technology usability. A strong dose-response relationship between EHR usability and the odds of burnout was observed.
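The dose-response relationship can be illustrated with simple arithmetic: a per-point odds ratio of 0.97 compounds multiplicatively across larger usability differences. A minimal sketch (the 10-point SUS improvement below is an illustrative assumption, not a figure from the study):

```python
# Illustration: compounding a per-point odds ratio over a larger SUS change.
# OR = 0.97 per 1-point SUS increase (from the study); the 10-point delta is hypothetical.
PER_POINT_OR = 0.97

def compounded_or(delta_sus: float) -> float:
    """Odds ratio implied by a delta_sus-point change, assuming a constant per-point OR."""
    return PER_POINT_OR ** delta_sus

ten_point_or = compounded_or(10)          # ~0.74
reduction_pct = (1 - ten_point_or) * 100  # ~26% lower odds of burnout
```

Under this reading, a 10-point usability improvement would correspond to roughly 26% lower odds of burnout, which is why the authors describe the relationship as dose-response.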
IMPORTANCE: As coronavirus disease 2019 (COVID-19) spread throughout the US in the early months of 2020, acute care delivery changed to accommodate an influx of patients with a highly contagious infection about which little was known. OBJECTIVE: To examine trends in emergency department (ED) visits and visits that led to hospitalizations over a 4-month period leading up to and during the COVID-19 outbreak in the US. DESIGN, SETTING, AND PARTICIPANTS: This retrospective, observational, cross-sectional study of 24 EDs in 5 large health care systems in Colorado (n = 4), Connecticut (n = 5), Massachusetts (n = 5), New York (n = 5), and North Carolina (n = 5) examined daily ED visit and hospital admission rates from January 1 to April 30, 2020, in relation to national and the 5 states' COVID-19 case counts. EXPOSURES: Time (day) as a continuous variable. MAIN OUTCOMES AND MEASURES: Daily counts of ED visits, hospital admissions, and COVID-19 cases. RESULTS: A total of 24 EDs were studied. Annual ED volume before the COVID-19 pandemic ranged from 13,000 to 115,000 visits; the decrease in ED visits ranged from 41.5% in Colorado to 63.5% in New York. The weeks with the most rapid decreases in visits were in March 2020, corresponding with national public health messaging about COVID-19. Hospital admission rates from the ED were stable until new COVID-19 case rates began to increase locally; the largest relative increase in admission rates was 149.0% in New York, followed by 51.7% in Massachusetts, 36.2% in Connecticut, 29.4% in Colorado, and 22.0% in North Carolina. CONCLUSIONS AND RELEVANCE: From January through April 2020, as the COVID-19 pandemic intensified in the US, temporal associations were observed with a decrease in ED visits and an increase in hospital admission rates in 5 health care systems in 5 states.
These findings suggest that practitioners and public health officials should emphasize the importance of visiting the ED during the COVID-19 pandemic for serious symptoms, illnesses, and injuries that cannot be managed in other settings.
Objectives
Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data–driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case.
Methods
This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in‐hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi‐square statistics.
Results
There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB‐65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons).
Conclusions
In this proof‐of‐concept study, a local big data–driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in‐hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high‐risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions.
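The models above are compared by area under the receiver operating characteristic curve (AUC), which equals the probability that a randomly chosen positive case (here, an in-hospital death) receives a higher predicted risk than a randomly chosen negative case. A minimal pairwise implementation of that definition (the labels and scores below are toy values, not study data):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney pairwise interpretation: the fraction of
    (positive, negative) pairs in which the positive case scores higher.
    Tied scores count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two deaths (label 1) and two survivors (label 0)
# with hypothetical predicted risks.
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc(labels, scores))  # 0.75: 3 of the 4 positive-negative pairs are ranked correctly
```

On this reading, the random forest's AUC of 0.86 means it correctly ranks a randomly drawn death above a randomly drawn survivor about 86% of the time, versus roughly 69-76% for the comparator models.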
Despite considerable progress in tackling cardiovascular disease over the past 50 years, many gaps in the quality of care for cardiovascular disease remain. Multiple missed opportunities have been identified at every step in the prevention and treatment of cardiovascular disease, such as failure to make risk factor modifications, failure to diagnose cardiovascular disease, and failure to use proper evidence-based treatments. With the digital transformation of medicine and advances in health information technology, clinical decision support (CDS) tools offer promise to enhance the efficiency and effectiveness of delivery of cardiovascular care. However, to date, the promise of CDS delivering scalable and sustained value for patient care in clinical practice has not been realized. This article reviews the evidence on key emerging questions around the development, implementation, and regulation of CDS with a focus on cardiovascular disease. It first reviews evidence on the effectiveness of CDS on healthcare process and clinical outcomes related to cardiovascular disease and design features associated with CDS effectiveness. It then reviews the barriers encountered during implementation of CDS in cardiovascular care, with a focus on unintended consequences and strategies to promote successful implementation. Finally, it reviews the legal and regulatory environment of CDS with specific examples for cardiovascular disease.
Physician turnover places a heavy burden on the healthcare industry, patients, physicians, and their families. Having a mechanism in place to identify physicians at risk for departure could help target appropriate interventions that prevent departure. We collected physician characteristics, electronic health record (EHR) use patterns, and clinical productivity data from a large ambulatory-based practice of non-teaching physicians to build a predictive model. We used several techniques to identify possible intervenable variables. Specifically, we used gradient boosted trees to predict the probability of a physician departing within an interval of 6 months. Several variables contributed significantly to predicting physician departure, including tenure (time since hiring date), panel complexity, physician demand, physician age, and inbox and documentation time. These variables were identified by training, validating, and testing the model, then computing SHAP (SHapley Additive exPlanation) values to investigate which variables most influence the model's predictions. We found these top variables to have large interactions with other variables, indicating their importance. Because these variables may be predictive of physician departure, they could prove useful for identifying at-risk physicians who would benefit from targeted interventions.
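The modeling pipeline described above can be sketched on synthetic data. The feature names, the synthetic departure label, and the data-generating coefficients below are all illustrative assumptions; and where the study used SHAP values (which require the separate `shap` package), the model's built-in impurity-based feature importances stand in as a simpler substitute:

```python
# Sketch of a gradient boosted trees model for 6-month physician departure,
# on synthetic data. All features and coefficients are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 25, n),   # tenure in years (hypothetical feature)
    rng.uniform(0, 3, n),    # daily inbox hours (hypothetical feature)
    rng.uniform(25, 70, n),  # physician age (hypothetical feature)
])
# Synthetic outcome: departure more likely with short tenure and heavy inbox time.
logit = -2.0 - 0.15 * X[:, 0] + 1.2 * X[:, 1]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]   # predicted 6-month departure probability
importances = model.feature_importances_  # impurity-based stand-in for SHAP values
```

A production version would replace the `importances` line with SHAP values, which additionally expose the per-physician direction of each variable's contribution and the variable interactions the abstract mentions.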
The stepped wedge cluster randomized design has received increasing attention in pragmatic clinical trials and implementation science research. The key feature of the design is the unidirectional crossover of clusters from the control to the intervention condition on a staggered schedule, which induces confounding of the intervention effect by time. The stepped wedge design first appeared in the Gambia hepatitis study in the 1980s, but the statistical model used for the design and analysis was not formally introduced until 2007, in an article by Hussey and Hughes. Since then, a variety of mixed-effects model extensions have been proposed for the design and analysis of these trials. In this article, we explore these extensions under a unified perspective. We provide a general model representation and regard the various extensions as alternative ways to characterize the secular trend, the intervention effect, and sources of heterogeneity. We review the key model ingredients and clarify their implications for design and analysis. The article serves as an entry point to the evolving statistical literature on stepped wedge designs.
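The Hussey and Hughes formulation referenced above is, in its basic form, a linear mixed model; a sketch in the commonly used notation (the symbols follow convention rather than any single source):

```latex
% Basic Hussey-Hughes model for cluster i, period j, subject k:
%   \mu        overall mean
%   \beta_j    fixed effect of period j (the secular trend)
%   X_{ij}     0/1 indicator that cluster i is under the intervention in period j
%   \theta     intervention effect
%   \alpha_i   random cluster effect, \alpha_i \sim N(0, \tau^2)
%   e_{ijk}    residual error, e_{ijk} \sim N(0, \sigma^2)
Y_{ijk} = \mu + \beta_j + X_{ij}\theta + \alpha_i + e_{ijk}
```

The period effects \beta_j are what adjust for the confounding of intervention and time induced by the staggered rollout; the extensions the article surveys generalize the trend, the intervention effect, and the random-effects structure.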
The aim of this article is to compare the aims, measures, methods, limitations, and scope of studies that employ vendor-derived and investigator-derived measures of electronic health record (EHR) use, and to assess measure consistency across studies.
We searched PubMed for articles published between July 2019 and December 2021 that employed measures of EHR use derived from EHR event logs. We coded the aims, measures, methods, limitations, and scope of each article and compared articles employing vendor-derived and investigator-derived measures.
One hundred and two articles met inclusion criteria; 40 employed vendor-derived measures, 61 employed investigator-derived measures, and 1 employed both. Studies employing vendor-derived measures were more likely than those employing investigator-derived measures to observe EHR use only in ambulatory settings (83% vs 48%, P = .002) and only by physicians or advanced practice providers (100% vs 54% of studies, P < .001). Studies employing vendor-derived measures were also more likely to measure durations of EHR use (P < .001 for 6 different activities), but definitions of measures such as time outside scheduled hours varied widely. Eight articles reported measure validation. The reported limitations of vendor-derived measures included measure transparency and availability for certain clinical settings and roles.
Vendor-derived measures are increasingly used to study EHR use, but only by certain clinical roles. Although poorly validated and variously defined, both vendor- and investigator-derived measures of EHR time are widely reported.
The number of studies using event logs to observe EHR use continues to grow, but with inconsistent measure definitions and significant differences between studies that employ vendor-derived and investigator-derived measures.
This Viewpoint considers the barriers to efficient medical record keeping and suggests ways the systems can be studied to maximize their promise and minimize the undue burden many physicians shoulder.