Allergy documentation is frequently inconsistent and incomplete. The impact of this variability on subsequent treatment is not well described.
To determine how allergy documentation affects subsequent antibiotic choice.
Retrospective cohort study.
232,616 adult patients seen by 199 primary care providers (PCPs) between January 1, 2009 and January 1, 2014 at an academic medical system.
Inter-physician variation in beta-lactam allergy documentation; antibiotic treatment following beta-lactam allergy documentation.
15.6% of patients had a reported beta-lactam allergy. Of those patients, 39.8% had a specific allergen identified and 22.7% had allergic reaction characteristics documented. Variation between PCPs was greater than would be expected by chance (all p<0.001) in the percentage of their patients with a documented beta-lactam allergy (7.9% to 24.8%), identification of a specific allergen (e.g. amoxicillin as opposed to "penicillins") (24.0% to 58.2%), and documentation of the reaction characteristics (5.4% to 51.9%). After beta-lactam allergy documentation, patients were less likely to receive penicillins (relative risk [RR] 0.16; 95% confidence interval [CI], 0.15-0.17) and cephalosporins (RR 0.28; 95% CI, 0.27-0.30) and more likely to receive fluoroquinolones (RR 1.5; 95% CI, 1.5-1.6), clindamycin (RR 3.8; 95% CI, 3.6-4.0), and vancomycin (RR 5.0; 95% CI, 4.3-5.8). Among patients with beta-lactam allergy, rechallenge was more likely when a specific allergen was identified (RR 1.6; 95% CI, 1.5-1.8) and when reaction characteristics were documented (RR 2.0; 95% CI, 1.8-2.2).
Provider documentation of beta-lactam allergy is highly variable, and details of the allergy are infrequently documented. Classification of a patient as beta-lactam allergic and incomplete documentation regarding the details of the allergy lead to beta-lactam avoidance and use of other antimicrobial agents, behaviors that may adversely impact care quality and cost.
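The relative risks reported above reduce to simple 2x2 arithmetic. A minimal sketch with hypothetical counts (not the study's data), using the standard log-scale Wald interval:

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """RR of an outcome in exposed (a events, b non-events) versus
    unexposed (c events, d non-events), with a log-scale Wald 95% CI."""
    p1, p0 = a / (a + b), c / (c + d)
    rr = p1 / p0
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts (not the study's data): 40/1000 allergy-labeled vs
# 80/1000 unlabeled patients received a penicillin
rr, lo, hi = relative_risk(40, 960, 80, 920)
```

An RR below 1 with a CI excluding 1, as for penicillins and cephalosporins above, indicates avoidance of the drug class after allergy documentation.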
Abstract
Background
We compared the efficacy of the antiviral agent, remdesivir, versus standard-of-care treatment in adults with severe coronavirus disease 2019 (COVID-19) using data from a phase 3 remdesivir trial and a retrospective cohort of patients with severe COVID-19 treated with standard of care.
Methods
GS-US-540–5773 is an ongoing phase 3, randomized, open-label trial comparing two courses of remdesivir (remdesivir-cohort). GS-US-540–5807 is an ongoing real-world, retrospective cohort study of clinical outcomes in patients receiving standard-of-care treatment (non-remdesivir-cohort). Inclusion criteria were similar between studies: patients had confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, were hospitalized, had oxygen saturation ≤94% on room air or required supplemental oxygen, and had pulmonary infiltrates. Stabilized inverse probability of treatment weighted multivariable logistic regression was used to estimate the treatment effect of remdesivir versus standard of care. The primary endpoint was the proportion of patients with recovery on day 14, dichotomized from a 7-point clinical status ordinal scale. A key secondary endpoint was mortality.
Results
After the inverse probability of treatment weighting procedure, 312 and 818 patients were included in the remdesivir- and non-remdesivir-cohorts, respectively. At day 14, 74.4% of patients in the remdesivir-cohort had recovered versus 59.0% in the non-remdesivir-cohort (adjusted odds ratio [aOR] 2.03; 95% confidence interval [CI], 1.34–3.08; P < .001). At day 14, 7.6% of patients in the remdesivir-cohort had died versus 12.5% in the non-remdesivir-cohort (aOR 0.38; 95% CI, 0.22–0.68; P = .001).
Conclusions
In this comparative analysis, by day 14, remdesivir was associated with significantly greater recovery and 62% reduced odds of death versus standard-of-care treatment in patients with severe COVID-19.
Clinical Trials Registration
NCT04292899 and EUPAS34303.
In this comparative analysis, remdesivir was associated with significantly lower mortality and higher recovery than standard-of-care treatment without remdesivir in patients with severe COVID-19. Ongoing studies will further determine the utility of remdesivir for the treatment of severe COVID-19.
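The stabilized inverse-probability-of-treatment weighting used in the comparative analysis above can be sketched as follows; the treatment indicators and propensity scores here are toy values, not study data:

```python
import numpy as np

def stabilized_iptw(treated, propensity):
    """Stabilized IPTW weights: P(T=1)/e(x) for treated patients,
    P(T=0)/(1 - e(x)) for untreated, where e(x) is the propensity score."""
    treated = np.asarray(treated, dtype=bool)
    e = np.asarray(propensity, dtype=float)
    p_treated = treated.mean()          # marginal probability of treatment
    return np.where(treated, p_treated / e, (1 - p_treated) / (1 - e))

# Toy cohort: three treated, three untreated, with assumed propensity scores
weights = stabilized_iptw([1, 1, 1, 0, 0, 0], [0.8, 0.6, 0.5, 0.4, 0.3, 0.2])
```

The weights are then supplied to a multivariable logistic regression of the outcome on treatment; stabilization keeps the weighted sample size close to the actual one, which is why the weighted cohorts above (312 and 818) resemble the raw cohort sizes.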
Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review.
Retrospective observational study.
Six hospitals from three health systems in Illinois.
Adult admissions with blood culture or antibiotic orders, or with Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected.
None.
The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission.
Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
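The sensitivity and specificity figures above come from comparing each criterion's flags against the chart-review gold standard; a minimal sketch with toy labels:

```python
def sens_spec(flags, gold):
    """Sensitivity and specificity of a criterion's flags against a
    chart-review gold standard (1 = infected, 0 = not infected)."""
    tp = sum(1 for f, g in zip(flags, gold) if f and g)          # true positives
    fn = sum(1 for f, g in zip(flags, gold) if not f and g)      # missed infections
    tn = sum(1 for f, g in zip(flags, gold) if not f and not g)  # true negatives
    fp = sum(1 for f, g in zip(flags, gold) if f and not g)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: a criterion's infection flags vs the chart-review result
sensitivity, specificity = sens_spec([1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0])
```

The trade-off the study reports, Sepsis-3 most sensitive but Rhee most specific, is exactly the tension between the fn and fp terms here.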
To determine the impact of a machine learning early warning risk score, electronic Cardiac Arrest Risk Triage (eCART), on mortality for elevated-risk adult inpatients.
A pragmatic pre- and post-intervention study conducted over the same 10-month period in 2 consecutive years.
Four-hospital community-academic health system.
All adult patients admitted to a medical-surgical ward.
During the baseline period, clinicians were blinded to eCART scores. During the intervention period, scores were presented to providers. Scores at or above the 95th percentile were designated high risk, prompting a physician assessment for ICU admission. Scores between the 89th and 95th percentiles were designated intermediate risk, triggering a nurse-directed workflow that included measuring vital signs every 2 hours and contacting a physician to review the treatment plan.
The primary outcome was all-cause in-hospital mortality. Secondary measures included vital sign assessment within 2 hours, ICU transfer rate, and time to ICU transfer. A total of 60,261 patients were admitted during the study period, of which 6,681 (11.1%) met inclusion criteria (baseline period n = 3,191; intervention period n = 3,490). The intervention period was associated with a significant decrease in hospital mortality for the main cohort (8.8% vs 13.9%; p < 0.0001; adjusted odds ratio [OR], 0.60; 95% CI, 0.52-0.71). A significant decrease in mortality was also seen for the average-risk cohort not subject to the intervention (0.49% vs 0.26%; p < 0.05; adjusted OR, 0.53; 95% CI, 0.41-0.74). In subgroup analysis, the benefit was seen in both high-risk (17.9% vs 23.9%; p = 0.001) and intermediate-risk (2.0% vs 4.0%; p = 0.005) patients. The intervention period was also associated with a significant increase in ICU transfers, a decrease in time to ICU transfer, and an increase in vital sign reassessment within 2 hours.
Implementation of a machine learning early warning score-driven protocol was associated with reduced in-hospital mortality, likely driven by earlier and more frequent ICU transfer.
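The eCART workflow above keys off score percentiles (at or above the 95th percentile high risk, 89th to 95th intermediate). A small sketch of that tiering logic, with a stand-in score distribution rather than real eCART output:

```python
import numpy as np

def risk_tier(population_scores, score):
    """Tier a score against the population distribution: >= 95th percentile
    is 'high', 89th-95th is 'intermediate', below 89th is 'average'.
    (Cutoffs follow the study's workflow; the scores here are stand-ins.)"""
    p89, p95 = np.percentile(population_scores, [89, 95])
    if score >= p95:
        return "high"
    if score >= p89:
        return "intermediate"
    return "average"

population = np.arange(100)  # stand-in for the distribution of eCART scores
```

Each tier then maps to an escalation path: high risk to physician assessment for ICU admission, intermediate risk to the nurse-directed workflow.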
Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays.
Retrospective analysis of multicenter inpatient data.
Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017).
All patients admitted through the emergency department who met clinical criteria for infection.
None.
Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03).
Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
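A per-hour odds ratio like those above compounds multiplicatively under the usual logit model that is linear in delay hours. An illustrative sketch (the 10% baseline risk is hypothetical, not from the study):

```python
def cumulative_or(per_hour_or, hours):
    """Under a logit model linear in delay hours, the odds ratio for an
    h-hour delay versus none is the per-hour OR raised to the power h."""
    return per_hour_or ** hours

def apply_or(baseline_risk, odds_ratio):
    """Apply an odds ratio to a baseline risk (convert to odds and back)."""
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

six_hour_or = cumulative_or(1.04, 6)   # six hours of order delay at OR 1.04/hr
risk = apply_or(0.10, six_hour_or)     # hypothetical 10% baseline mortality
```

This is why a seemingly small per-hour OR of 1.04 still matters clinically: over a multi-hour delay the cumulative odds shift becomes substantial, especially in the high-risk subgroup with OR 1.07 per hour.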
Empiric antibiotic prescribing can be supported by guidelines and/or local antibiograms, but these have limitations. We sought to use data from a comprehensive electronic health record and statistical learning to develop predictive models for individual antibiotics that incorporate patient- and hospital-specific factors. This paper reports on the development and validation of these models with a large retrospective cohort. This was a retrospective cohort study including hospitalized patients with positive urine cultures in the first 48 h of hospitalization at a 1,500-bed tertiary-care hospital over a 4.5-year period. All first urine cultures with susceptibilities were included. Statistical learning techniques, including penalized logistic regression, were used to create predictive models for cefazolin, ceftriaxone, ciprofloxacin, cefepime, and piperacillin-tazobactam. These were validated on a held-out cohort. The final data set used for analysis included 6,366 patients. Final model covariates included demographics, comorbidity score, recent antibiotic use, recent antimicrobial resistance, and antibiotic allergies. Models had acceptable to good discrimination in the training data set and acceptable performance in the validation data set, with point estimates for the area under the receiver operating characteristic curve (AUC) ranging from 0.65 for ceftriaxone to 0.69 for cefazolin. All models had excellent calibration. We used electronic health record data to create predictive models to estimate antibiotic susceptibilities for urinary tract infections in hospitalized patients. Our models had acceptable performance in a held-out validation cohort.
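The modeling recipe described above (penalized logistic regression, held-out validation, AUC) can be sketched end to end. The data here are synthetic, and the from-scratch ridge-penalized fit stands in for whatever software the authors actually used:

```python
import numpy as np

def fit_penalized_logit(X, y, lam=1.0, lr=0.1, steps=2000):
    """L2-penalized (ridge) logistic regression fit by batch gradient descent."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (p - y) / len(y)
        grad[1:] += lam * w[1:] / len(y)        # penalize all but the intercept
        w -= lr * grad
    return w

def predict(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))

def auc(y, scores):
    """Area under the ROC curve via the Mann-Whitney rank formulation."""
    pos, neg = scores[y == 1], scores[y == 0]
    wins = sum((p > neg).sum() + 0.5 * (p == neg).sum() for p in pos)
    return wins / (len(pos) * len(neg))

# Synthetic cohort: outcome depends on two covariates; train on the first
# half ("development"), evaluate on the held-out second half ("validation").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = fit_penalized_logit(X[:100], y[:100])
val_auc = auc(y[100:], predict(w, X[100:]))
```

The penalty shrinks coefficients toward zero, which is what makes models with many covariates (demographics, comorbidity score, prior resistance, allergies) generalize to the held-out cohort rather than overfitting the training data.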
The global healthcare burden of COVID-19 continues to rise. There is currently limited information regarding the disease progression and the need for hospitalizations in patients who present to the Emergency Department (ED) with minimal or no symptoms.
This study identifies bounceback rates and timeframes for patients who return to the ED due to COVID-19 after initial discharge on the date of testing.
Using the NorthShore University Health System's (NSUHS) Enterprise Data Warehouse (EDW), we conducted a retrospective cohort analysis of patients who tested positive for COVID-19 and were discharged home on the date of testing. A one-month follow-up period was included to ensure capture of disease progression.
Of 1883 positive cases with initially mild symptoms, 14.6% returned to the ED for complaints related to COVID-19. 56.9% of the mildly symptomatic bounceback patients were discharged on the return visit while 39.5% were admitted to the floor and 3.6% to the ICU. Of the 1120 positive cases with no initial symptoms, only four returned to the ED (0.26%) and only one patient was admitted. Median initial testing occurred on day 3 (2–5.6) of illness, and median ED bounceback occurred on day 9 (6.3–12.7). Our statistical model was unable to identify risk factors for ED bouncebacks.
COVID-19 patients diagnosed with mild symptoms on initial presentation have a 14.6% rate of bounceback due to progression of illness.
OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study is to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record.
DESIGN: Retrospective analysis of large, multicenter inpatient data.
SETTING: Two academic tertiary medical centers between the years 2007 and 2018.
SUBJECTS: All hospitalized patients who received a blood culture during hospitalization.
INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts; the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%.
MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78; 95% CI, 0.77–0.78) than the logistic regression model (0.73; 95% CI, 0.72–0.74) (p < 0.001). The model identified a high-risk group with over 30 times the occurrence rate of bacteremia seen in the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cutoff, the model identified bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve, 0.88; 95% CI, 0.86–0.90). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than that of the low-risk group (28.2% vs 0.4%; p < 0.001).
CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
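The low-/high-risk grouping above amounts to applying two probability cutoffs to model output; a minimal sketch with toy predictions (the cutoff values and data are hypothetical):

```python
def risk_groups(probs, events, low_cut, high_cut):
    """Split predictions into low/medium/high-risk groups by two probability
    cutoffs; report the event rate per group and the sensitivity of treating
    everything above the low-risk cutoff as needing follow-up."""
    groups = {"low": [], "medium": [], "high": []}
    for p, e in zip(probs, events):
        key = "low" if p < low_cut else ("high" if p >= high_cut else "medium")
        groups[key].append(e)
    rates = {k: (sum(v) / len(v) if v else 0.0) for k, v in groups.items()}
    caught = sum(e for p, e in zip(probs, events) if p >= low_cut)
    return rates, caught / sum(events)

# Toy predicted probabilities and observed bacteremia flags; cutoffs hypothetical
rates, sensitivity = risk_groups(
    probs=[0.01, 0.02, 0.20, 0.30, 0.80, 0.90],
    events=[0, 0, 0, 1, 1, 1],
    low_cut=0.05, high_cut=0.50,
)
```

A clinically useful low-risk cutoff is one where the sensitivity of the "not low-risk" rule stays very high (98.7% for bacteremia above), so ruling out cultures in the low-risk group misses almost no true cases.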
Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized.
To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer.
In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020.
Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality with area under the receiver operating characteristic curves (AUCs). Stratified analyses were also conducted based on suspected infection.
The study included 773 477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416 605 [53.9%] women) and 713 786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384 830 [53.9%] women). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals, either by reducing the percentage of at-risk patients who need to be screened by 5% to 20% or by increasing the percentage of adverse outcomes identified by 3% to 25%.
In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, determining the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
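The NEWS compared above is a points-based chart over routine vital signs. An illustrative re-implementation follows; the point bands are reproduced from memory of the published chart and should be verified against the official Royal College of Physicians chart before any real use:

```python
import math

def news_score(resp_rate, spo2, temp_c, sys_bp, heart_rate, on_oxygen, alert):
    """Illustrative NEWS calculator; bands from memory of the published
    chart -- confirm against the official RCP chart before any real use."""
    INF = math.inf

    def band(value, bands):
        for upper, points in bands:      # bands: (inclusive upper bound, points)
            if value <= upper:
                return points

    score = band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2), (INF, 3)])
    score += band(spo2, [(91, 3), (93, 2), (95, 1), (INF, 0)])
    score += band(temp_c, [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (INF, 2)])
    score += band(sys_bp, [(90, 3), (100, 2), (110, 1), (219, 0), (INF, 3)])
    score += band(heart_rate, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (INF, 3)])
    score += 2 if on_oxygen else 0       # supplemental oxygen
    score += 0 if alert else 3           # AVPU: anything below Alert scores 3
    return score

# Entirely normal observations score 0
normal = news_score(resp_rate=16, spo2=98, temp_c=37.0, sys_bp=120,
                    heart_rate=70, on_oxygen=False, alert=True)
```

The study's decision thresholds correspond to cutoffs on this total score; the screening-burden versus outcomes-identified trade-off reported above comes from moving that cutoff up or down.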