Background
We sought to develop and prospectively validate a dynamic model that incorporates changes in biomarkers to predict rapid clinical deterioration in patients hospitalized for COVID‐19.
Methods
We established a retrospective cohort of hospitalized patients aged ≥18 years with laboratory‐confirmed COVID‐19 using electronic health records (EHR) from a large integrated care delivery network in Massachusetts, including >40 facilities, from March to November 2020. A total of 71 factors, including time‐varying vital signs and laboratory findings during hospitalization, were screened. We used elastic net regression and tree‐based scan statistics for variable selection to predict rapid deterioration, defined as progression by two levels of a published severity scale within the next 24 h. The development cohort included the first 70% of patients identified chronologically in calendar time; the remaining 30% served as the validation cohort. A cut‐off point was estimated to alert clinicians to a high risk of imminent clinical deterioration.
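As a rough illustration of this pipeline, the sketch below pairs an elastic‐net logistic model with a chronological 70/30 split and a PPV‐targeted alert threshold. The file name, column names, hyperparameters, and target PPV are assumptions for demonstration, not the authors' implementation, and the tree‐based scan‐statistic step is omitted.

```python
# Minimal sketch, assuming scikit-learn and a per-patient-day table with a
# binary "deteriorated_24h" outcome. All names and settings are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("covid_inpatient_days.csv").sort_values("admission_date")
features = [c for c in df.columns
            if c not in ("patient_id", "admission_date", "deteriorated_24h")]

# Chronological split: the first 70% of patients develop the model, the rest validate it.
ids = df["patient_id"].drop_duplicates()
dev_ids = set(ids.iloc[: int(0.7 * len(ids))])
dev = df[df["patient_id"].isin(dev_ids)]

scaler = StandardScaler().fit(dev[features])
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.1, max_iter=5000)
model.fit(scaler.transform(dev[features]), dev["deteriorated_24h"])
selected = [f for f, b in zip(features, model.coef_[0]) if b != 0]  # survives the L1 penalty

# Lowest cut-off whose positive predictive value still reaches the target (e.g., 0.83).
p = model.predict_proba(scaler.transform(dev[features]))[:, 1]
y = dev["deteriorated_24h"].to_numpy()
cutoff = None
for c in np.linspace(0.99, 0.01, 99):            # scan thresholds from high to low
    flagged = p >= c
    if flagged.any() and y[flagged].mean() >= 0.83:
        cutoff = c                                # lowest threshold meeting the target PPV
print(len(selected), "variables selected; alert cut-off:", cutoff)
```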
Results
Overall, 3706 patients (2587 in the development and 1119 in the validation cohort) met the eligibility criteria with a median of 6 days of follow‐up. Twenty‐four variables were selected in the final model, including 16 dynamic changes of laboratory results or vital signs. Area under the ROC curve was 0.81 (95% CI, 0.79–0.82) in the development set and 0.74 (95% CI, 0.71–0.78) in the validation set. The model was well calibrated (slope = 0.84 and intercept = −0.07 on the calibration plot in the validation set). The estimated cut‐off point, with a positive predictive value of 83%, was 0.78.
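For readers unfamiliar with the calibration metrics quoted here: the slope and intercept are conventionally obtained by logistic recalibration, regressing observed outcomes on the logit of predicted risk (ideal values: slope 1, intercept 0). A minimal sketch with simulated stand‐in data, not study data:

```python
# Logistic recalibration: regress the observed outcome on the logit of predicted risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
p_val = rng.uniform(0.01, 0.99, 500)        # stand-in predicted risks (validation set)
y_val = rng.binomial(1, p_val)              # stand-in observed outcomes

logit_p = np.log(p_val / (1 - p_val))       # linear predictor
fit = sm.Logit(y_val, sm.add_constant(logit_p)).fit(disp=0)
intercept, slope = fit.params               # perfect calibration: 0 and 1
print(intercept, slope)
```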
Conclusions
Our prospectively validated dynamic prognostic model demonstrated temporal generalizability in a rapidly evolving pandemic and can be used to inform day‐to‐day treatment and resource allocation decisions based on dynamic changes in biophysiological factors.
Background
Patients with Alzheimer's Disease and Related Dementias (ADRD) undergoing inpatient procedures represent a population at elevated risk for adverse outcomes, including postoperative complications, mortality, and discharge to a higher level of care. Outcomes may be particularly poor in patients with ADRD undergoing high‐risk procedures. We sought to determine traditional (e.g., 30‐day mortality) and patient‐centered (e.g., discharge disposition) outcomes in patients with ADRD undergoing high‐risk inpatient procedures.
Methods
This retrospective cohort study analyzed electronic health records linked to fee‐for‐service Medicare claims data at a tertiary care academic health system. All patients from a large multi‐hospital health system undergoing high‐risk inpatient procedures from October 1, 2015 to September 30, 2017 with continuous Medicare Parts A and B enrollment in the 12 months prior to and 90 days following the procedure were included.
Results
This study included 6779 patients, of whom 536 (7.9%) had ADRD. A multivariable analysis of outcomes demonstrated higher risks for postoperative complications (OR 1.49, 95% CI 1.23–1.81) and 90‐day mortality (OR 1.44, 95% CI 1.09–1.91) in patients with ADRD compared to those without. Patients with ADRD were more likely to be discharged to a higher level of care (OR 1.70, 95% CI 1.32–2.18), and only 37.3% of patients admitted from home were discharged to home.
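Adjusted odds ratios of this form come from a multivariable logistic model; one way to produce such estimates is sketched below, with a hypothetical data file and covariate list rather than the study's actual variables.

```python
# Illustrative multivariable logistic regression reporting ORs with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("highrisk_procedures.csv")
covars = ["adrd", "age", "female", "comorbidity_score"]   # hypothetical adjusters
fit = sm.Logit(df["complication"], sm.add_constant(df[covars])).fit(disp=0)

or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.loc["adrd"])   # adjusted OR for ADRD vs. no ADRD
```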
Conclusions
Compared to those without ADRD, patients living with ADRD undergoing high‐risk procedures have worse traditional and patient‐centered outcomes, including increased risks of 90‐day mortality, postoperative complications, longer hospital lengths of stay, and discharge to a higher level of care. These data may be used by patients, their surrogates, and their physicians to help align surgical decision‐making with health care goals.
As the scientific research community, along with healthcare professionals and decision makers around the world, fights tirelessly against the coronavirus disease 2019 (COVID‐19) pandemic, the need for comparative effectiveness research (CER) on preventive and therapeutic interventions for COVID‐19 is immense. Randomized controlled trials markedly under‐represent the frail and complex patients seen in routine care, and they do not typically have data on long‐term treatment effects. The increasing availability of electronic health records (EHRs) for clinical research offers the opportunity to generate timely real‐world evidence reflective of routine care for optimal management of COVID‐19. However, there are many potential threats to the validity of CER based on EHR data that are not originally generated for research purposes. To ensure unbiased and robust results, we need high‐quality healthcare databases, rigorous study designs, and proper implementation of appropriate statistical methods. We aimed to describe opportunities and challenges in EHR‐based CER for COVID‐19‐related questions and to introduce best practices in pharmacoepidemiology to minimize potential biases. We structured our discussion into the following topics: (1) study population identification based on exposure status; (2) ascertainment of outcomes; (3) common biases and potential solutions; and (4) data operational challenges specific to COVID‐19 CER using EHRs. We provide structured guidance for the proper conduct and appraisal of drug and vaccine effectiveness and safety research using EHR data for the pandemic. This paper is endorsed by the International Society for Pharmacoepidemiology (ISPE).
Background
Many patients with cirrhosis have concurrent nonvalvular atrial fibrillation (NVAF). Data are lacking regarding recent oral anticoagulant (OAC) usage trends among US patients with cirrhosis and NVAF.
Methods and Results
Using MarketScan claims data (2012–2019), we identified patients with cirrhosis and NVAF eligible for OACs (CHA₂DS₂‐VASc score ≥2 in men or ≥3 in women). We calculated the yearly proportion of patients prescribed a direct OAC (DOAC), warfarin, or no OAC. We stratified by high-risk features (decompensated cirrhosis, thrombocytopenia, coagulopathy, chronic kidney disease, or end-stage renal disease). Among 32 487 patients (mean age 71.6 years, 38.5% women, 15.1% with decompensated cirrhosis, mean CHA₂DS₂‐VASc score 4.2), 44.6% used OACs within 180 days of NVAF diagnosis, including DOACs (20.2%) or warfarin (24.4%). Compared with OAC nonusers, OAC users were less likely to have decompensated cirrhosis (18.6% versus 10.7%), thrombocytopenia (19.5% versus 12.5%), or chronic kidney disease/end-stage renal disease (15.5% versus 14.0%). Between 2012 and 2019, warfarin use decreased by 21.0 percentage points (from 32.0% to 11.0%), whereas DOAC use increased by 30.6 percentage points (from 7.4% to 38.0%); among all DOACs between 2012 and 2019, apixaban was the most commonly prescribed (46.1%). Warfarin use decreased and DOAC use increased in all subgroups, including compensated and decompensated cirrhosis, thrombocytopenia, coagulopathy, chronic kidney disease/end-stage renal disease, and across CHA₂DS₂‐VASc categories. Among OAC users (2012–2019), DOAC use increased by 58.9 percentage points (from 18.7% to 77.6%). Among DOAC users, the greatest proportional increase was with apixaban (61.2%; P<0.001).
Conclusions
Among US patients with cirrhosis and NVAF, DOAC use has increased substantially and surpassed warfarin, including in decompensated cirrhosis. Nevertheless, >55% of patients remain untreated, underscoring the need for clearer treatment guidance.
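For reference, the CHA₂DS₂‐VASc eligibility rule used above (score ≥2 in men, ≥3 in women) tallies points as in the following sketch; the function signature and field names are hypothetical.

```python
# CHA2DS2-VASc tally: 1 point each for CHF, hypertension, diabetes, vascular
# disease, female sex, and age 65-74; 2 points each for age >=75 and prior
# stroke/TIA/thromboembolism.
def cha2ds2_vasc(age, female, chf, htn, diabetes, stroke_tia, vascular):
    score = chf + htn + diabetes + vascular + female
    score += 2 if stroke_tia else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    return score

def oac_eligible(age, female, **risk_factors):
    # Eligibility threshold from the study: >=2 for men, >=3 for women.
    return cha2ds2_vasc(age, female, **risk_factors) >= (3 if female else 2)

print(oac_eligible(72, female=True, chf=False, htn=True,
                   diabetes=True, stroke_tia=False, vascular=False))  # True (score 4)
```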
We aimed to use setting-appropriate comparisons to estimate the effects of different gastrointestinal (GI) prophylaxis pharmacotherapies for patients hospitalized with COVID-19, and setting-inappropriate comparisons to illustrate how improper design choices could result in biased results.
We identified 3,804 hospitalized patients aged ≥18 years with COVID-19 from March to November 2020. We compared the effects of different gastroprotective agents on clinical improvement of COVID-19, as measured by a published severity scale. We used propensity score–based fine-stratification for confounding adjustment. Based on guidelines, we prespecified comparisons between agents in clinical equipoise, as well as deliberately inappropriate comparisons of users vs. nonusers of GI prophylaxis in the intensive care unit (ICU).
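The fine‐stratification step can be sketched as follows; the number of strata, file name, covariates, and exposure column are illustrative assumptions, and in practice strata lacking treated or untreated patients would need trimming or pooling.

```python
# Propensity score fine-stratification: strata are cut on the PS distribution
# of the treated, and untreated patients are reweighted within each stratum.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("gi_prophylaxis_cohort.csv")
covars = ["age", "female", "icu", "baseline_severity"]          # hypothetical confounders
ps = (LogisticRegression(max_iter=1000)
      .fit(df[covars], df["famotidine"])
      .predict_proba(df[covars])[:, 1])

# 50 fine strata based on the treated patients' PS distribution.
edges = np.quantile(ps[df["famotidine"] == 1], np.linspace(0, 1, 51))
df["stratum"] = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, 49)

# Standard fine-stratification weights: treated get 1; untreated are weighted
# to match the treated distribution across strata.
n_t = df[df["famotidine"] == 1].groupby("stratum").size()
n_c = df[df["famotidine"] == 0].groupby("stratum").size()
w_c = (n_t / n_t.sum()) / (n_c / n_c.sum())
df["weight"] = np.where(df["famotidine"] == 1, 1.0, df["stratum"].map(w_c))
```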
No benefit was detected when comparing oral famotidine to omeprazole in patients treated in the general ward or ICUs. We also found no associations when comparing intravenous famotidine to intravenous pantoprazole. For inappropriate comparisons of users vs. nonusers in the ICU, the probability of improvement was reduced by 32%–45% in famotidine users and 21%–48% in omeprazole or pantoprazole users.
We found no evidence that GI prophylaxis improved outcomes for patients hospitalized with COVID-19 in setting-appropriate comparisons. An improper comparator choice can lead to spurious associations in critically ill patients.
Assessment of activities of daily living (ADLs) and instrumental ADLs (iADLs) is key to determining the severity of dementia and care needs among older adults. However, such information is often only documented in free-text clinical notes within the electronic health record and can be challenging to find.
This study aims to develop and validate machine learning models to determine the status of ADL and iADL impairments based on clinical notes.
This cross-sectional study leveraged electronic health record clinical notes from Mass General Brigham's Research Patient Data Repository linked with Medicare fee-for-service claims data from 2007 to 2017 to identify individuals aged 65 years or older with at least 1 diagnosis of dementia. Notes for encounters both 180 days before and after the first date of dementia diagnosis were randomly sampled. Models were trained and validated using note sentences filtered by expert-curated keywords (filtered cohort) and further evaluated using unfiltered sentences (unfiltered cohort). The model's performance was compared using area under the receiver operating characteristic curve and area under the precision-recall curve (AUPRC).
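As one concrete baseline of the kind evaluated in this study, the sketch below trains a random forest on TF-IDF sentence features and reports AUROC and AUPRC; the featurization, file, and column names are assumptions, not the study's exact setup.

```python
# Note-sentence classifier evaluated with AUROC and AUPRC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("adl_sentences.csv")           # columns: sentence, adl_impaired
X_tr, X_te, y_tr, y_te = train_test_split(
    df["sentence"], df["adl_impaired"], test_size=0.2, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(vec.fit_transform(X_tr), y_tr)

p = clf.predict_proba(vec.transform(X_te))[:, 1]
print("AUROC:", roc_auc_score(y_te, p))
print("AUPRC:", average_precision_score(y_te, p))  # area under precision-recall
```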
The study included 10,000 key-term-filtered sentences representing 441 people (n=283, 64.2% women; mean age 82.7, SD 7.9 years) and 1000 unfiltered sentences representing 80 people (n=56, 70% women; mean age 82.8, SD 7.5 years). Area under the receiver operating characteristic curve was high for the best-performing ADL and iADL models on both cohorts (>0.97). For ADL impairment identification, the random forest model achieved the best AUPRC (0.89, 95% CI 0.86-0.91) on the filtered cohort; the support vector machine model achieved the highest AUPRC (0.82, 95% CI 0.75-0.89) for the unfiltered cohort. For iADL impairment, the Bio+Clinical bidirectional encoder representations from transformers (BERT) model had the highest AUPRC (filtered: 0.76, 95% CI 0.68-0.82; unfiltered: 0.58, 95% CI 0.001-1.0). Compared with a keyword-search approach on the unfiltered cohort, machine learning reduced false-positive rates from 4.5% to 0.2% for ADL and 1.8% to 0.1% for iADL.
In this study, we demonstrated the ability of machine learning models to accurately identify ADL and iADL impairment based on free-text clinical notes, which could be useful in determining the severity of dementia.
Electronic health record (EHR) discontinuity (i.e., receiving care outside of the study EHR system) can lead to information bias in EHR‐based real‐world evidence (RWE) studies. An algorithm has been previously developed to identify patients with high EHR continuity. We sought to assess whether applying this algorithm to patient selection can reduce bias caused by data discontinuity in four RWE examples. Among Medicare beneficiaries aged ≥65 years from 2007 to 2014, we established four cohorts assessing drug effects on short‐term or long‐term outcomes. We linked claims data with two US EHR systems and calculated the percent bias (%bias) of the multivariable‐adjusted effect estimates based on EHR data alone vs. linked EHR‐claims data, because the linked data capture medical information recorded outside of the study EHR. Our study cohort included 77,288 patients in system 1 and 60,309 in system 2. We found that the subcohort in the lowest quartile of EHR continuity captured 72–81% of the short‐term but only 21–31% of the long‐term outcome events, leading to a %bias of 6–99% for the short‐term and 62–112% for the long‐term outcome examples. This trend appeared to be more pronounced in the example using a nonuser comparison rather than an active comparator. We did not find significant treatment effect heterogeneity by EHR continuity for most subgroups across the empirical examples. In EHR‐based RWE studies, investigators may consider excluding patients with low algorithm‐predicted EHR continuity, as the EHR data capture relatively few of their actual outcomes and treatment effect estimates in these patients may be unreliable.
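One common definition (used here as an assumption) computes %bias on the log scale of the effect estimate; a minimal sketch with illustrative hazard ratios, not study results:

```python
# Percent bias of an EHR-only estimate relative to the linked EHR-claims
# benchmark, on the log scale. Input values are made up for illustration.
import math

def pct_bias(est_ehr_only, est_linked):
    return 100 * (math.log(est_ehr_only) - math.log(est_linked)) / math.log(est_linked)

print(pct_bias(1.40, 1.25))  # e.g., hypothetical HRs from the two data sources
```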
Background & Aims
We investigated the effect of different prevention strategies against upper gastrointestinal bleeding (UGIB) in the general population and in patients on antithrombotic or anti-inflammatory treatments.
Methods
We performed a population-based, nested case-control study using The Health Improvement Network UK primary care database. From 2000 to 2007, we identified 2049 cases of UGIB and 20,000 controls. The relative risk (RR) of UGIB associated with various gastroprotective agents was estimated by comparing current use (defined as use within 30 days of the index date) with nonuse in the previous year, using multivariate logistic regression.
Results
The adjusted RR of UGIB associated with current use of proton pump inhibitors (PPIs) for more than 1 month was 0.58 (95% confidence interval [CI], 0.42–0.79) among patients who received low-dose acetylsalicylic acid (ASA), 0.18 (95% CI, 0.04–0.79) for clopidogrel, 0.17 (95% CI, 0.04–0.76) for dual antiplatelet therapy, 0.48 (95% CI, 0.22–1.04) for warfarin, and 0.51 (95% CI, 0.34–0.78) for nonsteroidal anti-inflammatory drugs. The corresponding estimates for therapy with histamine-2–receptor antagonists (H2RAs) were more unstable but tended to be of a smaller magnitude. In the general population, PPI use was associated with a reduced risk of UGIB compared with nonuse (RR, 0.80; 95% CI, 0.68–0.94); no such reduction was observed for H2RAs or nitrates.
Conclusions
PPI use was associated with a lower risk of UGIB in the general population and in patients on antithrombotic or anti-inflammatory therapy compared with nonuse of PPIs. The reduction in risk of UGIB was smaller in H2RA users than in PPI users.
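The exposure windows described in the methods (current use within 30 days of the index date vs. nonuse in the previous year) can be operationalized as in this sketch; the file layout and column names are assumptions.

```python
# Classify each case/control by prescription recency relative to the index date.
import numpy as np
import pandas as pd

rx = pd.read_csv("gastroprotective_rx.csv", parse_dates=["rx_date"])   # person_id, rx_date
cc = pd.read_csv("ugib_case_control.csv", parse_dates=["index_date"])  # person_id, index_date, case

m = cc.merge(rx, on="person_id", how="left")
days_before = (m["index_date"] - m["rx_date"]).dt.days
m["in_30d"] = days_before.between(0, 30)
m["in_365d"] = days_before.between(0, 365)

flags = (m.groupby(["person_id", "index_date", "case"])[["in_30d", "in_365d"]]
         .any().reset_index())
flags["exposure"] = np.select(
    [flags["in_30d"], ~flags["in_365d"]],
    ["current use", "nonuse"],
    default="past use",            # last use 31-365 days before the index date
)
```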
Purpose
Supplementing investigator‐specified variables with large numbers of empirically identified features that collectively serve as ‘proxies’ for unspecified or unmeasured factors can often improve confounding control in studies utilizing administrative healthcare databases. Consequently, there has been a recent focus on the development of data‐driven methods for high‐dimensional proxy confounder adjustment in pharmacoepidemiologic research. In this paper, we survey current approaches and recent advancements for high‐dimensional proxy confounder adjustment in healthcare database studies.
Methods
We discuss considerations underpinning three areas for high‐dimensional proxy confounder adjustment: (1) feature generation—transforming raw data into covariates (or features) to be used for proxy adjustment; (2) covariate prioritization, selection, and adjustment; and (3) diagnostic assessment. We discuss challenges and avenues of future development within each area.
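To make area (1) concrete, the sketch below turns raw codes into simple recurrence‐based proxy features (ever/sporadic/frequent indicators), in the spirit of the high‐dimensional propensity score approach; it simplifies that algorithm considerably, and the file and column names are assumptions.

```python
# Recurrence-based proxy feature generation from baseline codes.
import pandas as pd

codes = pd.read_csv("baseline_codes.csv")            # columns: person_id, code
counts = codes.groupby(["person_id", "code"]).size().unstack(fill_value=0)

features = {}
for code in counts.columns:
    c = counts[code]
    nonzero = c[c > 0]                               # recurrence among patients with the code
    features[f"{code}_ever"] = (c >= 1).astype(int)
    features[f"{code}_sporadic"] = (c >= nonzero.median()).astype(int)
    features[f"{code}_frequent"] = (c >= nonzero.quantile(0.75)).astype(int)

proxy_matrix = pd.DataFrame(features)                # candidate covariates for PS models
```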
Results
There is a large literature on methods for high‐dimensional confounder prioritization/selection, but relatively little has been written on best practices for feature generation and diagnostic assessment. Consequently, these areas are subject to particular limitations and open challenges.
Conclusions
There is a growing body of evidence showing that machine‐learning algorithms for high‐dimensional proxy‐confounder adjustment can supplement investigator‐specified variables to improve confounding control compared to adjustment based on investigator‐specified variables alone. However, more research is needed on best practices for feature generation and diagnostic assessment when applying methods for high‐dimensional proxy confounder adjustment in pharmacoepidemiologic studies.
Background
The bias implications of outcome misclassification arising from imperfect capture of mortality in claims-based studies are not well understood.
Methods and Results
We identified 2 cohorts of patients: (1) type 2 diabetes mellitus (n=8.6 million) and (2) heart failure (n=3.1 million), from Medicare claims (2012-2016). Within the 2 cohorts, mortality was identified from claims using the following approaches: (1) all-place all-cause mortality, (2) in-hospital all-cause mortality, (3) all-place cardiovascular mortality (based on diagnosis codes for a major cardiovascular event within 30 days of the death date), or (4) in-hospital cardiovascular mortality, and compared against National Death Index-identified mortality. Empirically estimated sensitivity and specificity based on observed values in the 2 cohorts were used to conduct Monte Carlo simulations for treatment effect estimation under differential and nondifferential misclassification scenarios. From the National Death Index, 1 544 805 deaths (549 996 [35.6%] cardiovascular deaths) in the type 2 diabetes mellitus cohort and 1 175 202 deaths (523 430 [44.5%] cardiovascular deaths) in the heart failure cohort were included. Sensitivity was 99.997% and 99.207% for the all-place all-cause mortality approach, whereas it was 27.71% and 33.71% for the in-hospital all-cause mortality approach in the type 2 diabetes mellitus and heart failure cohorts, respectively, with perfect positive predictive values. For all-place cardiovascular mortality, sensitivity was 52.01% in the type 2 diabetes mellitus cohort and 53.83% in the heart failure cohort, with positive predictive values of 49.98% and 54.45%, respectively. Simulations suggested a possibility for substantial bias in treatment effects.
Conclusions
Approaches to identify mortality from claims had variable performance compared with the National Death Index. Investigators should anticipate the potential for bias from outcome misclassification when using administrative claims to capture mortality.
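The simulation logic described above can be sketched as follows: true outcomes are generated under a known risk ratio, misclassified with a chosen sensitivity and specificity, and the observed effect is compared against the truth. All inputs are illustrative, not the study's estimates.

```python
# Monte Carlo bias analysis for nondifferential outcome misclassification.
import numpy as np

rng = np.random.default_rng(0)

def simulate_observed_rr(n=200_000, p0=0.05, true_rr=0.8,
                         sens=0.52, spec=0.999, reps=200):
    rrs = []
    for _ in range(reps):
        treated = rng.integers(0, 2, n)
        y = rng.binomial(1, np.where(treated == 1, p0 * true_rr, p0))
        # Same sens/spec in both arms => nondifferential misclassification.
        y_obs = np.where(y == 1, rng.binomial(1, sens, n), rng.binomial(1, 1 - spec, n))
        rrs.append(y_obs[treated == 1].mean() / y_obs[treated == 0].mean())
    return float(np.mean(rrs))

# With imperfect specificity the observed RR is pulled toward the null.
print(simulate_observed_rr())
```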