The Veterans Affairs Diabetes Trial previously showed that intensive glucose lowering, as compared with standard therapy, did not significantly reduce the rate of major cardiovascular events among 1791 military veterans (median follow-up, 5.6 years). We report the extended follow-up of the study participants.
After the conclusion of the clinical trial, we followed participants, using central databases to identify procedures, hospitalizations, and deaths (complete cohort, with follow-up data for 92.4% of participants). Most participants agreed to additional data collection by means of annual surveys and periodic chart reviews (survey cohort, with 77.7% follow-up). The primary outcome was the time to the first major cardiovascular event (heart attack, stroke, new or worsening congestive heart failure, amputation for ischemic gangrene, or cardiovascular-related death). Secondary outcomes were cardiovascular mortality and all-cause mortality.
The difference in glycated hemoglobin levels between the intensive-therapy group and the standard-therapy group averaged 1.5 percentage points during the trial (median level, 6.9% vs. 8.4%) and declined to 0.2 to 0.3 percentage points by 3 years after the trial ended. Over a median follow-up of 9.8 years, the intensive-therapy group had a significantly lower risk of the primary outcome than did the standard-therapy group (hazard ratio, 0.83; 95% confidence interval [CI], 0.70 to 0.99; P=0.04), with an absolute reduction in risk of 8.6 major cardiovascular events per 1000 person-years, but did not have reduced cardiovascular mortality (hazard ratio, 0.88; 95% CI, 0.64 to 1.20; P=0.42). No reduction in total mortality was evident (hazard ratio in the intensive-therapy group, 1.05; 95% CI, 0.89 to 1.25; P=0.54; median follow-up, 11.8 years).
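As a rough check on how this absolute effect translates into exposure per event averted, the risk difference can be inverted. This is illustrative arithmetic only, not part of the trial's analysis:

```python
# Illustrative arithmetic only: an absolute risk reduction of 8.6 major
# cardiovascular events per 1000 person-years implies that roughly
# 1000 / 8.6 person-years of intensive therapy correspond to one event
# averted.
arr_per_1000_person_years = 8.6
person_years_per_event_averted = 1000 / arr_per_1000_person_years
print(round(person_years_per_event_averted))  # prints 116
```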
After nearly 10 years of follow-up, patients with type 2 diabetes who had been randomly assigned to intensive glucose control for 5.6 years had 8.6 fewer major cardiovascular events per 1000 person-years than those assigned to standard therapy, but no improvement was seen in the rate of overall survival. (Funded by the VA Cooperative Studies Program and others; VADT ClinicalTrials.gov number, NCT00032487.).
Background: During in-hospital cardiac arrests, how long resuscitation attempts should be continued before termination of efforts is unknown. We investigated whether duration of resuscitation attempts varies between hospitals and whether patients at hospitals that attempt resuscitation for longer have higher survival rates than do those at hospitals with shorter durations of resuscitation efforts.
Methods: Between 2000 and 2008, we identified 64 339 patients with cardiac arrests at 435 US hospitals within the Get With The Guidelines—Resuscitation registry. For each hospital, we calculated the median duration of resuscitation before termination of efforts in non-survivors as a measure of the hospital's overall tendency for longer attempts. We used multilevel regression models to assess the association between the length of resuscitation attempts and risk-adjusted survival. Our primary endpoints were immediate survival with return of spontaneous circulation during cardiac arrest and survival to hospital discharge.
Findings: 31 198 of 64 339 (48·5%) patients achieved return of spontaneous circulation and 9912 (15·4%) survived to discharge. For patients achieving return of spontaneous circulation, the median duration of resuscitation was 12 min (IQR 6–21), compared with 20 min (IQR 14–30) for non-survivors. Compared with patients at hospitals in the quartile with the shortest median resuscitation attempts in non-survivors (16 min, IQR 15–17), those at hospitals in the quartile with the longest attempts (25 min, IQR 25–28) had a higher likelihood of return of spontaneous circulation (adjusted risk ratio 1·12, 95% CI 1·06–1·18; p<0·0001) and survival to discharge (adjusted risk ratio 1·12, 95% CI 1·02–1·23; p=0·021).
Interpretation: Duration of resuscitation attempts varies between hospitals.
Although we cannot define an optimum duration for resuscitation attempts on the basis of these observational data, our findings suggest that efforts to systematically increase the duration of resuscitation could improve survival in this high-risk population.
Funding: American Heart Association, Robert Wood Johnson Foundation Clinical Scholars Program, and the National Institutes of Health.
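The hospital-level exposure measure described in the Methods (each hospital's median resuscitation duration among non-survivors, with hospitals then grouped into quartiles) can be sketched as follows. The data and hospital IDs below are simulated, not from the registry:

```python
import random
import statistics

random.seed(0)

# Simulated data (not from the Get With The Guidelines-Resuscitation
# registry): resuscitation durations in minutes among non-survivors,
# keyed by a hypothetical hospital id.
durations_by_hospital = {
    hospital_id: [random.randint(5, 45) for _ in range(50)]
    for hospital_id in range(20)
}

# Each hospital's median duration in non-survivors is its measure of
# the overall tendency toward longer resuscitation attempts.
hospital_medians = {
    h: statistics.median(d) for h, d in durations_by_hospital.items()
}

# Group hospitals into quartiles of that median.
q1, q2, q3 = statistics.quantiles(list(hospital_medians.values()), n=4)
shortest_quartile = [h for h, m in hospital_medians.items() if m <= q1]
longest_quartile = [h for h, m in hospital_medians.items() if m >= q3]
```

Survival could then be compared across these quartile groups, which is the role the multilevel regression plays in the study.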
Background: Many health systems are exploring how to implement low-dose computed tomography (LDCT) screening programs that are effective and patient-centered.
Objective: To examine factors that influence when LDCT screening is preference-sensitive.
Design: State-transition microsimulation model.
Data Sources: Two large randomized trials, published decision analyses, and the SEER (Surveillance, Epidemiology, and End Results) cancer registry.
Target Population: U.S.-representative sample of simulated patients meeting current U.S. Preventive Services Task Force criteria for screening eligibility.
Time Horizon: Lifetime.
Perspective: Individual.
Intervention: LDCT screening annually for 3 years.
Outcome Measures: Lifetime quality-adjusted life-year gains and reduction in lung cancer mortality. To examine the effect of preferences on net benefit, disutilities (the "degree of dislike") quantifying the burden of screening and follow-up were varied across a likely range. The effect of varying the rate of false-positive screening results and overdiagnosis associated with screening was also examined.
Results of Base-Case Analysis: Moderate differences in preferences about the downsides of LDCT screening influenced whether screening was appropriate for eligible persons with annual lung cancer risk less than 0.3% or life expectancy less than 10.5 years. For higher-risk eligible persons with longer life expectancy (roughly 50% of the study population), the benefits of LDCT screening overcame even highly negative views about screening and its downsides.
Results of Sensitivity Analysis: Rates of false-positive findings and overdiagnosed lung cancer were not highly influential.
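The idea that preferences matter only below certain risk and life-expectancy thresholds can be illustrated with a toy net-benefit calculation. This is a deliberate simplification with made-up parameters, not the authors' microsimulation model:

```python
# Toy model, not the study's microsimulation: assume the QALY benefit of
# screening scales with annual lung cancer risk, remaining life
# expectancy, and an illustrative 20% mortality-reduction factor.
MORTALITY_REDUCTION = 0.20  # assumed value, for illustration only

def net_qaly_gain(annual_cancer_risk, life_expectancy_years, disutility):
    benefit = annual_cancer_risk * life_expectancy_years * MORTALITY_REDUCTION
    return benefit - disutility

# For a lower-risk person, plausible disutilities flip the sign of the
# net benefit, i.e. screening is preference-sensitive ...
low_risk = [net_qaly_gain(0.002, 10, d) for d in (0.001, 0.01)]

# ... while for a higher-risk person with longer life expectancy, even a
# strongly negative view of screening leaves a net gain.
high_risk = [net_qaly_gain(0.01, 20, d) for d in (0.001, 0.01)]
```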
Limitation: The quantitative thresholds that were identified may vary depending on the structure of the microsimulation model.
Conclusion: Identifying circumstances in which LDCT screening is more versus less preference-sensitive may help clinicians personalize their screening discussions, tailoring to both preferences and clinical benefit.
Primary Funding Source: None.
Intensive glucose control is understood to prevent complications in adults with type 2 diabetes. We aimed to more precisely estimate the effects of more intensive glucose control, compared with less intensive glucose control, on the risk of microvascular events.
In this meta-analysis, we obtained de-identified individual participant data from large-scale randomised controlled trials assessing the effects of more intensive glucose control versus less intensive glucose control in adults with type 2 diabetes, with at least 1000 patient-years of follow-up in each treatment group and a minimum of 2 years average follow-up on randomised treatment. The prespecified and standardised primary outcomes were kidney events (a composite of end-stage kidney disease, renal death, development of an estimated glomerular filtration rate <30 mL/min per 1·73 m², or development of overt diabetic nephropathy), eye events (a composite of requirement for retinal photocoagulation therapy or vitrectomy, development of proliferative retinopathy, or progression of diabetic retinopathy), and nerve events (a composite of new loss of vibratory sensation, ankle reflexes, or light touch). We used a random-effects model to calculate overall estimates of effect.
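The pooling step can be sketched with a DerSimonian-Laird random-effects calculation on log hazard ratios. The four effect estimates and standard errors below are invented for illustration; they are not the trial results:

```python
import math

# Invented inputs for illustration (not the ACCORD/ADVANCE/UKPDS/VADT
# estimates): per-trial hazard ratios and standard errors of log(HR).
log_hr = [math.log(x) for x in (0.75, 0.85, 0.80, 0.90)]
se = [0.10, 0.08, 0.12, 0.09]

w = [1 / s**2 for s in se]  # inverse-variance (fixed-effect) weights
pooled_fe = sum(wi * y for wi, y in zip(w, log_hr)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-trial variance tau^2.
q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, log_hr))
df = len(log_hr) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each trial's variance.
w_re = [1 / (s**2 + tau2) for s in se]
pooled_re = sum(wi * y for wi, y in zip(w_re, log_hr)) / sum(w_re)
print(round(math.exp(pooled_re), 2))  # pooled HR, about 0.83 for these inputs
```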
We included four trials (ACCORD, ADVANCE, UKPDS, and VADT) with 27 049 participants. 1626 kidney events, 795 eye events, and 7598 nerve events were recorded during the follow-up period (median 5·0 years, IQR 4·5-5·0). Compared with less intensive glucose control, more intensive glucose control resulted in an absolute difference of -0·90% (95% CI -1·22 to -0·58) in mean HbA1c at completion of follow-up. The relative risk was reduced by 20% for kidney events (hazard ratio 0·80, 95% CI 0·72 to 0·88; p<0·0001) and by 13% for eye events (0·87, 0·76 to 1·00; p=0·04), but was not reduced for nerve events (0·98, 0·87 to 1·09; p=0·68).
More intensive glucose control over 5 years reduced both kidney and eye events. Glucose lowering remains important for the prevention of long-term microvascular complications in adults with type 2 diabetes.
Funding: None.
Randomized clinical trials (RCTs) are conducted to guide clinicians' selection of therapies for individual patients. Currently, RCTs in critical care often report an overall mean effect and selected individual subgroups. Yet work in other fields suggests that such reporting practices can be improved. Specifically, this Critical Care Perspective reviews recent work on so-called "heterogeneity of treatment effect" (HTE) by baseline risk and extends that work to examine its applicability to trials of acute respiratory failure and severe sepsis. Because patients in RCTs in critical care medicine (and patients in intensive care units) have wide variability in their risk of death, these patients will have wide variability in the absolute benefit that they can derive from a given therapy. If the side effects of the therapy are not perfectly collinear with the treatment benefits, this will result in HTE, where different patients experience quite different expected benefits of a therapy. We use simulations of RCTs to demonstrate that such HTE could result in apparent paradoxes, including: (1) positive trials of therapies that are beneficial overall but consistently harm or have little benefit to low-risk patients who met enrollment criteria, and (2) overall negative trials of therapies that still consistently benefit high-risk patients. We further show that these results persist even in the presence of causes of death unmodified by the treatment under study. These results have implications for reporting and analyzing RCT data, both to better understand how our therapies work and to improve the bedside applicability of RCTs. We suggest a plan for measurement in future RCTs in the critically ill.
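The core arithmetic behind these apparent paradoxes is that absolute benefit scales with baseline risk while harms often do not. A minimal sketch, with invented parameters rather than values from any trial:

```python
# Invented parameters for illustration: a therapy with a constant 20%
# relative risk reduction in death but a fixed 1% absolute harm.
RELATIVE_RISK_REDUCTION = 0.20
ABSOLUTE_HARM = 0.01

def net_absolute_benefit(baseline_mortality_risk):
    # Absolute benefit grows with baseline risk; harm is constant.
    return baseline_mortality_risk * RELATIVE_RISK_REDUCTION - ABSOLUTE_HARM

low_risk_patient = net_absolute_benefit(0.03)   # negative: net harm
high_risk_patient = net_absolute_benefit(0.40)  # positive: net benefit

# A trial enrolling mostly high-risk patients can therefore be
# "positive" overall while still harming its low-risk enrollees, and
# vice versa for a trial dominated by low-risk enrollees.
```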
To determine the risk factors for severe hypoglycemia and the association between severe hypoglycemia and serious cardiovascular adverse events and cardiovascular and all-cause mortality in the Veterans Affairs Diabetes Trial (VADT).
This post hoc analysis of data from the VADT included 1,791 military veterans (age 60.5 ± 9.0 years) with suboptimally controlled type 2 diabetes (HbA1c 9.4 ± 2.0%) of 11.5 ± 7.5 years disease duration with or without known cardiovascular disease and additional cardiovascular risk factors. Participants were randomized to intensive (HbA1c <7.0%) versus standard (HbA1c <8.5%) glucose control.
The rate of severe hypoglycemia in the intensive treatment group was 10.3 per 100 patient-years compared with 3.7 per 100 patient-years in the standard treatment group (P < 0.001). In multivariable analysis, insulin use at baseline (P = 0.02), proteinuria (P = 0.009), and autonomic neuropathy (P = 0.01) were independent risk factors for severe hypoglycemia, and higher BMI was protective (P = 0.017). Severe hypoglycemia within the past 3 months was associated with an increased risk of serious cardiovascular events (P = 0.032), cardiovascular mortality (P = 0.012), and total mortality (P = 0.024). However, there was a relatively greater increased risk for total mortality in the standard group compared with the intensive group (P = 0.019). The association between severe hypoglycemia and cardiovascular events increased significantly as overall cardiovascular risk increased (P = 0.012).
Severe hypoglycemic episodes within the previous 3 months were associated with increased risk for major cardiovascular events and cardiovascular and all-cause mortality regardless of glycemic treatment group assignment. Standard therapy further increased the risk for all-cause mortality after severe hypoglycemia.
Randomized controlled trials (RCTs) have yielded varying estimates of the benefit of flexible sigmoidoscopy (FS) screening for colorectal cancer (CRC). Our objective was to more precisely estimate the effect of FS-based screening on the incidence and mortality of CRC by performing a meta-analysis of published RCTs.
Medline and Embase databases were searched for eligible articles published between 1966 and 28 May 2012. After screening 3,319 citations and 29 potentially relevant articles, two reviewers identified five RCTs evaluating the effect of FS screening on the incidence and mortality of CRC. The reviewers independently extracted relevant data; discrepancies were resolved by consensus. The quality of included studies was assessed using criteria set out by the Evidence-Based Gastroenterology Steering Group. Random effects meta-analysis was performed. The five RCTs meeting eligibility criteria were determined to be of high methodologic quality and enrolled 416,159 total subjects. Four European studies compared FS to no screening and one study from the United States compared FS to usual care. By intention to treat analysis, FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (relative risk [RR] 0.82, 95% CI 0.73-0.91, p<0.001; number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59-0.76, p<0.001; NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65-0.80, p<0.001; NNS = 850). The efficacy estimate (the amount of benefit for those who actually adhered to the recommended treatment) suggested that FS screening reduced CRC incidence by 32% (p<0.001) and CRC-related mortality by 50% (p<0.001). Limitations of this meta-analysis include heterogeneity in the design of the included trials, absence of studies from Africa, Asia, or South America, and lack of studies comparing FS with colonoscopy or stool-based testing.
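As a consistency check on the reported figures, the NNS is the reciprocal of the absolute risk reduction, which in turn links the relative risk to the implied control-group incidence. This is back-of-envelope arithmetic, not a reproduction of the paper's exact computation:

```python
# Back-of-envelope check (not the paper's exact computation): with
# RR = 0.82 and NNS = 361 for overall CRC incidence,
rr = 0.82
nns = 361

arr = 1 / nns                       # absolute risk reduction, about 0.0028
control_incidence = arr / (1 - rr)  # implied control-group incidence, ~1.5%
```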
This meta-analysis of randomized controlled trials demonstrates that FS-based screening significantly reduces the incidence and mortality of colorectal cancer in average-risk patients.
Policymakers and researchers are strongly encouraging clinicians to support patient autonomy through shared decision-making (SDM). In setting policies for clinical care, decision-makers need to understand that current models of SDM have tended to focus on major decisions (e.g., surgeries and chemotherapy) and focused less on everyday primary care decisions. Most decisions in primary care are substantive everyday decisions: intermediate-stakes decisions that occur dozens of times every day, yet are non-trivial for patients, such as whether routine mammography should start at age 40, 45, or 50. Expectations that busy clinicians use current models of SDM (here referred to as “detailed” SDM) for these decisions can feel overwhelming to clinicians. Evidence indicates that detailed SDM is simply not realistic for most of these decisions, and without a feasible alternative, clinicians usually default to a decision-making approach with little to no personalization. We propose, for discussion and refinement, a compromise approach to personalizing these decisions (everyday SDM). Everyday SDM is based on a feasible process for supporting patient autonomy that also allows clinicians to continue being respectful health advocates for their patients. We propose that alternatives to detailed SDM are needed to make progress toward more patient-centered care.
Type 2 diabetes mellitus is common, and treatment to correct blood glucose levels is standard. However, treatment burden starts years before treatment benefits accrue. Because guidelines often ignore treatment burden, many patients with diabetes may be overtreated.
To examine how treatment burden affects the benefits of intensive vs moderate glycemic control in patients with type 2 diabetes.
We estimated the effects of hemoglobin A1c (HbA1c) reduction on diabetes outcomes and overall quality-adjusted life years (QALYs) using a Markov simulation model. Model probabilities were based on estimates from randomized trials and observational studies. Simulated patients were based on adult patients with type 2 diabetes drawn from the National Health and Nutrition Examination Survey.
Glucose lowering with oral agents or insulin in type 2 diabetes.
Main outcomes were QALYs and reduction in risk of microvascular and cardiovascular diabetes complications.
Assuming a low treatment burden (0.001, or 0.4 lost days per year), treatment that lowered HbA1c level by 1 percentage point provided benefits ranging from 0.77 to 0.91 QALYs for simulated patients who received a diagnosis at age 45 years to 0.08 to 0.10 QALYs for those who received a diagnosis at age 75 years. An increase in treatment burden (0.01, or 3.7 days lost per year) resulted in HbA1c level lowering being associated with more harm than benefit in those aged 75 years. Across all ages, patients who viewed treatment as more burdensome (0.025-0.05 disutility) experienced a net loss in QALYs from treatments to lower HbA1c level.
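The day equivalents quoted alongside the disutility values follow from multiplying the annual utility decrement by 365, a simple check of the figures above:

```python
# A utility decrement applied for a full year is equivalent to losing
# (disutility x 365) fully healthy days in that year.
lost_days = {d: d * 365 for d in (0.001, 0.01)}

# 0.001 -> about 0.4 days per year; 0.01 -> about 3.7 days per year,
# matching the burden figures quoted in the results.
```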
Improving glycemic control can provide substantial benefits, especially for younger patients; however, for most patients older than 50 years with an HbA1c level less than 9% receiving metformin therapy, additional glycemic treatment usually offers at most modest benefits. Furthermore, the magnitude of benefit is sensitive to patients' views of the treatment burden, and even small treatment adverse effects result in net harm in older patients. The current approach of broadly advocating intensive glycemic control should be reconsidered; instead, treating patients with HbA1c levels less than 9% should be individualized on the basis of estimates of benefit weighed against the patient's views of the burdens of treatment.
Intensive blood pressure (BP) treatment can avert cardiovascular disease (CVD) events but can cause some serious adverse events. We sought to develop and validate risk models for predicting absolute risk difference (increased risk or decreased risk) for CVD events and serious adverse events from intensive BP therapy. A secondary aim was to test if the statistical method of elastic net regularization would improve the estimation of risk models for predicting absolute risk difference, as compared to a traditional backwards variable selection approach.
Cox models were derived from SPRINT trial data and validated on ACCORD-BP trial data to estimate risk of CVD events and serious adverse events; the models included terms for intensive BP treatment and heterogeneous response to intensive treatment. The Cox models were then used to estimate the absolute reduction in probability of CVD events (benefit) and absolute increase in probability of serious adverse events (harm) for each individual from intensive treatment. We compared the method of elastic net regularization, which uses repeated internal cross-validation to select variables and estimate coefficients in the presence of collinearity, to a traditional backwards variable selection approach. Data from 9,069 SPRINT participants with complete data on covariates were utilized for model development, and data from 4,498 ACCORD-BP participants with complete data were utilized for model validation. Participants were exposed to intensive (goal systolic pressure < 120 mm Hg) versus standard (<140 mm Hg) treatment. Two composite primary outcome measures were evaluated: (i) CVD events/deaths (myocardial infarction, acute coronary syndrome, stroke, congestive heart failure, or CVD death), and (ii) serious adverse events (hypotension, syncope, electrolyte abnormalities, bradycardia, or acute kidney injury/failure). The model for CVD chosen through elastic net regularization included interaction terms suggesting that older age, black race, higher diastolic BP, and higher lipids were associated with greater CVD risk reduction benefits from intensive treatment, while current smoking was associated with fewer benefits. The model for serious adverse events chosen through elastic net regularization suggested that male sex, current smoking, statin use, elevated creatinine, and higher lipids were associated with greater risk of serious adverse events from intensive treatment. 
SPRINT participants in the highest predicted benefit subgroup had a number needed to treat (NNT) of 24 to prevent 1 CVD event/death over 5 years (absolute risk reduction [ARR] = 0.042, 95% CI: 0.018, 0.066; P = 0.001), those in the middle predicted benefit subgroup had a NNT of 76 (ARR = 0.013, 95% CI: -0.0001, 0.026; P = 0.053), and those in the lowest subgroup had no significant risk reduction (ARR = 0.006, 95% CI: -0.007, 0.018; P = 0.71). Those in the highest predicted harm subgroup had a number needed to harm (NNH) of 27 to induce 1 serious adverse event (absolute risk increase [ARI] = 0.038, 95% CI: 0.014, 0.061; P = 0.002), those in the middle predicted harm subgroup had a NNH of 41 (ARI = 0.025, 95% CI: 0.012, 0.038; P < 0.001), and those in the lowest subgroup had no significant risk increase (ARI = -0.007, 95% CI: -0.043, 0.030; P = 0.72). In ACCORD-BP, participants in the highest subgroup of predicted benefit had significant absolute CVD risk reduction, but the overall ACCORD-BP participant sample was skewed towards participants with less predicted benefit and more predicted risk than in SPRINT. The models chosen through traditional backwards selection had similar ability to identify absolute risk difference for CVD as the elastic net models, but poorer ability to correctly identify absolute risk difference for serious adverse events. A key limitation of the analysis is the limited sample size of the ACCORD-BP trial, which expanded confidence intervals for ARI among persons with type 2 diabetes. Additionally, it is not possible to mechanistically explain the physiological relationships underlying the heterogeneous treatment effects captured by the models, since the study was an observational secondary data analysis.
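The NNT and NNH values above are reciprocals of the absolute risk differences, rounded up to whole patients. A quick consistency check (the 76 vs. 77 discrepancy for the middle subgroup presumably reflects rounding of the published ARR):

```python
import math

# NNT/NNH = 1 / absolute risk difference, rounded up to a whole patient.
nnt_highest_benefit = math.ceil(1 / 0.042)  # 24, matching the reported NNT
nnh_highest_harm = math.ceil(1 / 0.038)     # 27, matching the reported NNH

# 77 here; the paper reports 76, presumably computed from the
# unrounded ARR rather than the published 0.013.
nnt_middle_benefit = math.ceil(1 / 0.013)
```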
We found that predictive models could help identify subgroups of participants in both SPRINT and ACCORD-BP who had lower versus higher ARRs in CVD events/deaths with intensive BP treatment, and participants who had lower versus higher ARIs in serious adverse events.