Summary

Background Antimicrobial stewardship is advocated to improve the quality of antimicrobial use. We did a systematic review and meta-analysis to assess whether antimicrobial stewardship objectives had any effects in hospitals and long-term care facilities on four predefined patient outcomes: clinical outcomes, adverse events, costs, and bacterial resistance rates.

Methods We identified 14 stewardship objectives and did a separate systematic search for articles relating to each one in Embase, Ovid MEDLINE, and PubMed. Studies were included if they reported data on any of the four predefined outcomes in patients in whom the specific antimicrobial stewardship objective was assessed, comparing findings in patients in whom the objective was or was not met. We used a random-effects model to calculate relative risk reductions with relative risks and 95% CIs.

Findings We identified 145 unique studies with data on nine stewardship objectives. The quality of evidence was generally low, and heterogeneity between studies was mostly moderate to high. For six objectives (empirical therapy according to guidelines, de-escalation of therapy, switch from intravenous to oral treatment, therapeutic drug monitoring, use of a list of restricted antibiotics, and bedside consultation), the overall evidence showed significant benefits for one or more of the four outcomes. Guideline-adherent empirical therapy was associated with a relative risk reduction for mortality of 35% (relative risk 0·65, 95% CI 0·54–0·80, p<0·0001), and de-escalation of therapy with a reduction of 56% (0·44, 0·30–0·66, p<0·0001). Evidence of effects was less clear for adjusting therapy according to renal function, discontinuing therapy on the basis of a lack of clinical or microbiological evidence of infection, and having a local antibiotic guide. We found no reports for the remaining five stewardship objectives or for long-term care facilities.
Interpretation Our findings of beneficial effects on outcomes for nine antimicrobial stewardship objectives suggest that these objectives can guide stewardship teams in their efforts to improve the quality of antibiotic use in hospitals.

Funding Dutch Working Party on Antibiotic Policy and Netherlands National Institute for Public Health and the Environment.
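The pooled estimates above come from a random-effects meta-analysis. As a rough illustration of how such pooling works, here is a minimal DerSimonian-Laird sketch with made-up study data; it is not the review's actual dataset or analysis software, and the study values are hypothetical:

```python
import math

def random_effects_pool(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of log relative risks."""
    k = len(log_rrs)
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    y_fe = sum(wi * yi for wi, yi in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    y_re = sum(wi * yi for wi, yi in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    rr = math.exp(y_re)
    ci = (math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se))
    return rr, ci, tau2

# Hypothetical studies: log relative risks and their variances
rr, (lo, hi), tau2 = random_effects_pool([-0.50, -0.35, -0.45], [0.04, 0.05, 0.06])
rrr = 1.0 - rr  # relative risk reduction
```

The relative risk reductions reported in the abstract follow directly from the pooled relative risk: a pooled RR of 0·65 corresponds to the 35% figure, since RRR = 1 − RR.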
In southeast Asia, antibiotic prescription in febrile patients attending primary care is common and a probable contributor to the high burden of antimicrobial resistance. The objective of this trial was to explore whether C-reactive protein (CRP) testing at point of care could rationalise antibiotic prescription in primary care, comparing two proposed thresholds for classifying CRP concentrations as low or high to guide antibiotic treatment.
We did a multicentre, open-label, randomised, controlled trial in participants aged at least 1 year with a documented fever or a chief complaint of fever (regardless of previous antibiotic intake and comorbidities other than malignancies) recruited from six public primary care units in Thailand and three primary care clinics and one outpatient department in Myanmar. Individuals were randomly assigned using a computer-based randomisation system at a ratio of 1:1:1 to either the control group or one of two CRP testing groups, which used CRP thresholds of 20 mg/L (group A) or 40 mg/L (group B) to guide antibiotic prescription. Health-care providers were masked to allocation between the two intervention groups but not to the control group. The primary outcomes were the prescription of any antibiotic from day 0 to day 5 and the proportion of patients who were prescribed an antibiotic when CRP concentrations were above and below the 20 mg/L or 40 mg/L thresholds. The primary outcome was analysed in the intention-to-treat and per-protocol populations. The trial is registered with ClinicalTrials.gov, number NCT02758821, and is now completed.
Between June 8, 2016, and Aug 25, 2017, we recruited 2410 patients, of whom 803 were randomly assigned to CRP group A, 800 to CRP group B, and 807 to the control group. 598 patients in CRP group A, 593 in CRP group B, and 767 in the control group had follow-up data for both day 5 and day 14 and had been prescribed antibiotics (or not) in accordance with test results (per-protocol population). During the trial, 318 (39%) of 807 patients in the control group were prescribed an antibiotic by day 5, compared with 290 (36%) of 803 patients in CRP group A and 275 (34%) of 800 in CRP group B. The adjusted odds ratio (aOR) of 0·80 (95% CI 0·65–0·98) and risk difference of −5·0 percentage points (95% CI −9·7 to −0·3) between group B and the control group were significant, although smaller than anticipated, whereas the reduction in prescribing in group A compared with the control group was not significant (aOR 0·86, 95% CI 0·70–1·06; risk difference −3·3 percentage points, 95% CI −8·0 to 1·4). Patients with high CRP concentrations in both intervention groups were more likely to be prescribed an antibiotic than those in the control group (CRP ≥20 mg/L: group A vs control group, p<0·0001; CRP ≥40 mg/L: group B vs control group, p<0·0001), and those with low CRP concentrations were more likely to have an antibiotic withheld (CRP <20 mg/L: group A vs control group, p<0·0001; CRP <40 mg/L: group B vs control group, p<0·0001). 24 serious adverse events were recorded, consisting of 23 hospital admissions and one death, which occurred in CRP group A. Only one serious adverse event was thought to be possibly related to the study (a hospital admission in CRP group A).
In febrile patients attending primary care, CRP testing at point of care with a threshold of 40 mg/L resulted in a modest but significant reduction in antibiotic prescribing, with patients with high CRP concentrations more likely to be prescribed an antibiotic, and no evidence of a difference in clinical outcomes. This study extends the evidence base from lower-income settings supporting the use of CRP tests to rationalise antibiotic use in primary care patients with an acute febrile illness. A key limitation of this study is the individual rather than cluster randomised design, which might have resulted in contamination between the study groups, reducing the effect size of the intervention.
Wellcome Trust Institutional Strategic Support Fund grant (105605/Z/14/Z) and Foundation for Innovative New Diagnostics (FIND) funding from the Australian Government.
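The headline comparison between CRP group B and the control group can be sanity-checked from the raw counts in the abstract. The sketch below recomputes the crude (unadjusted) risk difference and odds ratio for a 2×2 table; note that the paper's reported values (aOR 0·80, −5·0 percentage points) are covariate-adjusted, so the crude figures only approximate them:

```python
def crude_effects(events_treat, n_treat, events_ctrl, n_ctrl):
    """Crude risk difference (in percentage points) and odds ratio for a 2x2 table."""
    risk_t = events_treat / n_treat
    risk_c = events_ctrl / n_ctrl
    rd_pp = (risk_t - risk_c) * 100            # risk difference, percentage points
    odds_t = events_treat / (n_treat - events_treat)
    odds_c = events_ctrl / (n_ctrl - events_ctrl)
    return rd_pp, odds_t / odds_c

# Counts from the abstract: group B, 275 of 800 prescribed; control, 318 of 807
rd_pp, or_crude = crude_effects(275, 800, 318, 807)
```

The crude risk difference comes out at about −5·0 percentage points and the crude odds ratio at about 0·81, close to the adjusted figures reported in the trial.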
Summary

Background Dual antiplatelet therapy (DAPT) cessation increases the risk of adverse events after percutaneous coronary intervention (PCI). Whether risk changes over time, depends on the underlying reason for DAPT cessation, or both is unknown. We assessed associations between different modes of DAPT cessation and cardiovascular risk after PCI.

Methods The PARIS (patterns of non-adherence to anti-platelet regimens in stented patients) registry is a prospective observational study of patients undergoing PCI with stent implantation at 15 clinical sites in the USA and Europe between July 1, 2009, and Dec 2, 2010. Adult patients (aged 18 years or older) undergoing successful stent implantation in one or more native coronary arteries and discharged on DAPT were eligible for enrolment. Patients were followed up at months 1, 6, 12, and 24 after implantation. Prespecified categories for DAPT cessation were physician-recommended discontinuation, brief interruption (for surgery), and disruption (because of non-compliance or bleeding). All adverse events and episodes of DAPT cessation were independently adjudicated. Using Cox models with time-varying covariates, we examined the effect of DAPT cessation on major adverse cardiac events (MACE; a composite of cardiac death, definite or probable stent thrombosis, myocardial infarction, or target-lesion revascularisation). Incidence rates for DAPT cessation and adverse events were calculated as Kaplan-Meier estimates of time to first event. This study is registered with ClinicalTrials.gov, number NCT00998127.

Findings We enrolled 5031 patients undergoing PCI, of whom 5018 made up the final study population. Over 2 years, the overall incidence of any DAPT cessation was 57·3%. The rate of discontinuation was 40·8%, of interruption 10·5%, and of disruption 14·4%. The corresponding overall 2 year MACE rate was 11·5%, most of which (74%) occurred while patients were taking DAPT.
Compared with patients on DAPT, the adjusted hazard ratio (HR) for MACE was 1·41 (95% CI 0·94–2·12; p=0·10) for interruption and 1·50 (1·14–1·97; p=0·004) for disruption. Within 7 days, 8–30 days, and more than 30 days after disruption, adjusted HRs were 7·04 (3·31–14·95), 2·17 (0·97–4·88), and 1·30 (0·97–1·76), respectively. By contrast, patients who discontinued DAPT had lower MACE risk than those who remained on it (HR 0·63, 95% CI 0·46–0·86). Results were similar after exclusion of patients receiving bare-metal stents and with an alternative MACE definition that did not include target-lesion revascularisation.

Interpretation In a real-world setting, for patients undergoing PCI and discharged on DAPT, cardiac risk after DAPT cessation depends on the clinical circumstance and reason for cessation and attenuates over time. Although most events after PCI occur in patients on DAPT, the early risk of events after disruption is substantial irrespective of stent type.

Funding Bristol-Myers Squibb and Sanofi-Aventis.
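The PARIS registry reports incidence as Kaplan-Meier estimates of time to first event. A minimal sketch of the product-limit estimator, using toy follow-up data rather than registry data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimates.

    times:  follow-up time for each patient
    events: 1 if the event occurred at that time, 0 if censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        # count events and censorings tied at this time
        d = sum(1 for j in order[i:] if times[j] == t and events[j])
        c = sum(1 for j in order[i:] if times[j] == t and not events[j])
        if d:
            surv *= 1 - d / at_risk    # survival drops only at event times
            curve.append((t, surv))
        at_risk -= d + c
        i += d + c
    return curve

# Toy data: events at t=1 and t=3, one patient censored at t=2
curve = kaplan_meier([1, 2, 3], [1, 0, 1])
```

Censored patients leave the risk set without reducing the survival estimate, which is why the Kaplan-Meier approach suits registry follow-up with staggered dropout.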
Recent thymic emigrants can be identified by T-cell receptor excision circles (TRECs) formed during T-cell receptor rearrangement. Decreasing numbers of TRECs have been observed with aging and in human immunodeficiency virus (HIV)-1-infected individuals, suggesting thymic impairment. Here, we show that in healthy individuals, declining thymic output will affect the TREC content only when accompanied by naive T-cell division. The rapid decline in TRECs observed during HIV-1 infection and the increase following HAART are better explained not by thymic impairment but by changes in peripheral T-cell division rates. Our data indicate that TREC content in healthy individuals is only indirectly related to thymic output, and in HIV-1 infection it is mainly affected by immune activation.
In human immunodeficiency virus (HIV)-1 infection, highly increased T-cell turnover was proposed to cause exhaustion of lymphocyte production and consequently development of AIDS. Here, we investigated cell proliferation, as measured by expression of the Ki-67 nuclear antigen, in peripheral blood CD4+ and CD8+ lymphocyte subpopulations before and during highly active antiretroviral therapy (HAART). In untreated HIV-1 infection, both the percentage and number of Ki-67+ CD4+ and CD8+ lymphocytes were significantly increased compared with values obtained from healthy individuals. A more than 10-fold increase in the percentage of dividing naive CD4+ T cells in the blood was found when the number of these cells was below 100 per μL. HAART induced an immediate decline in Ki-67 antigen expression, despite often very low CD4+ T-cell numbers, arguing against increased proliferation being a homeostatic response. After approximately 24 weeks of HAART, a transient increase in the number of proliferating cells was seen, but only in the CD4+CD27+ memory pool. In the CD8+ T-cell compartment, the number of dividing cells was elevated 20- to 25-fold. This increase was most notable in the CD27+CD45RO+ and CD27−CD45RO+ memory CD8+ T-cell pools, corresponding with the degree of expansion of these subsets. Reduction of plasma HIV-RNA load by HAART was accompanied by a decrease in the numbers and percentages of dividing cells in all CD8+ T-cell subsets. Taken together, our results indicate that peripheral T-cell proliferation is a consequence of generalized immune activation. (Blood. 2000;95:249-255)
To evaluate the efficacy and tolerability of gabapentin enacarbil (GEn) 1200 mg or 600 mg compared with placebo in subjects with moderate-to-severe primary restless legs syndrome (RLS).
This 12-week, multicenter, double-blind, placebo-controlled study randomized subjects (1:1:1) to GEn 1200 mg, 600 mg, or placebo. The co-primary endpoints were mean change from baseline in International Restless Legs Scale (IRLS) total score and the proportion of responders (rated as "very much" or "much" improved) on the investigator-rated Clinical Global Impression-Improvement scale (CGI-I) at Week 12 LOCF (last observation carried forward) for GEn 1200 mg compared with placebo. Secondary endpoints included GEn 600 mg compared with placebo on the IRLS and CGI-I at Week 12 LOCF, and subjective sleep measures. Safety and tolerability assessments included adverse events.
325 subjects were randomized (GEn 1200 mg = 113; 600 mg = 115; placebo = 97). GEn 1200 mg significantly improved mean ± SD IRLS total score at Week 12 LOCF (baseline: 23.2 ± 5.32; Week 12: 10.2 ± 8.03) compared with placebo (baseline: 23.8 ± 4.58; Week 12: 14.0 ± 7.87; adjusted mean treatment difference (AMTD): −3.5; p = 0.0015), and significantly more GEn 1200 mg-treated (77.5%) than placebo-treated (44.8%) subjects were CGI-I responders (p < 0.0001). Similar significant results were observed with GEn 600 mg for IRLS (AMTD: −4.3; p < 0.0001) and CGI-I (72.8% compared with 44.8%; p < 0.0001). GEn also significantly improved sleep outcomes (Post-Sleep Questionnaire, Pittsburgh Sleep Diary, and Medical Outcomes Sleep Scale) compared with placebo. The most commonly reported adverse events were somnolence (GEn 1200 mg = 18.0%; 600 mg = 21.7%; placebo = 2.1%) and dizziness (GEn 1200 mg = 24.3%; 600 mg = 10.4%; placebo = 5.2%). Dizziness increased with dose and led to discontinuation in 2 subjects (GEn 1200 mg, n = 1; GEn 600 mg, n = 1). Somnolence led to discontinuation in 3 subjects (all GEn 600 mg).
GEn 1200 mg and 600 mg significantly improve RLS symptoms and sleep disturbance compared with placebo and are generally well tolerated.
To distinguish between antigenic stimulation and CD4 T-cell homeostasis as the cause of T-cell hyperactivation in HIV infection, we studied T-cell activation in 47 patients before and during highly active antiretroviral therapy (HAART). We show that expression of human leukocyte antigen (HLA)-DR, CD38, and Ki67 on T cells decreased during HAART but remained elevated above normal values until week 48 of therapy. We confirm previous reports that T-cell activation correlates positively with plasma HIV RNA levels (suggesting antigenic stimulation) and negatively with CD4 count (suggesting CD4 T-cell homeostasis). However, these correlations may be spurious because of the well-established negative correlation between CD4 count and plasma HIV RNA levels. To resolve this conflict, we computed partial correlation coefficients. Correcting for CD4 counts, we show that plasma HIV RNA levels contributed to T-cell hyperactivation. Correcting for plasma HIV RNA levels, we show that CD4 T-cell depletion contributed to T-cell activation. Correcting for both, activation of CD4 and CD8 T cells remained positively correlated. Because this suggests that CD4 and CD8 T-cell activation is caused by a common additional factor, we conclude that antigenic stimulation by HIV or other (opportunistic) infections is the most parsimonious explanation for T-cell activation in HIV infection. Persistence of HIV antigens may explain why T-cell activation fails to revert to levels found in healthy individuals after 48 weeks of therapy.
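The partial-correlation step described above uses the standard first-order formula, r_xy·z = (r_xy − r_xz·r_yz)/√((1 − r_xz²)(1 − r_yz²)). A minimal sketch with made-up vectors (not the study's data) shows how a raw correlation can shrink, or even change sign, once a confounder is controlled for:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data: x and y each track the confounder z, but disagree with each other
z = [1, 2, 3, 4, 5]
x = [2, 1, 4, 3, 5]
y = [1, 3, 2, 5, 4]
raw = pearson(x, y)          # modest positive correlation driven by z
adj = partial_corr(x, y, z)  # strongly negative once z is controlled for
```

In this toy example the raw x-y correlation is positive only because both variables follow z; partialling z out reverses the sign, which mirrors the abstract's concern that the raw activation correlates could be spurious.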