Abstract Background We sought to compare the association of whole-blood lactate kinetics with survival in patients with septic shock undergoing early quantitative resuscitation. Methods This was a preplanned analysis of a multicenter, ED-based, randomized, controlled trial of early sepsis resuscitation. Inclusion criteria were suspected infection, two or more systemic inflammation criteria, either systolic BP < 90 mm Hg after a fluid bolus or lactate level > 4 mmol/L, two serial lactate measurements, and an initial lactate level > 2.0 mmol/L. We calculated the relative lactate clearance, the rate of lactate clearance, and the occurrence of early lactate normalization (decline to < 2.0 mmol/L within the first 6 h). Area under the receiver operating characteristic curve (AUC) and multivariate logistic regression were used to determine which lactate kinetic parameters were the strongest predictors of survival. Results The analysis included 187 patients, of whom 36% (n = 68) normalized their lactate level. Overall survival was 76.5% (143 of 187 patients), and the AUC of the initial lactate level for predicting survival was 0.64. The AUCs for relative lactate clearance and lactate clearance rate were 0.67 and 0.58, respectively. Lactate normalization was the strongest predictor of survival (adjusted OR, 5.2; 95% CI, 1.7–15.8), followed by lactate clearance ≥ 50% (OR, 4.0; 95% CI, 1.6–10.0). Lactate clearance ≥ 10% (OR, 1.6; 95% CI, 0.6–4.4) was not a significant independent predictor in this cohort. Conclusions In ED patients with a sepsis diagnosis, early lactate normalization during the first 6 h of resuscitation was the strongest independent predictor of survival and was superior to the other measures of lactate kinetics. Trial registry: ClinicalTrials.gov; No.: NCT00372502; URL: clinicaltrials.gov
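The three kinetic parameters the abstract compares can be made concrete with a short sketch. The formulas follow the standard definitions (relative clearance as the percent decline from the initial value; clearance rate as the absolute decline per hour; normalization as a follow-up value below 2.0 mmol/L). The function name and signature are illustrative, not taken from the trial protocol.

```python
def lactate_kinetics(initial, delayed, hours):
    """Compute the three lactate kinetic parameters described above.

    initial, delayed: serial whole-blood lactate values in mmol/L.
    hours: time elapsed between the two measurements.
    """
    relative_clearance = (initial - delayed) / initial * 100.0  # percent decline
    clearance_rate = (initial - delayed) / hours                # mmol/L per hour
    normalized = delayed < 2.0                                  # early normalization
    return relative_clearance, clearance_rate, normalized
```

For example, a patient whose lactate falls from 4.0 to 1.8 mmol/L over 6 h has a relative clearance of 55%, meeting both the ≥ 50% clearance and the normalization thresholds studied above.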
Study objective The Third International Consensus Definitions Task Force (SEP-3) proposed revised criteria defining sepsis and septic shock. We seek to evaluate the performance of the SEP-3 definitions for predicting inhospital mortality in an emergency department (ED) population and to compare it with the performance of the previous definitions. Methods This was a secondary analysis of 3 prospectively collected, observational cohorts of infected ED subjects aged 18 years or older. The primary outcome was all-cause inhospital mortality. In accordance with the SEP-3 definitions, we calculated test characteristics of sepsis (quick Sequential Organ Failure Assessment [qSOFA] score ≥ 2) and septic shock (vasopressor dependence plus lactate level > 2.0 mmol/L) for mortality and compared them with those of the original 1992 consensus definitions. Results We identified 7,754 ED patients with suspected infection overall; 117 had no documented mental status evaluation, leaving 7,637 patients in the analysis. The mortality rate for the overall population was 4.4% (95% confidence interval [CI] 3.9% to 4.9%). The mortality rate for patients with a qSOFA score greater than or equal to 2 was 14.2% (95% CI 12.2% to 16.2%), with a sensitivity of 52% (95% CI 46% to 57%) and specificity of 86% (95% CI 85% to 87%) for predicting mortality. The original systemic inflammatory response syndrome–based 1992 consensus sepsis definition had a 6.8% (95% CI 6.0% to 7.7%) mortality rate, sensitivity of 83% (95% CI 79% to 87%), and specificity of 50% (95% CI 49% to 51%). The SEP-3 septic shock mortality rate was 23% (95% CI 16% to 30%), with a sensitivity of 12% (95% CI 11% to 13%) and specificity of 98.4% (95% CI 98.1% to 98.7%). The original 1992 septic shock definition had a 22% (95% CI 17% to 27%) mortality rate, sensitivity of 23% (95% CI 18% to 28%), and specificity of 96.6% (95% CI 96.2% to 97.0%).
Conclusion Both the new SEP-3 and the original sepsis definitions stratify ED patients at risk for mortality, albeit with differing performance. For mortality prediction, the SEP-3 definitions had improved specificity, but at the cost of sensitivity. Choosing between the approaches requires a clearly intended target: greater sensitivity versus greater specificity.
Summary Background The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) present clinical criteria for the classification of patients with sepsis. We investigated the incidence and long-term outcomes of patients diagnosed with these classifications, which are currently unknown. Methods We did a retrospective analysis using data from 30,239 participants from the USA who were aged at least 45 years and enrolled in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) cohort. Participants were enrolled between Jan 25, 2003, and Oct 30, 2007. We identified hospital admissions from Feb 5, 2003, to Dec 31, 2012, and applied three classifications: infection plus systemic inflammatory response syndrome (SIRS) criteria, an elevated sepsis-related organ failure assessment (SOFA) score from Sepsis-3, and an elevated quick SOFA (qSOFA) score from Sepsis-3. We estimated incidence during the study period, in-hospital mortality, and 1-year mortality. Findings Of 2593 first infection events, 1526 met SIRS criteria, 1080 met SOFA criteria, and 378 met qSOFA criteria. Incidence was 8.2 events (95% CI 7.8–8.7) per 1000 person-years for SIRS, 5.8 events (5.4–6.1) per 1000 person-years for SOFA, and 2.0 events (1.8–2.2) per 1000 person-years for qSOFA. In-hospital mortality was higher for patients with an elevated qSOFA score (67 [23%] of 295 patients died) than for those with an elevated SOFA score (125 [13%] of 960 patients died) or those who met SIRS criteria (128 [9%] of 1392 patients died). Mortality at 1 year after discharge was also highest for patients with an elevated qSOFA score (29.4 deaths per 100 person-years [95% CI 22.3–38.7]), compared with those with an elevated SOFA score (22.6 deaths per 100 person-years [19.2–26.6]) or those who met SIRS criteria (14.7 deaths per 100 person-years [12.5–17.2]). Interpretation The SIRS, SOFA, and qSOFA classifications identified different incidences and mortality rates.
Our findings support the use of the SOFA and qSOFA classifications to identify patients with infection who are at elevated risk of poor outcomes. These classifications could be used in future epidemiological assessments and studies of patients with infection. Funding National Institute of Nursing Research, National Center for Research Resources, and National Institute of Neurological Disorders and Stroke.
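The person-year rates above (incidence per 1000 person-years, post-discharge mortality per 100 person-years) are both crude event rates with different scaling. A minimal sketch of that arithmetic; the function name and example numbers are illustrative:

```python
def rate_per(events, person_years, per=1000.0):
    """Crude event rate per `per` person-years of follow-up.

    Use per=1000 for incidence (as in the Findings above) and
    per=100 for post-discharge mortality rates.
    """
    return events / person_years * per
```

For instance, 82 first infection events accrued over 10,000 person-years of follow-up give an incidence of 8.2 per 1000 person-years.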
Study objective We determine whether abnormal (both low and high) central venous oxygen saturation (ScvO2) is associated with increased mortality in emergency department (ED) patients with suspected sepsis. Methods This was a secondary analysis of 4 prospectively collected registries of ED patients treated with early goal-directed therapy–based sepsis resuscitation protocols at 4 urban tertiary care hospitals. Inclusion criteria were sepsis, hypoperfusion defined by systolic blood pressure less than 90 mm Hg or lactate level greater than or equal to 4 mmol/L, and early goal-directed therapy treatment. ScvO2 levels were stratified into 3 groups: hypoxia (ScvO2 <70%), normoxia (ScvO2 71% to 89%), and hyperoxia (ScvO2 90% to 100%). The primary exposures were the initial ScvO2 and the maximum ScvO2 achieved, and the primary outcome was inhospital mortality. Multivariate analysis was performed. Results There were 619 patients who met criteria and were included. For the maximum ScvO2, compared with the mortality rate in the normoxia group of 96 of 465 (21%; 95% confidence interval [CI] 17% to 25%), both the hypoxia mortality rate, 25 of 62 (40%; 95% CI 29% to 53%), and the hyperoxia mortality rate, 31 of 92 (34%; 95% CI 25% to 44%), were significantly higher, and both remained significant in multivariate modeling. When the initial ScvO2 measurement was analyzed in a multivariate model, only the hyperoxia group had significantly higher mortality. Conclusion The maximum ScvO2 value achieved in the ED (both abnormally low and high) was associated with increased mortality. In multivariate analysis of the initial ScvO2, the hyperoxia group, but not the hypoxia group, was associated with increased mortality. This study suggests that future research aimed at targeting methods to normalize high ScvO2 values with therapies that improve microcirculatory flow or mitigate mitochondrial dysfunction may be warranted.
Abstract Purpose Side-stream dark-field microscopy is currently used to directly visualize the sublingual microcirculation at the bedside. Our experience has revealed inherent technical challenges in the image acquisition process. This article presents and assesses a quality assurance method for rating image acquisition quality before analysis. Materials and Methods We identified 6 common image capture and analysis problem areas in sublingual side-stream dark-field videos: illumination, duration, focus, content, stability, and pressure. We created the “Microcirculation Image Quality Score” by assigning a score of optimal (0 points), suboptimal but acceptable (1 point), or unacceptable (10 points) to each category (for further details, go to http://www.MicroscanAnalysis.blogspot.com). We evaluated 59 videos from a convenience sample of 34 unselected, noncritically ill emergency department patients to create a test set. Two raters, blinded to each other, applied the score. Any video with a cumulative score of 10 or higher (range, 0-60) was considered unacceptable for further analysis. Results We created the Microcirculation Image Quality Score and applied it to 59 videos. For this set, the mean (SD) quality score of passing videos was 1.68 (0.90) and that of failing videos was 15.74 (6.19), with 27 of 59 videos passing (score < 10). Failures were most often caused by pressure artifact. Interrater agreement for acceptability was assessed using the Cohen κ for each category: illumination (κ = 1.0), duration (κ = 1.0), focus (κ = 0.91), content (κ = 0.76), stability (κ = 0.71), and pressure (κ = 0.82), as well as for the overall pass-fail decision (score ≥ 10) (κ = 0.66). Conclusion Our Microcirculation Image Quality Score addresses many of the common areas where video quality can degrade. The criteria introduced are an objective way to assess the quality of image acquisition, with the goal of selecting videos of adequate quality for analysis.
The interrater reliability results in our preliminary study suggest that the Microcirculation Image Quality Score is reasonably repeatable between reviewers. Further assessment is warranted.
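The scoring scheme described above is simple enough to state directly in code: six categories, each rated 0, 1, or 10, summed, with a cumulative score of 10 or higher failing the video. Because a single unacceptable category scores 10 points, any one unacceptable rating fails the video by itself. This sketch is our reading of the abstract, not the authors' implementation.

```python
CATEGORIES = ("illumination", "duration", "focus", "content", "stability", "pressure")

def image_quality_score(ratings):
    """Sum the Microcirculation Image Quality Score and apply the pass/fail cut.

    ratings: dict mapping each of the six categories to 0 (optimal),
    1 (suboptimal but acceptable), or 10 (unacceptable).
    Returns (total score, acceptable-for-analysis).
    """
    assert set(ratings) == set(CATEGORIES)
    assert all(r in (0, 1, 10) for r in ratings.values())
    total = sum(ratings.values())          # possible range: 0 to 60
    return total, total < 10               # score >= 10 is unacceptable
```

For example, a video that is optimal in every category except an unacceptable pressure rating scores exactly 10 and fails, while a video that is merely suboptimal in all six categories scores 6 and passes.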
Study objective We assess the diagnostic accuracy of plasma neutrophil gelatinase–associated lipocalin (NGAL) for predicting acute kidney injury in emergency department (ED) patients with suspected sepsis. Methods We conducted a secondary analysis of a prospective observational study of a convenience sample of patients from 10 academic medical center EDs. Inclusion criteria were age 18 years or older; suspected infection or a serum lactate level greater than 2.5 mmol/L; 2 or more systemic inflammatory response syndrome criteria; and a subsequent serum creatinine level obtained within 12 to 72 hours of enrollment. Exclusion criteria were pregnancy, do-not-resuscitate status, cardiac arrest, or dialysis dependency. NGAL was measured in plasma collected at ED presentation. Acute kidney injury was defined as an increase in serum creatinine of greater than 0.5 mg/dL within 72 hours. Results There were 661 patients enrolled, with 24 cases (3.6%) of acute kidney injury developing within 72 hours after ED presentation. Median plasma NGAL levels were 134 ng/mL (interquartile range 57 to 277 ng/mL) in patients without acute kidney injury and 456 ng/mL (interquartile range 296 to 727 ng/mL) in patients with acute kidney injury. A plasma NGAL concentration greater than 150 ng/mL was 96% sensitive (95% confidence interval [CI] 79% to 100%) and 51% specific (95% CI 47% to 55%) for acute kidney injury. In comparison, achieving equivalent sensitivity with the initial serum creatinine level at ED presentation required a cutoff of 0.7 mg/dL, which resulted in a specificity of 17% (95% CI 14% to 20%). Conclusion In this preliminary investigation, increased plasma NGAL concentrations measured at ED presentation in patients with suspected sepsis were associated with the development of acute kidney injury. Our findings support NGAL as a promising new biomarker for acute kidney injury; however, further research is warranted.
Study objective Shock index is a widely reported tool for identifying patients at risk for circulatory collapse. We hypothesize that old age, diabetes, hypertension, and β- or calcium channel blockers weaken the association between shock index and mortality. Methods This was a cohort study of all first-time emergency department (ED) visits between 1995 and 2011 (n=111,019). We examined whether age 65 years or older, diabetes, hypertension, and use of β- or calcium channel blockers modified the association between shock index and 30-day mortality. Results The 30-day mortality was 3.0%. For all patients, with shock index less than 0.7 as the reference, a shock index of 0.7 to 1 had an adjusted odds ratio (OR) of 2.9 (95% confidence interval [CI] 2.7 to 3.2) for 30-day mortality, whereas a shock index greater than or equal to 1 had an OR of 10.5 (95% CI 9.3 to 11.7). The crude OR for shock index greater than or equal to 1 in patients aged 65 years or older was 8.2 (95% CI 7.2 to 9.4), compared with 18.9 (95% CI 15.6 to 23.0) in younger patients. Patients receiving β- or calcium channel blockers had an OR of 6.4 (95% CI 4.9 to 8.3) versus 12.3 (95% CI 11.0 to 13.8) in nonusers, and hypertensive patients had an OR of 8.0 (95% CI 6.6 to 9.4) versus 12.9 (95% CI 11.1 to 14.9) in normotensive patients. Diabetic patients had an OR of 9.3 (95% CI 6.7 to 12.9) versus 10.8 (95% CI 9.6 to 12.0) in nondiabetic patients. A shock index of 0.7 to 1 was associated with ORs greater than 1 (range 2.2 to 3.1), with no evident differences between subgroups. The adjusted analyses showed similar ORs. Conclusion Shock index is independently associated with 30-day mortality in a broad population of ED patients. Old age, hypertension, and use of β- or calcium channel blockers weaken this association. However, a shock index greater than or equal to 1 indicates substantial 30-day mortality risk in all ED patients.
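Shock index is simply heart rate divided by systolic blood pressure. A minimal sketch of the computation and of the three strata used above; the placement of the exact boundary at 1 follows the abstract's "greater than or equal to 1", and the labels are ours:

```python
def shock_index(heart_rate, systolic_bp):
    """Shock index: heart rate (beats/min) over systolic BP (mm Hg)."""
    return heart_rate / systolic_bp

def shock_index_group(si):
    """Assign a shock index value to the strata reported in the study above."""
    if si < 0.7:
        return "<0.7 (reference)"
    if si < 1.0:
        return "0.7 to <1"
    return ">=1"
```

For example, a heart rate of 110 beats/min with a systolic pressure of 100 mm Hg gives a shock index of 1.1, placing the patient in the highest-risk stratum.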
Abstract The study objective was to derive and validate a clinical decision rule for obtaining blood cultures in Emergency Department (ED) patients with suspected infection. This was a prospective, observational cohort study of consecutive adult ED patients from whom blood cultures were obtained. The study ran from February 1, 2000 through February 1, 2001. Patients were randomly assigned to derivation (2/3) or validation (1/3) sets. The outcome was “true bacteremia.” Features of the history, comorbid illness, physical examination, and laboratory testing were used to create a clinical decision rule. Among 3901 patients, 3730 (96%) were enrolled, with 305 (8.2%) episodes of true bacteremia. A decision rule was created with “major criteria” defined as: temperature > 39.5°C (103.0°F), indwelling vascular catheter, or clinical suspicion of endocarditis. “Minor criteria” were: temperature 38.3–39.4°C (101–102.9°F), age > 65 years, chills, vomiting, hypotension (systolic blood pressure < 90 mm Hg), neutrophils > 80%, white blood cell count > 18 k, bands > 5%, platelets < 150 k, and creatinine > 2.0. A blood culture is indicated by the rule if at least one major criterion or two minor criteria are present. Otherwise, patients are classified as “low risk” and cultures may be omitted. Only 4 (0.6%) low-risk patients in the derivation set and 3 (0.9%) low-risk patients in the validation set had positive cultures. The sensitivity was 98% (95% confidence interval [CI] 96–100%) in the derivation set and 97% (95% CI 94–100%) in the validation set. We developed and validated a promising clinical decision rule for predicting bacteremia in patients with suspected infection.
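The rule's decision logic (culture if at least one major criterion or at least two minor criteria, otherwise low risk) can be sketched directly. The criterion strings below are shorthand labels for the criteria listed in the abstract, chosen by us for illustration:

```python
MAJOR = ("temp > 39.5 C", "indwelling vascular catheter", "suspected endocarditis")
MINOR = ("temp 38.3-39.4 C", "age > 65 y", "chills", "vomiting",
         "SBP < 90 mm Hg", "neutrophils > 80%", "WBC > 18 k",
         "bands > 5%", "platelets < 150 k", "creatinine > 2.0")

def blood_culture_indicated(criteria):
    """Apply the decision rule described above.

    criteria: set of criterion labels (from MAJOR/MINOR) the patient meets.
    Returns True if a culture is indicated (>= 1 major or >= 2 minor);
    False means the patient is 'low risk' and cultures may be omitted.
    """
    majors = sum(c in criteria for c in MAJOR)
    minors = sum(c in criteria for c in MINOR)
    return majors >= 1 or minors >= 2
```

So a patient with only chills is classified low risk, while chills plus age over 65 years, or any single major criterion, triggers a culture.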
Abstract Background Prior studies of admitted geriatric syncope patients suggest that diagnostic tests affect management < 5% of the time; whether this holds among all emergency department (ED) patients with syncope remains unclear. Objectives To determine the diagnostic yield of routine testing in the hospital or after ED discharge among patients presenting to an ED with syncope. Methods A prospective, observational, cohort study of consecutive ED patients aged ≥ 18 years presenting with syncope was conducted. The four most commonly used tests (echocardiography, telemetry, ambulatory electrocardiographic monitoring, and troponin) were studied. Interobserver agreement on whether test results determined the etiology of the syncope was measured using kappa (κ) values. Results Of 570 patients with syncope, 73 (8%; 95% confidence interval 7–10%) had studies that were diagnostic. One hundred fifty patients (26%) had echocardiography, with 33 (22%) demonstrating a likely etiology of the syncopal event, such as critical valvular disease or significantly depressed left ventricular function (κ = 0.75). On hospitalization, 330 (58%) patients were placed on telemetry, and 19 (3%) had worrisome dysrhythmias (κ = 0.66). There were 317 (55%) patients who had troponin levels drawn, of whom 19 (3%) had positive results (κ = 1); 56 (10%) patients were discharged with monitoring, with significant findings in only 2 (0.4%) patients (κ = 0.65). Conclusion Although routine testing is prevalent in ED patients with syncope, the diagnostic yield is relatively low. Nevertheless, some testing, particularly echocardiography, may yield critical findings. Current efforts to reduce the cost of medical care by eliminating nondiagnostic testing and the increasing emphasis on practicing evidence-based medicine argue for more discriminate testing when evaluating syncope.
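Several of the abstracts above report interobserver agreement as Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance: kappa = (p_observed - p_expected) / (1 - p_expected). A minimal two-rater sketch of the standard formula (not the studies' statistical code):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    rater_a, rater_b: equal-length lists of labels (any hashable values).
    Degenerate (division by zero) if both raters always give one identical
    label, since chance agreement is then already 1.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items with identical labels.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

With four items rated [1, 1, 0, 0] and [1, 0, 0, 0], observed agreement is 0.75 and chance agreement is 0.5, giving kappa = 0.5, a "moderate" value well below the raw 75% agreement.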
Abstract Objective Although storage alters red blood cells, several recent randomized trials found no differences in clinical outcomes between patients transfused with red blood cells stored for shorter versus longer periods of time. The objective of this study was to determine whether storage impairs the in vivo ability of erythrocytes to traverse the microcirculation and deliver oxygen at the tissue level. Methods A subset of subjects from a clinical trial of cardiac surgery patients randomized to receive transfusions of red blood cells stored ≤10 days or ≥21 days were assessed for thenar eminence and cerebral tissue hemoglobin oxygen saturation (StO2) via near-infrared spectroscopy and for sublingual microvascular blood flow via side-stream dark-field videomicroscopy. Results Among 55 subjects, there was little change in the primary endpoint (thenar eminence StO2 from before to after transfusion of one unit), and the change was similar in the 2 groups: +1.7% (95% confidence interval, −0.3 to 3.8) for shorter storage and +0.8% (95% confidence interval, −1.1 to 2.9) for longer storage (P = .61). Similarly, no significant differences were observed for cerebral StO2 or sublingual microvascular blood flow. These parameters also did not differ from preoperatively to 1 day postoperatively, reflecting the absence of a cumulative effect of all red blood cell units transfused during this period. Conclusions There were no differences in thenar eminence or cerebral StO2, or in sublingual microcirculatory blood flow, in cardiac surgery patients transfused with red blood cells stored ≤10 days or ≥21 days. These results are consistent with the clinical outcomes in the parent study, which also did not differ, indicating that storage may not impair oxygen delivery by red blood cells in this setting.