Hemophagocytic lymphohistiocytosis (HLH) is a rare but often fatal hyperinflammatory syndrome that mimics sepsis in the critically ill. Diagnosis relies on the HLH-2004 criteria and the HScore, which were developed in pediatric patients and in non-critically ill adults, respectively. We therefore aimed to determine the sensitivity and specificity of the HLH-2004 criteria and the HScore in a cohort of critically ill adult patients.
In this further analysis of a retrospective observational study, patients ≥ 18 years admitted to at least one adult ICU at Charité - Universitätsmedizin Berlin between January 2006 and August 2018 with hyperferritinemia of ≥ 500 μg/L were included. Patients' charts were reviewed for clinically diagnosed or suspected HLH. Receiver operating characteristic (ROC) analysis was performed to determine prediction accuracy.
In total, 2623 patients with hyperferritinemia were included, of whom 40 had HLH. The best prediction accuracy for HLH diagnosis was found at a cutoff of 4 fulfilled HLH-2004 criteria (95.0% sensitivity and 93.6% specificity) and at an HScore cutoff of 168 (100% sensitivity and 94.1% specificity). Adjusting the HLH-2004 cutoffs for hyperferritinemia to 3000 μg/L and for fever to 38.2 °C increased sensitivity and specificity to 97.5% and 96.1%, respectively. Both a higher number of fulfilled HLH-2004 criteria (OR 1.513, 95% CI 1.372-1.667; p < 0.001) and a higher HScore (OR 1.011, 95% CI 1.009-1.013; p < 0.001) were significantly associated with in-hospital mortality.
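The cutoff analysis described above can be sketched as follows. This is an illustrative reimplementation, not the study's code, and the patient data below are invented for demonstration only:

```python
# Sketch of a diagnostic cutoff search, as used for the number of
# fulfilled HLH-2004 criteria and the HScore. All data here are
# hypothetical; they are NOT the study's patient records.

def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity when score >= cutoff predicts disease."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    return max(sorted(set(scores)),
               key=lambda c: sum(sens_spec(scores, labels, c)) - 1)

# Hypothetical example: number of fulfilled HLH-2004 criteria per patient.
criteria_counts = [1, 2, 2, 3, 4, 5, 5, 6, 3, 1, 2, 4]
has_hlh =         [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1]

cutoff = best_cutoff(criteria_counts, has_hlh)       # → 4 on this toy data
sens, spec = sens_spec(criteria_counts, has_hlh, cutoff)
```

A full ROC analysis would sweep all cutoffs and plot sensitivity against 1 - specificity; the Youden maximum shown here is one common way to select a single operating point.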
An HScore cutoff of 168 yielded a sensitivity of 100% and a specificity of 94.1%, providing slightly superior diagnostic accuracy compared with the HLH-2004 criteria. Both the HLH-2004 criteria and the HScore showed good diagnostic accuracy and consequently might be used for HLH diagnosis in critically ill patients.
The study was registered with www.ClinicalTrials.gov (NCT02854943) on August 1, 2016.
Symptom checkers are digital tools assisting laypersons in self-assessing the urgency and potential causes of their medical complaints. They are widely used but face concerns from both patients and health care professionals, especially regarding their accuracy. A 2015 landmark study substantiated these concerns by using case vignettes to demonstrate that symptom checkers commonly err in their triage assessment.
This study aims to revisit the landmark index study to investigate whether and how symptom checkers' capabilities have evolved since 2015 and how they currently compare with laypersons' stand-alone triage appraisal.
In early 2020, we searched for smartphone and web-based applications providing triage advice. We evaluated these apps on the same 45 case vignettes as the index study. Using descriptive statistics, we compared our findings with those of the index study and with publicly available data on laypersons' triage capability.
We retrieved 22 symptom checkers providing triage advice. The median triage accuracy in 2020 (55.8%, IQR 15.1%) was close to that in 2015 (59.1%, IQR 15.5%). The apps in 2020 were less risk averse (odds 1.11:1, the ratio of overtriage errors to undertriage errors) than those in 2015 (odds 2.82:1), missing >40% of emergencies. Few apps outperformed laypersons in either deciding whether emergency care was required or whether self-care was sufficient. No apps outperformed the laypersons on both decisions.
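The two descriptive metrics reported above (triage accuracy and the over/undertriage error ratio as a measure of risk aversion) can be computed as sketched below. The vignette solutions and app recommendations are invented examples, not the study's data:

```python
# Triage accuracy and overtriage:undertriage odds on a three-level scale
# (0 = self-care sufficient, 1 = non-emergency care, 2 = emergency care).
# All values below are hypothetical illustrations.

gold =   [2, 2, 1, 1, 0, 0, 2, 1, 0, 2]   # vignette gold-standard triage level
advice = [2, 1, 1, 2, 0, 1, 0, 1, 0, 2]   # hypothetical app recommendations

# Share of vignettes where the app's triage level matches the gold standard.
accuracy = sum(a == g for a, g in zip(advice, gold)) / len(gold)

overtriage = sum(a > g for a, g in zip(advice, gold))   # advice too cautious
undertriage = sum(a < g for a, g in zip(advice, gold))  # advice misses urgency
odds = overtriage / undertriage  # cf. 2.82:1 in 2015 vs. 1.11:1 in 2020
```

An odds value well above 1 indicates risk-averse behavior (errors skew toward overtriage); a value near 1, as observed in 2020, means undertriage errors, including missed emergencies, occur about as often as overtriage errors.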
Triage performance of symptom checkers has, on average, not improved over the course of 5 years. It decreased in 2 use cases (advice on when emergency care is required and when no health care is needed for the moment). However, triage capability varies widely within the sample of symptom checkers. Whether it is beneficial to seek advice from symptom checkers depends on the app chosen and on the specific question to be answered. Future research should develop resources (eg, case vignette repositories) to audit the capabilities of symptom checkers continuously and independently and provide guidance on when and to whom they should be recommended.
Background Preoperative anemia and transfusion are associated with increased morbidity and mortality in cardiac surgery patients. It is unclear which of these factors plays the leading role in poor outcomes after cardiac surgery. The goal of this study was to analyze the influence of anemia of varying severity and of intraoperative transfusion on long-term survival, and to characterize their interaction in cardiac surgery patients. Methods This was an observational cohort study conducted at a German university hospital. All patients undergoing cardiac surgery between 2006 and 2011 were screened for eligibility; the duration of follow-up was 3 years. A total of 4494 patients were suitable for analysis; data on long-term survival were available for 3131 of these patients. The main outcome measure was survival at the 3-year follow-up. Length of stay and in-hospital mortality were assessed as secondary outcomes. Results Multivariate Cox regression analyses indicated that both the severity of preoperative anemia (mild anemia: hazard ratio [HR], 1.441; 95% confidence interval [CI], 1.201-1.728; severe anemia: HR, 1.805; 95% CI, 1.336-2.440) and intraoperative transfusion (HR, 1.340; 95% CI, 1.109-1.620) were associated with decreased long-term survival. Long-term survival was worse in anemic patients who received an intraoperative transfusion than in those who did not. Conclusions Both preoperative anemia and transfusion are, by themselves and in combination, associated with decreased long-term survival. When anemic patients require transfusion, our results provide evidence that the risk of death after cardiac surgery may depend to a considerable extent on the severity of preoperative anemia.
Background Significant improvements in clinical outcome can be achieved by implementing effective strategies to optimise pain management, reduce sedative exposure, and prevent and treat delirium in ICU patients. One important strategy is the monitoring of pain, agitation and delirium (PAD bundle). We hypothesised that there is no sufficient financial incentive to implement a monitoring strategy in a Diagnosis Related Group (DRG)-based reimbursement system; we therefore expected better clinical but worse economic outcomes for monitored patients. Methods This is a retrospective observational study using routinely collected data. We used univariate and multiple linear analysis, machine-learning analysis and a novel correlation statistic (maximal information coefficient) to explore the association between monitoring adherence and the resulting clinical and economic outcomes. For univariate analysis, we split patients into an adherence-achieved and an adherence-not-achieved group. Results In total, 1,323 adult patients from two campuses of a German tertiary medical centre who spent at least one day in the ICU between 1 January 2016 and 31 December 2016 were included. In univariate analysis, adherence to PAD monitoring was associated with shorter hospital LoS (e.g. pain monitoring 13 vs. 10 days; p<0.001), shorter ICU LoS and shorter duration of mechanical ventilation. Despite the improved clinical outcome, adherence to PAD elements was associated with a decreased case mix per day and profit per day in univariate analysis. Multiple linear analysis did not confirm these results. Machine-learning analysis showed that PAD monitoring is important for clinical as well as economic outcome and predicted case mix better than severity of illness. Conclusion Adherence to PAD bundles is important for both clinical and economic outcomes. In univariate analysis, it was associated with improved clinical but worse economic outcomes compared with non-adherence; this was not confirmed by multiple linear analysis. Trial registration clinicaltrials.gov NCT02265263, registered 15 October 2014.
Background The COVID-19 pandemic posed significant challenges to global health systems. Efficient public health responses required a rapid and secure collection of health data to improve the understanding of SARS-CoV-2 and examine the vaccine effectiveness (VE) and drug safety of the novel COVID-19 vaccines. Objective This study (COVID-19 study on vaccinated and unvaccinated subjects over 16 years; eCOV study) aims to (1) evaluate the real-world effectiveness of COVID-19 vaccines through a digital participatory surveillance tool and (2) assess the potential of self-reported data for monitoring key parameters of the COVID-19 pandemic in Germany. Methods Using a digital study web application, we collected self-reported data between May 1, 2021, and August 1, 2022, to assess VE, test positivity rates, COVID-19 incidence rates, and adverse events after COVID-19 vaccination. Our primary outcome measure was the VE of SARS-CoV-2 vaccines against laboratory-confirmed SARS-CoV-2 infection. The secondary outcome measures included VE against hospitalization and across different SARS-CoV-2 variants, adverse events after vaccination, and symptoms during infection. Logistic regression models adjusted for confounders were used to estimate VE 4 to 48 weeks after the primary vaccination series and after third-dose vaccination. Unvaccinated participants were compared with age- and gender-matched participants who had received 2 doses of BNT162b2 (Pfizer-BioNTech) and those who had received 3 doses of BNT162b2 and were not infected before the last vaccination. To assess the potential of self-reported digital data, the data were compared with official data from public health authorities. Results We enrolled 10,077 participants (aged ≥16 y) who contributed 44,786 tests and 5530 symptoms.
In this young, primarily female, and digitally literate cohort, VE against infections of any severity waned from 91.2% (95% CI 70.4%-97.4%) at week 4 to 37.2% (95% CI 23.5%-48.5%) at week 48 after the second dose of BNT162b2. A third dose of BNT162b2 increased VE to 67.6% (95% CI 50.3%-78.8%) after 4 weeks. The low number of reported hospitalizations limited our ability to calculate VE against hospitalization. Adverse events after vaccination were consistent with previously published research. Seven-day incidences and test positivity rates reflected the course of the pandemic in Germany when compared with official numbers from the national infectious disease surveillance system. Conclusions Our data indicate that COVID-19 vaccinations are safe and effective, and third-dose vaccinations partially restore protection against SARS-CoV-2 infection. The study showcased the successful use of a digital study web application for COVID-19 surveillance and continuous monitoring of VE in Germany, highlighting its potential to accelerate public health decision-making. Addressing biases in digital data collection is vital to ensure the accuracy and reliability of digital solutions as public health tools.
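The VE estimates above come from confounder-adjusted logistic regression; the underlying relationship between an odds ratio and vaccine effectiveness, VE = (1 - OR) × 100, can be illustrated with a crude (unadjusted) sketch. The counts below are invented, not the study's data:

```python
# Crude vaccine effectiveness from a 2x2 table via the odds ratio.
# The study itself used logistic regression adjusted for confounders;
# this unadjusted version only illustrates the VE = (1 - OR) * 100 step.

def odds_ratio(cases_vax, noncases_vax, cases_unvax, noncases_unvax):
    """Odds of infection in vaccinated divided by odds in unvaccinated."""
    return (cases_vax / noncases_vax) / (cases_unvax / noncases_unvax)

def vaccine_effectiveness(or_):
    """VE in percent; OR < 1 means the vaccine is protective."""
    return (1 - or_) * 100

# Hypothetical counts: infections among vaccinated vs. unvaccinated.
or_ = odds_ratio(cases_vax=10, noncases_vax=990,
                 cases_unvax=100, noncases_unvax=900)
ve = vaccine_effectiveness(or_)  # ≈ 90.9% on these toy counts
```

With a rare outcome, the odds ratio approximates the risk ratio, which is why this formula is commonly used in test-negative and cohort VE designs.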
Previous studies have revealed that users of symptom checkers (SCs, apps that support self-diagnosis and self-triage) are predominantly female, are younger than average, and have higher levels of formal education. Little data are available for Germany, and no study has so far compared usage patterns with people's awareness of SCs and the perception of usefulness.
We explored the sociodemographic and individual characteristics that are associated with the awareness, usage, and perceived usefulness of SCs in the German population.
We conducted a cross-sectional online survey among 1084 German residents in July 2022 regarding personal characteristics and people's awareness and usage of SCs. Using random sampling from a commercial panel, we collected participant responses stratified by gender, state of residence, income, and age to reflect the German population. We analyzed the collected data exploratively.
Of all respondents, 16.3% (177/1084) were aware of SCs and 6.5% (71/1084) had used them before. Those aware of SCs were younger (mean 38.8, SD 14.6 years, vs mean 48.3, SD 15.7 years), were more often female (107/177, 60.5%, vs 453/907, 49.9%), and had higher formal education levels (eg, 72/177, 40.7%, vs 238/907, 26.2%, with a university/college degree) than those unaware. The same observation applied to users compared to nonusers. It disappeared, however, when comparing users to nonusers who were aware of SCs. Among users, 40.8% (29/71) considered these tools useful. Those considering them useful reported higher self-efficacy (mean 4.21, SD 0.66, vs mean 3.63, SD 0.81, on a scale of 1-5) and a higher net household income (mean EUR 2591.63 [US $2798.96], SD EUR 1103.96 [US $1192.28], vs mean EUR 1626.60 [US $1756.73], SD EUR 649.05 [US $700.97]) than those who considered them not useful. More women considered SCs unhelpful (13/44, 29.5%) compared to men (4/26, 15.4%).
Concurring with studies from other countries, our findings show associations between sociodemographic characteristics and SC usage in a German sample: users were on average younger, of higher socioeconomic status, and more commonly female compared to nonusers. However, usage cannot be explained by sociodemographic differences alone. It rather seems that sociodemographics explain who is or is not aware of the technology, but those who are aware of SCs are equally likely to use them, independently of sociodemographic differences. Although in some groups (eg, people with anxiety disorder), more participants reported knowing and using SCs, they tended to perceive them as less useful. In other groups (eg, male participants), fewer respondents were aware of SCs, but those who used them perceived them to be more useful. Thus, SCs should be designed to fit specific user needs, and strategies should be developed to help reach individuals who could benefit but are not yet aware of SCs.
Increased plasma concentrations of circulating cell-free hemoglobin (CFH) are thought to contribute to the multifactorial etiology of acute kidney injury (AKI) in critically ill patients, while the CFH scavenger haptoglobin might play a protective role. We evaluated the association of CFH and haptoglobin with AKI in patients with acute respiratory distress syndrome (ARDS) requiring therapy with venovenous extracorporeal membrane oxygenation (VV ECMO).
Patients with CFH and haptoglobin measurements before initiation of ECMO therapy were identified from a cohort of 1044 ARDS patients and grouped into three CFH concentration groups using a risk stratification. The primary objective was to assess the association of CFH and haptoglobin with KDIGO stage 3 AKI. Further objectives included the identification of a target haptoglobin concentration to protect from CFH-associated AKI.
Two hundred seventy-three patients fulfilled the inclusion criteria. Of those, 154 (56.4%) had AKI at ECMO initiation. The incidence of AKI increased stepwise with increasing CFH concentrations, reaching a plateau at 15 mg/dl. Compared with patients with low (< 5 mg/dl) CFH concentrations, patients with moderate (5-14 mg/dl) and high (≥ 15 mg/dl) CFH concentrations had a three- and five-fold increased risk for AKI (adjusted odds ratio [OR] moderate vs. low, 2.69 [95% CI, 1.25-5.95], P = 0.012; OR high vs. low, 5.47 [95% CI, 2.00-15.9], P = 0.001). Among patients with increased CFH concentrations, haptoglobin plasma levels were lower in patients with AKI than in patients without AKI. A haptoglobin concentration greater than 2.7 g/l in the moderate and 2.4 g/l in the high CFH group was identified as the clinical cutoff value to protect from CFH-associated AKI (sensitivity 89.5% [95% CI, 83-96] and 90.2% [95% CI, 80-97], respectively).
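The kind of group-wise odds ratio reported above (before adjustment for confounders) can be sketched with a 2x2 table and a Woolf log-scale confidence interval. The counts are invented for illustration and do not reproduce the study's estimates:

```python
# Crude odds ratio with Woolf 95% CI from a 2x2 table, the unadjusted
# analogue of the adjusted ORs reported above. Counts are hypothetical.
import math

def or_with_ci(a, b, c, d):
    """2x2 table: a/b = outcome yes/no in exposed (e.g. high CFH),
    c/d = outcome yes/no in unexposed (e.g. low CFH)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: AKI by high vs. low CFH concentration.
or_, lo, hi = or_with_ci(a=40, b=20, c=30, d=60)  # OR = 4.0 on these counts
```

Multivariable adjustment, as performed in the study, would instead enter CFH group and the confounders into a logistic regression model; the Woolf interval shown here only covers the crude two-group comparison.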
In critically ill patients with ARDS requiring VV ECMO therapy, an increased plasma concentration of CFH was identified as an independent risk factor for AKI. Among patients with increased CFH concentrations, higher plasma haptoglobin concentrations might protect from CFH-associated AKI and should be the subject of future research.
The impact of algorithms on everyday life is ever increasing. Medicine and public health are not excluded from this development – algorithms in medicine not only challenge, change and inform research (methods) but also clinical situations. Given this development, questions arise concerning the competency level of prospective physicians, that is, medical students, on algorithm-related topics. This paper, based on a master's thesis in library and information science written at Humboldt‐Universität zu Berlin, gives an insight into this topic by presenting and analysing the results of a knowledge test conducted among medical students in Germany. F. J.
Objective(s) Evacuation of shed blood from around the heart and lungs is a critical requirement for patients in early recovery after cardiac surgery. Incomplete evacuation of shed blood can result in retained blood, which may require subsequent re-interventions to facilitate recovery. The purpose of this study was to determine the incidence of retained blood requiring re-intervention and examine the impact on outcomes. Methods Cross-sectional, observational study of all adult cardiac surgery patients between 2006 and 2013. Subjects who required an intervention to remove blood, blood clot or bloody fluid were attributed to the retained blood group. These patients were compared with those not presenting with any of the defined criteria for retained blood. Multivariate regression was performed to account for confounders. Results Of 6,909 adult patients who underwent cardiac surgery, 1,316 (19%) presented with a retained-blood-related condition. Retained blood was associated with increased in-hospital mortality (OR 4.041, 95%-CI 2.589-6.351, p<0.001), a length of stay greater than 13 days in hospital (OR 3.853, 95%-CI 2.882-5.206, p<0.001) and greater than 5 days in the ICU (OR 4.602, 95%-CI 3.449-6.183, p<0.001). Furthermore, the odds ratio for a duration of ventilation greater than 23 hours was 3.596 (95%-CI 2.690-4.851, p<0.001), and for the incidence of renal replacement therapy 4.449 (95%-CI 3.188-6.226, p<0.001). Conclusions Postoperative retained blood is a common complication and is associated with higher in-hospital mortality, longer ICU and hospital stay, and a higher incidence of renal replacement therapy. Further research is needed to validate these results and explore interventions to reduce these complications.
Data provenance refers to the origin, processing, and movement of data. Reliable and precise knowledge about data provenance has great potential to improve reproducibility as well as quality in biomedical research and, therefore, to foster good scientific practice. However, despite the increasing interest in data provenance technologies in the literature and their implementation in other disciplines, these technologies have not yet been widely adopted in biomedical research.
The aim of this scoping review was to provide a structured overview of the body of knowledge on provenance methods in biomedical research by systematizing articles covering data provenance technologies developed for or used in this application area; describing and comparing the functionalities as well as the design of the provenance technologies used; and identifying gaps in the literature, which could provide opportunities for future research on technologies that could receive more widespread adoption.
Following a methodological framework for scoping studies and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines, articles were identified by searching the PubMed, IEEE Xplore, and Web of Science databases and subsequently screened for eligibility. We included original articles covering software-based provenance management for scientific research published between 2010 and 2021. A set of data items was defined along the following five axes: publication metadata, application scope, provenance aspects covered, data representation, and functionalities. The data items were extracted from the articles, stored in a charting spreadsheet, and summarized in tables and figures.
We identified 44 original articles published between 2010 and 2021. We found that the solutions described were heterogeneous along all axes. We also identified relationships among motivations for the use of provenance information, feature sets (capture, storage, retrieval, visualization, and analysis), and implementation details such as the data models and technologies used. An important gap we identified is that only a few publications address the analysis of provenance data or use established provenance standards, such as PROV.
The heterogeneity of provenance methods, models, and implementations found in the literature points to the lack of a unified understanding of provenance concepts for biomedical data. Providing a common framework, a biomedical reference, and benchmarking data sets could foster the development of more comprehensive provenance solutions.
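The capture and retrieval features discussed above operate on records like the following minimal sketch. The identifiers are invented, and a real solution would use the W3C PROV data model and a persistent store rather than in-memory dictionaries:

```python
# Minimal PROV-style provenance sketch: "wasGeneratedBy" and "used"
# relations between entities (data artifacts) and activities (processing
# steps), plus a lineage query. All names are hypothetical examples.

records = []

def capture(entity, activity, used=None):
    """Record that `activity` generated `entity`, optionally from `used`."""
    records.append({"type": "wasGeneratedBy", "entity": entity, "activity": activity})
    if used:
        records.append({"type": "used", "activity": activity, "entity": used})

def lineage(entity):
    """Walk backwards through generation/usage links to list ancestors."""
    chain = [entity]
    while True:
        gen = next((r for r in records if r["type"] == "wasGeneratedBy"
                    and r["entity"] == chain[-1]), None)
        if gen is None:
            return chain
        use = next((r for r in records if r["type"] == "used"
                    and r["activity"] == gen["activity"]), None)
        if use is None:
            return chain
        chain.append(use["entity"])

# Hypothetical analysis pipeline: raw data -> normalization -> figure.
capture("raw_counts.csv", "sequencing_run")
capture("normalized.csv", "normalization", used="raw_counts.csv")
capture("figure2.png", "plotting", used="normalized.csv")

# lineage("figure2.png") → ["figure2.png", "normalized.csv", "raw_counts.csv"]
```

Even this toy example shows why common standards matter: without an agreed data model such as PROV, every solution defines its own record shapes and lineage semantics, which is exactly the heterogeneity the review observed.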