Blockchain Technology: Applications in Health Care
Angraal, Suveen; Krumholz, Harlan M.; Schulz, Wade L.
Circulation: Cardiovascular Quality and Outcomes, 09/2017, Volume 10, Issue 9
Journal Article
The goal of this study is to create a predictive, interpretable model of early hospital respiratory failure among emergency department (ED) patients admitted with coronavirus disease 2019 (COVID-19).
This was an observational, retrospective cohort study from a 9-ED health system of admitted adult patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and an oxygen requirement less than or equal to 6 L/min. We sought to predict respiratory failure within 24 hours of admission, defined as an oxygen requirement of greater than 10 L/min by low-flow device, high-flow device, noninvasive or invasive ventilation, or death. Predictive models were compared with the Elixhauser Comorbidity Index, the quick Sequential Sepsis-related Organ Failure Assessment, and the CURB-65 pneumonia severity score.
During the study period, from March 1 to April 27, 2020, 1,792 patients were admitted with COVID-19, 620 (35%) of whom had respiratory failure in the ED. Of the remaining 1,172 admitted patients, 144 (12.3%) met the composite endpoint within the first 24 hours of hospitalization. In the independent test cohort, both a novel bedside scoring system, the quick COVID-19 Severity Index (area under the receiver operating characteristic curve, mean 0.81; 95% confidence interval [CI] 0.73 to 0.89), and a machine-learning model, the COVID-19 Severity Index (mean 0.76; 95% CI 0.65 to 0.86), outperformed the Elixhauser mortality index (mean 0.61; 95% CI 0.51 to 0.70), CURB-65 (0.50; 95% CI 0.40 to 0.60), and the quick Sequential Sepsis-related Organ Failure Assessment (0.59; 95% CI 0.50 to 0.68). A low quick COVID-19 Severity Index score was associated with a less than 5% risk of respiratory decompensation in the validation cohort.
A significant proportion of admitted COVID-19 patients progress to respiratory failure within 24 hours of admission. These events are accurately predicted with bedside respiratory examination findings within a simple scoring system.
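The comparison of scoring systems above rests on AUROCs with bootstrapped confidence intervals. A minimal, dependency-free sketch of that kind of evaluation (synthetic labels and scores; not the study's code, and not the qCSI itself):

```python
import random

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUROC, as commonly reported."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if 0 < sum(ys) < n:  # a resample needs both classes for AUROC
            stats.append(auroc(ys, ss))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
# auroc(labels, scores) -> 0.75
```

The rank-sum form makes explicit what an AUROC of 0.81 means clinically: the probability that a randomly chosen decompensating patient scores higher than a randomly chosen stable one.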
The current acute kidney injury (AKI) risk prediction model for patients undergoing percutaneous coronary intervention (PCI) from the American College of Cardiology (ACC) National Cardiovascular Data Registry (NCDR) employed regression techniques. This study aimed to evaluate whether models using machine learning techniques could significantly improve AKI risk prediction after PCI.
We used the same cohort and candidate variables used to develop the current NCDR CathPCI Registry AKI model, including 947,091 patients who underwent PCI procedures between June 1, 2009, and June 30, 2011. The mean age of these patients was 64.8 years, and 32.8% were women, with a total of 69,826 (7.4%) AKI events. We replicated the current AKI model as the baseline model and compared it with a series of new models. Temporal validation was performed using data from 970,869 patients undergoing PCIs between July 1, 2016, and March 31, 2017, with a mean age of 65.7 years; 31.9% were women, and 72,954 (7.5%) had AKI events. Each model was derived by implementing one of two strategies for preprocessing candidate variables (preselecting and transforming candidate variables or using all candidate variables in their original forms), one of three variable-selection methods (stepwise backward selection, lasso regularization, or permutation-based selection), and one of two methods to model the relationship between variables and outcome (logistic regression or gradient descent boosting). The cohort was divided into training (70%) and test (30%) sets using 100 different random splits, and the performance of the models was evaluated internally in the test sets. The best model, according to the internal evaluation, was derived by using all available candidate variables in their original form, permutation-based variable selection, and gradient descent boosting. Compared with the baseline model that uses 11 variables, the best model used 13 variables and achieved a significantly better area under the receiver operating characteristic curve (AUC) of 0.752 (95% confidence interval [CI] 0.749-0.754) versus 0.711 (95% CI 0.708-0.714), a significantly better Brier score of 0.0617 (95% CI 0.0615-0.0618) versus 0.0636 (95% CI 0.0634-0.0638), and a better calibration slope of observed versus predicted rate of 1.008 (95% CI 0.988-1.028) versus 1.036 (95% CI 1.015-1.056).
The best model also had a significantly wider predictive range (25.3% versus 21.6%, p < 0.001) and was more accurate in stratifying AKI risk for patients. Evaluated on a more contemporary CathPCI cohort (July 1, 2015-March 31, 2017), the best model consistently achieved significantly better performance than the baseline model in AUC (0.785 versus 0.753), Brier score (0.0610 versus 0.0627), calibration slope (1.003 versus 1.062), and predictive range (29.4% versus 26.2%). The current study does not address implementation for risk calculation at the point of care, and potential challenges include the availability and accessibility of the predictors.
Machine learning techniques and data-driven approaches resulted in improved prediction of AKI risk after PCI. The results support the potential of these techniques for improving risk prediction models and identification of patients who may benefit from risk-mitigation strategies.
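The permutation-based variable selection used to derive the best model ranks candidate variables by how much a fitted model's score drops when that variable's values are shuffled, breaking its relationship with the outcome. A minimal, model-agnostic sketch of the idea (synthetic data and a stand-in scorer; not the NCDR pipeline):

```python
import random

def permutation_importance(score_fn, X, y, n_repeats=20, seed=0):
    """Mean drop in a model's score when one feature column is shuffled.
    Features whose shuffling barely hurts the score contribute little."""
    rng = random.Random(seed)
    base = score_fn(X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - score_fn(Xp, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Stand-in "fitted model": predicts the outcome from feature 0 alone,
# scored by accuracy. A real pipeline would score a boosted model's AUC.
def accuracy_of_feature0(X, y):
    return sum((row[0] > 0.5) == bool(label) for row, label in zip(X, y)) / len(y)

rng = random.Random(1)
y = [rng.randint(0, 1) for _ in range(200)]
X = [[label + rng.gauss(0, 0.1), rng.random()] for label in y]  # feature 1 is pure noise
imp = permutation_importance(accuracy_of_feature0, X, y)
# imp[0] is large (shuffling the informative feature destroys accuracy);
# imp[1] is zero (the model never uses the noise feature).
```

Because the ranking is computed on a fitted model rather than on univariate associations, this approach can keep variables whose value only emerges in combination with others, which is one reason it paired well with gradient boosting here.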
The application of artificial intelligence (AI) for automated diagnosis of electrocardiograms (ECGs) can improve care in remote settings but is limited by the reliance on infrequently available signal-based data. We report the development of a multilabel automated diagnosis model for electrocardiographic images, more suitable for broader use. A total of 2,228,236 12-lead ECG signals from 811 municipalities in Brazil were transformed into ECG images in varying lead conformations to train a convolutional neural network (CNN) identifying 6 physician-defined clinical labels spanning rhythm and conduction disorders, and a hidden label for gender. The image-based model performs well on a distinct test set validated by at least two cardiologists (average AUROC 0.99, AUPRC 0.86), an external validation set of 21,785 ECGs from Germany (average AUROC 0.97, AUPRC 0.73), and printed ECGs, with performance superior to signal-based models, and learns clinically relevant cues based on Grad-CAM. The model allows the application of AI to ECGs across broad settings.
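The core data transformation here, plotting stored waveforms as images so a CNN can read them like printed ECGs, can be sketched as follows (illustrative 3x4 tile layout, binary raster, and nearest-neighbour resampling; the authors' rendering of calibrated, anti-aliased traces is more sophisticated):

```python
import math

def rasterize_ecg(leads, height=32, width=128, rows=3, cols=4):
    """Render 12 lead signals into one 2D binary raster, tiling leads
    in the 3x4 layout familiar from printed 12-lead ECGs."""
    canvas = [[0] * (cols * width) for _ in range(rows * height)]
    for k, sig in enumerate(leads):
        r, c = divmod(k, cols)          # which tile this lead occupies
        lo, hi = min(sig), max(sig)
        span = (hi - lo) or 1.0         # avoid division by zero on flat signals
        for x in range(width):
            # nearest-neighbour resample of the signal to the tile width
            v = sig[int(x * len(sig) / width)]
            y = int((1 - (v - lo) / span) * (height - 1))  # invert: up = larger value
            canvas[r * height + y][c * width + x] = 1
    return canvas

# Twelve synthetic sinusoids stand in for real ECG traces.
leads = [[math.sin(2 * math.pi * (k + 1) * t / 100) for t in range(100)]
         for k in range(12)]
canvas = rasterize_ecg(leads)  # 96 x 512 raster, one trace per tile
```

Training on such images in varying lead conformations is what lets a single model generalize to scanned or photographed printouts where the signal itself is unavailable.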
Whether SARS-CoV-2 infection and COVID-19 vaccines confer exposure-dependent ("leaky") protection against infection remains unknown. We examined the effect of prior infection, vaccination, and hybrid immunity on infection risk among residents of Connecticut correctional facilities during periods of predominant Omicron and Delta transmission. Residents with cell, cellblock, and no documented exposure to SARS-CoV-2-infected residents were matched by facility and date. During the Omicron period, prior infection, vaccination, and hybrid immunity reduced the infection risk of residents without a documented exposure (HR: 0.36 [95% CI 0.25-0.54], 0.57 [0.42-0.78], and 0.24 [0.15-0.39], respectively) and with cellblock exposures (0.61 [0.49-0.75], 0.69 [0.58-0.83], and 0.41 [0.31-0.55], respectively) but not with cell exposures (0.89 [0.58-1.35], 0.96 [0.64-1.46], and 0.80 [0.46-1.39], respectively). Associations were similar during the Delta period and when analyses were restricted to tested residents. Although associations may not have been thoroughly adjusted due to dataset limitations, the findings suggest that protection from prior infection and vaccination may be leaky, highlighting the potential benefit of pairing vaccination with non-pharmaceutical interventions in crowded settings.
Modern artificial intelligence (AI) and machine learning (ML) methods are now capable of completing tasks with performance characteristics that are comparable to those of expert human operators. As a result, many areas throughout healthcare are incorporating these technologies, including in vitro diagnostics and, more broadly, laboratory medicine. However, literature reviewing the landscape, likely future, and challenges of applying AI/ML in laboratory medicine is limited.
In this review, we begin with a brief introduction to AI and its subfield of ML. The ensuing sections describe ML systems that are currently in clinical laboratory practice or are being proposed for such use in recent literature, ML systems that use laboratory data outside the clinical laboratory, challenges to the adoption of ML, and future opportunities for ML in laboratory medicine.
AI and ML have and will continue to influence the practice and scope of laboratory medicine dramatically. This has been made possible by advancements in modern computing and the widespread digitization of health information. These technologies are being rapidly developed and described, but in comparison, their implementation thus far has been modest. To spur the implementation of reliable and sophisticated ML-based technologies, we need to establish best practices further and improve our information system and communication infrastructure. The participation of the clinical laboratory community is essential to ensure that laboratory data are sufficiently available and incorporated conscientiously into robust, safe, and clinically effective ML-supported clinical diagnostics.
The benefit of primary and booster vaccination in people who experienced a prior Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection remains unclear. The objective of this study was to estimate the effectiveness of primary (two-dose series) and booster (third dose) mRNA vaccination against Omicron (lineage BA.1) infection among people with a prior documented infection.
We conducted a test-negative case-control study of reverse transcription PCRs (RT-PCRs) analyzed with the TaqPath (Thermo Fisher Scientific) assay and recorded in the Yale New Haven Health system from November 1, 2021, to April 30, 2022. Overall, 11,307 cases (positive TaqPath-analyzed RT-PCRs with S-gene target failure [SGTF]) and 130,041 controls (negative TaqPath-analyzed RT-PCRs) were included (median age: cases, 35 years; controls, 39 years). Among cases and controls, 5.9% and 8.1%, respectively, had a documented prior infection (positive SARS-CoV-2 test record ≥90 days before the included test). We estimated the effectiveness of primary and booster vaccination against SGTF-defined Omicron (lineage BA.1) variant infection using a logistic regression adjusted for date of test, age, sex, race/ethnicity, insurance, comorbidities, social vulnerability index, municipality, and healthcare utilization. The effectiveness of primary vaccination 14 to 149 days after the second dose was 41.0% (95% confidence interval [CI] 14.1% to 59.4%, p = 0.006) and 27.1% (95% CI 18.7% to 34.6%, p < 0.001) for people with and without a documented prior infection, respectively. The effectiveness of booster vaccination (≥14 days after the booster dose) was 47.1% (95% CI 22.4% to 63.9%, p = 0.001) and 54.1% (95% CI 49.2% to 58.4%, p < 0.001) in people with and without a documented prior infection, respectively. To test whether booster vaccination reduced the risk of infection beyond that of the primary series, we compared the odds of infection among boosted (≥14 days after the booster dose) and booster-eligible people (≥150 days after the second dose). The odds ratio (OR) comparing boosted and booster-eligible people with a documented prior infection was 0.79 (95% CI 0.54 to 1.16, p = 0.222), whereas the OR comparing boosted and booster-eligible people without a documented prior infection was 0.54 (95% CI 0.49 to 0.59, p < 0.001).
This study's limitations include the risk of residual confounding, the use of data from a single system, and the reliance on TaqPath analyzed RT-PCR results.
In this study, we observed that primary vaccination provided significant but limited protection against Omicron (lineage BA.1) infection among people with and without a documented prior infection. While booster vaccination was associated with additional protection against Omicron BA.1 infection in people without a documented prior infection, it was not found to be associated with additional protection among people with a documented prior infection. These findings support primary vaccination in people regardless of documented prior infection status but suggest that infection history may impact the relative benefit of booster doses.
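In a test-negative design such as this one, effectiveness is estimated as one minus the odds ratio comparing the odds of vaccination between cases and controls; the study additionally adjusts that OR via logistic regression for test date, age, and other covariates. An unadjusted sketch with hypothetical counts:

```python
def odds_ratio(cases_vacc, cases_unvacc, controls_vacc, controls_unvacc):
    """Unadjusted odds ratio from a 2x2 table of test-positive cases
    and test-negative controls by vaccination status."""
    return (cases_vacc / cases_unvacc) / (controls_vacc / controls_unvacc)

def vaccine_effectiveness(or_):
    """Test-negative design: VE is estimated as (1 - OR) x 100%."""
    return (1 - or_) * 100

# Hypothetical counts for illustration only (not the study's data):
# 200 vaccinated / 800 unvaccinated cases; 400 / 600 among controls.
or_est = odds_ratio(200, 800, 400, 600)       # 0.375
ve = vaccine_effectiveness(or_est)            # 62.5 (%)
```

The same (1 - OR) logic underlies the boosted-versus-booster-eligible comparison above: an OR of 0.79 that is not statistically distinct from 1 is what supports the conclusion of no detectable additional booster benefit in previously infected people.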
Real-world data (RWD) and real-world evidence (RWE) are becoming essential tools for informing regulatory decision making in health care and offer an opportunity for all stakeholders in the healthcare ecosystem to evaluate medical products throughout their lifecycle. Although considerable interest has been given to regulatory decisions supported by RWE for treatment authorization, especially in rare diseases, less attention has been given to RWD/RWE related to in vitro diagnostic (IVD) products and clinical decision support systems (CDSS). This review examines current regulatory practices in relation to IVD product development and discusses the use of CDSS in assisting clinicians to retrieve, filter, and analyze patient data in support of complex decisions regarding diagnosis and treatment. The review then explores how utilizing RWD could augment regulators' understanding of test performance, clinical outcomes, and benefit-risk profiles, and how RWD could be leveraged to augment CDSS and improve the safety, quality, and efficiency of healthcare practices. While we present examples of RWD assisting in the regulation of IVDs and CDSS, we also highlight key challenges within the current healthcare system that prevent the potential of RWE from being fully realized. These challenges include data availability, reliability, accessibility, harmonization, and interoperability, often for reasons specific to diagnostics. Finally, we review ways in which these challenges are actively being addressed and discuss how private-public collaborations and the implementation of standardized language and protocols are working toward producing more robust RWD and RWE to support regulatory decision making.