Human pluripotent stem cell-based in vitro models that reflect human physiology have the potential to reduce the number of drug failures in clinical trials and offer a cost-effective approach for assessing chemical safety. Here, human embryonic stem (ES) cell-derived neural progenitor cells, endothelial cells, mesenchymal stem cells, and microglia/macrophage precursors were combined on chemically defined polyethylene glycol hydrogels and cultured in serum-free medium to model cellular interactions within the developing brain. The precursors self-assembled into 3D neural constructs with diverse neuronal and glial populations, interconnected vascular networks, and ramified microglia. Replicate constructs were reproducible by RNA sequencing (RNA-Seq) and expressed genes associated with neurogenesis, vasculature development, and microglia. Linear support vector machines were used to construct a predictive model from RNA-Seq data for 240 neural constructs treated with 34 toxic and 26 nontoxic chemicals. The predictive model was evaluated using two standard hold-out testing methods: a nearly unbiased leave-one-out cross-validation for the 60 training compounds and an unbiased blinded trial using a single hold-out set of 10 additional chemicals. The linear support vector machine produced an estimated accuracy on future data of 0.91 in the cross-validation experiment and correctly classified 9 of 10 chemicals in the blinded trial.
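The leave-one-out procedure described above can be sketched as follows. This is an illustrative toy, not the study's pipeline: the data are synthetic two-feature points rather than RNA-Seq profiles, and a simple nearest-centroid rule stands in for the linear support vector machine.

```python
# Minimal leave-one-out cross-validation (LOOCV) sketch.
# A toy nearest-centroid classifier stands in for the linear SVM;
# the data are synthetic, not the RNA-Seq profiles from the study.

def nearest_centroid_predict(train_X, train_y, x):
    """Classify x by the closer class centroid (illustrative stand-in)."""
    centroids = {}
    for label in set(train_y):
        pts = [xi for xi, yi in zip(train_X, train_y) if yi == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

def loocv_accuracy(X, y, predict):
    """Hold out each sample once; train on the rest; score the held-out point."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += predict(train_X, train_y, X[i]) == y[i]
    return correct / len(X)

# Two well-separated synthetic "profiles" per class.
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]]
y = ["toxic", "toxic", "toxic", "nontoxic", "nontoxic", "nontoxic"]
print(loocv_accuracy(X, y, nearest_centroid_predict))  # well separated -> 1.0
```

Because each fold's model never sees its held-out sample, the averaged accuracy is a nearly unbiased estimate of performance on future compounds, which is the sense in which the 0.91 figure is reported.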
BACKGROUND:Machine learning is increasingly used for risk stratification in health care. Even an accurate predictive model does not improve outcomes if it cannot be translated into an efficacious intervention. Here we examine the potential utility of automated risk stratification and referral intervention to screen older adults for fall risk after emergency department (ED) visits.
OBJECTIVE:This study evaluated several machine learning methodologies for the creation of a risk stratification algorithm using electronic health record data and estimated the effects of a resultant intervention based on algorithm performance in test data.
METHODS:Data available at the time of ED discharge were retrospectively collected and separated into training and test datasets. Algorithms were developed to predict the outcome of a return visit for fall within 6 months of an ED index visit. Models included random forests, AdaBoost, and regression-based methods. We evaluated models both by the area under the receiver operating characteristic (ROC) curve, also referred to as area under the curve (AUC), and by projected clinical impact, estimating number needed to treat (NNT) and referrals per week for a fall risk intervention.
RESULTS:The random forest model achieved an AUC of 0.78, with slightly lower performance in regression-based models. Algorithms with similar performance by AUC nevertheless differed when placed into a clinical context, with divergent estimated NNTs in a real-world referral scenario.
CONCLUSION:The ability to translate the results of our analysis into the tradeoff between referral numbers and NNT allows decision-makers to envision the effects of a proposed intervention before implementation.
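The AUC-versus-NNT tradeoff described above can be sketched with a few lines of code. This is illustrative only: the risk scores and outcomes are synthetic, and the 38% relative risk reduction used for the NNT projection is an assumed intervention effectiveness, not a figure from the study.

```python
# Illustrative sketch: given predicted risks and observed outcomes on a
# test set, compute AUC and project the clinical tradeoff, i.e. how many
# referrals a threshold generates and the resulting NNT under an ASSUMED
# intervention effectiveness (relative risk reduction).

def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def referral_tradeoff(scores, labels, threshold, effectiveness=0.38):
    """Referrals flagged at `threshold`, and NNT = 1 / (PPV * effectiveness).
    effectiveness=0.38 is a hypothetical relative risk reduction."""
    flagged = [(s, l) for s, l in zip(scores, labels) if s >= threshold]
    ppv = sum(l for _, l in flagged) / len(flagged)
    return len(flagged), 1.0 / (ppv * effectiveness)

# Synthetic test-set scores (fall within 6 months = 1).
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(auc(scores, labels))                       # 0.75
n_ref, nnt = referral_tradeoff(scores, labels, 0.5)
print(n_ref, round(nnt, 1))                      # 4 3.5
```

Sweeping the threshold trades referrals per week against NNT: a lower threshold refers more patients at a higher NNT, which is exactly the tradeoff the abstract argues decision-makers should inspect before implementation.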
Predictive models are increasingly being developed and implemented to improve patient care across a variety of clinical scenarios. While a body of literature exists on the development of models using existing data, less focus has been placed on the practical operationalization of these models for deployment in real-time production environments. This case study describes challenges and barriers identified and overcome in such an operationalization for a model aimed at predicting the risk of outpatient falls after Emergency Department (ED) visits among older adults. Based on our experience, we provide general principles for translating an EHR-based predictive model from research and reporting environments into real-time operation.
Of the 3 million older adults seeking fall-related emergency care each year, nearly one-third visited the Emergency Department (ED) in the previous 6 months. ED providers have a great opportunity to refer patients for fall prevention services at these initial visits, but lack feasible tools for identifying those at highest risk. Existing fall screening tools have been poorly adopted because of ED staff/provider burden and lack of workflow integration. To address this, we developed an automated clinical decision support (CDS) system for identifying and referring older adult ED patients at risk of future falls.
We engaged an interdisciplinary design team (ED providers, health services researchers, information technology/predictive analytics professionals, and outpatient Falls Clinic staff) to collaboratively develop a system that successfully met user requirements and integrated seamlessly into existing ED workflows. Our rapid-cycle development and evaluation process employed a novel combination of human-centered design, implementation science, and patient experience strategies, facilitating simultaneous design of the CDS tool and intervention implementation strategies. This included defining system requirements, systematically identifying and resolving usability problems, assessing barriers and facilitators to implementation (e.g., data accessibility, lack of time, high patient volumes, appointment availability) from multiple vantage points, and refining protocols for communicating with referred patients at discharge. ED physician, nurse, and patient stakeholders were also engaged through online surveys and user testing.
Successful CDS design and implementation required integration of multiple new technologies and processes into existing workflows, necessitating interdisciplinary collaboration from the outset. By using this iterative approach, we were able to design and implement an intervention meeting all project goals. Processes used in this Clinical-IT-Research partnership can be applied to other use cases involving automated risk stratification, CDS development, and EHR-facilitated care coordination.
In recent decades, escalating healthcare costs have drawn the attention of providers and policymakers. These increased expenditures are often due to inefficiencies in patient care, a dilemma that has catalyzed new approaches to healthcare. Key among these are new avenues for leveraging electronic health record (EHR) data. In particular, applying machine learning methods to biomedical and clinical needs has shown remarkable promise. These techniques often present challenges that must be addressed, however. This dissertation discusses certain guiding principles we have gleaned from our own work applying predictive machine learning models. In aggregate, these observations can serve as guiding principles for other researchers seeking to design and implement similar models in the biomedical and healthcare domains. Moving forward, considering these observations and those gained from other applications will be important not only in advancing strictly academic work, but also in tackling the cost and efficiency concerns that currently beset healthcare in the US.
For the Western North America Mercury Synthesis, we compiled mercury records from 165 dated sediment cores from 138 natural lakes across western North America. Lake sediments are accepted as faithful recorders of historical mercury accumulation rates, and regional and sub-regional temporal and spatial trends were analyzed with descriptive and inferential statistics. Mercury accumulation rates in sediments have increased, on average, four times (4×) from 1850 to 2000 and continue to increase by approximately 0.2 μg/m² per year. Lakes with the greatest increases were influenced by the Flin Flon smelter, followed by lakes directly affected by mining and wastewater discharges. Of lakes not directly affected by point sources, there is a clear separation in mercury accumulation rates between lakes with no/little watershed development and lakes with extensive watershed development for agricultural and/or residential purposes. Lakes in the latter group exhibited a sharp increase in mercury accumulation rates with human settlement, stabilizing after 1950 at five times (5×) 1850 rates. Mercury accumulation rates in lakes with no/little watershed development were controlled primarily by relative watershed size prior to 1850, and since then have exhibited modest increases (in absolute terms and compared to those described above) associated with regional and global industrialization. A sub-regional analysis highlighted that in the Northwestern Forested Mountains ecoregion, <1% of mercury deposited to watersheds is delivered to lakes. Research is warranted to understand whether mountainous watersheds act as permanent sinks for mercury or whether export of “legacy” mercury (deposited in years past) will delay recovery when/if emissions reductions are achieved.
• We compiled Hg records from lakes across western North America.
• Hg accumulation rates increased, on average, four times from 1850 to 2000.
• Regional and global emissions of Hg to the atmosphere result in enhanced Hg deposition.
• Watershed disturbance exacerbates the problem by reducing the retention of Hg in soils.
• Hg deposition rates are highest near urban areas, where watershed disturbance is greatest.
Despite a high prevalence and association with poor outcomes, screening to identify cognitive impairment (CI) in the emergency department (ED) is uncommon. Identification of high-risk subsets of older adults is a critical challenge to expanding screening programs. We developed and evaluated an automated screening tool to identify a subset of patients at high risk for CI.
In this secondary analysis of existing data collected for a randomized controlled trial, we developed machine-learning models to identify patients at higher risk of CI using only variables available in the electronic health record (EHR). We used records from 1736 community-dwelling adults aged > 59 being discharged from three EDs. Potential CI was determined based on the Blessed Orientation Memory Concentration (BOMC) test, administered in the ED. A nested cross-validation framework was used to evaluate machine-learning algorithms, comparing the area under the receiver operating characteristic curve (AUC) as the primary metric of performance.
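The nested cross-validation framework can be illustrated schematically. This is a deliberately tiny sketch, not the study's XGBoost pipeline: the "model" is a one-feature threshold rule, so the only thing being tuned is a hypothetical hyperparameter (the threshold). The point is the data flow: inner folds pick the hyperparameter, and the outer fold, never seen during selection, supplies the performance estimate.

```python
# Nested cross-validation sketch (illustrative, not the study's pipeline).
# Inner folds select a candidate threshold; the outer fold, unseen during
# selection, scores the chosen rule, so tuning never touches the test data.

def split_folds(pairs, k):
    """Deal (x, y) pairs into k interleaved folds."""
    return [pairs[f::k] for f in range(k)]

def accuracy(thr, fold):
    return sum((x >= thr) == y for x, y in fold) / len(fold)

def nested_cv(pairs, candidate_thresholds, k=3):
    folds = split_folds(pairs, k)
    outer_scores = []
    for f in range(k):
        outer_test = folds[f]
        outer_train = [p for g in range(k) if g != f for p in folds[g]]
        inner = split_folds(outer_train, k)
        # Inner loop: pick the candidate with the best summed fold accuracy.
        best = max(candidate_thresholds,
                   key=lambda thr: sum(accuracy(thr, fold) for fold in inner))
        outer_scores.append(accuracy(best, outer_test))
    return sum(outer_scores) / k

# Synthetic data: label 1 when the feature exceeds ~0.5.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0), (0.45, 0),
        (0.6, 1), (0.7, 1), (0.8, 1), (0.9, 1)]
print(nested_cv(data, candidate_thresholds=[0.25, 0.5, 0.75]))  # separable toy data -> 1.0
```

In the real evaluation, the inner loop would tune model hyperparameters by fitting on inner-training folds, but the separation between selection data and outer test data is the same.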
Based on BOMC scores, 121 of 1736 (7%) participants screened positive for potential CI at the time of their ED visit. The best performing algorithm, an XGBoost model, predicted BOMC positivity with an AUC of 0.72. With a classification threshold of 0.4, this model had a sensitivity of 0.73, a specificity of 0.64, a negative predictive value of 0.97, and a positive predictive value of 0.13. In a hypothetical ED with 200 older adult visits per week, the use of this model would lead to a decrease in the in-person screening burden from 200 to 77 individuals in order to detect 10 of 14 patients who would fail a BOMC.
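The threshold metrics and the screening-burden arithmetic above can be reproduced directly. The confusion-matrix counts below are synthetic, chosen for a hypothetical cohort of 10,000 visits at 7% prevalence so that they match the reported sensitivity (0.73) and specificity (0.64); they are not the study's data.

```python
# Sketch of the threshold metrics and screening-burden arithmetic
# (synthetic counts chosen to mirror the reported numbers, not study data).

def confusion_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def weekly_burden(visits_per_week, prevalence, sensitivity, specificity):
    """In-person screens the model triggers, and true positives captured,
    in a hypothetical ED week."""
    positives = visits_per_week * prevalence
    negatives = visits_per_week - positives
    flagged = sensitivity * positives + (1 - specificity) * negatives
    detected = sensitivity * positives
    return flagged, detected

# Hypothetical cohort of 10,000 at 7% prevalence matching the reported
# sensitivity/specificity: PPV and NPV fall out of the prevalence.
m = confusion_metrics(tp=511, fp=3348, fn=189, tn=5952)
print(round(m["ppv"], 2), round(m["npv"], 2))   # 0.13 0.97

# A 200-visit week at 7% prevalence, sensitivity 0.73, specificity 0.64:
flagged, detected = weekly_burden(200, 0.07, 0.73, 0.64)
print(round(flagged), round(detected))          # 77 10
```

This recovers the abstract's projection: roughly 77 in-person screens per week instead of 200, capturing about 10 of the 14 patients expected to fail a BOMC.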
This study demonstrates that an algorithm based on EHR data can define a subset of patients at higher risk for CI. Incorporating such an algorithm into a screening workflow could allow screening efforts and resources to be focused where they have the most impact.