Type 2 diabetes leads to premature death and reduced quality of life for 8% of Americans. Nutrition management is critical to maintaining glycemic control, yet it is difficult to achieve due to the high individual differences in glycemic response to nutrition. Anticipating the glycemic impact of different meals can be challenging not only for individuals with diabetes, but also for expert diabetes educators. Personalized computational models that can accurately forecast the impact of a given meal on an individual's blood glucose levels can serve as the engine for a new generation of decision support tools for individuals with diabetes. However, to be useful in practice, these computational engines need to generate accurate forecasts based on limited datasets consistent with typical self-monitoring practices of individuals with type 2 diabetes. This paper uses three forecasting machines: (i) data assimilation, a technique borrowed from atmospheric physics and engineering that uses Bayesian modeling to infuse data with human knowledge represented in a mechanistic model, to generate real-time, personalized, adaptable glucose forecasts; (ii) model averaging of data assimilation output; and (iii) dynamical Gaussian process model regression. The proposed data assimilation machine, the primary focus of the paper, uses a modified dual unscented Kalman filter to estimate states and parameters, personalizing the mechanistic models. Model selection is used to choose a personalized model for the individual and their measurement characteristics. The data assimilation forecasts are empirically evaluated against actual postprandial glucose measurements captured by individuals with type 2 diabetes, and against predictions generated by experienced diabetes educators after reviewing a set of historical nutritional records and glucose measurements for the same individual. The evaluation suggests that the data assimilation forecasts compare well with specific glucose measurements and match or exceed the accuracy of expert forecasts. We conclude by examining ways to present predictions as forecast-derived range quantities and evaluate the comparative advantages of these ranges.
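The dual unscented Kalman filter described above jointly estimates the states and parameters of a mechanistic glucose model. As a rough illustration, the sketch below runs an unscented filter over an assumed one-compartment glucose model; the model form, basal glucose Gb, noise covariances, meal input, and the use of state augmentation in place of two coupled (dual) filters are all illustrative assumptions, not the paper's actual machinery.

```python
import numpy as np

# Minimal sketch of state/parameter estimation with an unscented Kalman
# filter, in the spirit of the data assimilation engine described above.
# The toy glucose model and all constants are illustrative assumptions.
# For simplicity the parameter is appended to the state (augmentation)
# rather than tracked by a second, coupled (dual) filter.

def f(x, dt, meal):
    """Toy glucose dynamics: decay toward an assumed basal Gb plus a meal input."""
    G, p = x
    Gb = 100.0                                   # assumed basal glucose (mg/dL)
    G_next = G + dt * (-p * (G - Gb) + meal)     # Euler step
    return np.array([G_next, p])                 # parameter p treated as constant

def sigma_points(mean, cov, kappa=0.5):
    """Symmetric sigma points and weights for the unscented transform."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(mean, cov, y, dt, meal, Q, R):
    """One predict/update cycle given a glucose measurement y."""
    pts, w = sigma_points(mean, cov)
    pred = np.array([f(s, dt, meal) for s in pts])           # propagate sigma points
    m_pred = w @ pred
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, pred - m_pred))
    y_pts = pred[:, 0]                                       # we observe glucose only
    y_mean = w @ y_pts
    Pyy = R + w @ (y_pts - y_mean) ** 2
    Pxy = sum(wi * d * (yi - y_mean) for wi, d, yi in zip(w, pred - m_pred, y_pts))
    K = Pxy / Pyy                                            # Kalman gain
    mean_new = m_pred + K * (y - y_mean)
    cov_new = P_pred - np.outer(K, K) * Pyy
    return mean_new, cov_new

mean = np.array([120.0, 0.05])                  # initial glucose and decay rate
cov = np.diag([25.0, 0.01])
Q, R = np.diag([1.0, 1e-5]), 4.0                # assumed process/measurement noise
for y in [150.0, 165.0, 158.0, 140.0]:          # sparse self-monitored values
    mean, cov = ukf_step(mean, cov, y, dt=5.0, meal=1.0, Q=Q, R=R)
    print(f"glucose estimate {mean[0]:6.1f}, decay parameter {mean[1]:.4f}")
```

Because the parameter rides along in the augmented state, each measurement nudges both the glucose estimate and the individual's decay rate, which is the sense in which the filter personalizes the model.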
The national adoption of electronic health records (EHR) promises to make an unprecedented amount of data available for clinical research, but the data are complex, inaccurate, and frequently missing, and the record reflects complex processes aside from the patient's physiological state. We believe that the path forward requires studying the EHR as an object of interest in itself, and that new models, learning from data, and collaboration will lead to efficient use of the valuable information currently locked in health records.
In acute ischemic stroke, thrombectomy with a stent retriever plus intravenous t-PA was more effective than t-PA in improving functional outcomes. At 90 days, 60% of patients in the intervention ...group were functionally independent, as compared with 35% in the control group.
Intravenous tissue plasminogen activator (t-PA) administered within 4.5 hours after the onset of acute ischemic stroke improves outcomes.1–3 However, intravenous t-PA has multiple constraints, including unresponsiveness of large thrombi to rapid enzymatic digestion, a narrow time window for administration, and the risk of cerebral and systemic hemorrhage. Among patients with occlusions of the intracranial internal carotid artery or the first segment of the middle cerebral artery (or both), intravenous t-PA results in early reperfusion in only 13 to 50%.4–7
Neurovascular thrombectomy is a reperfusion strategy that is distinct from pharmacologic fibrinolysis. Endovascular mechanical treatments can remove large, proximal . . .
Abstract
Electronic health record phenotyping is the use of raw electronic health record data to assert characterizations about patients. Researchers have been doing it since the beginning of biomedical informatics, under different names. Phenotyping will benefit from an increasing focus on fidelity, both in the sense of increasing richness, such as measured levels, degree or severity, timing, probability, or conceptual relationships, and in the sense of reducing bias. Research agendas should shift from merely improving binary assignment to studying and improving richer representations. The field is actively researching new temporal directions and abstract representations, including deep learning. The field would benefit from research in nonlinear dynamics, in combining mechanistic models with empirical data, including data assimilation, and in topology. The health care process produces substantial bias, and studying that bias explicitly rather than treating it as merely another source of noise would facilitate addressing it.
Background Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary—no change in properties over time.
Objective Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary.
Methods We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients.
Results We found that sequence time—that is, simply counting the number of measurements from some start—produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time produced more accurate predictions in a single Gaussian process model experiment.
Conclusions Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization.
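To make "sequence time" concrete, the following sketch contrasts per-hour and per-step rates of change on an irregularly sampled lab series; the timestamps and values are synthetic illustrations, not data from the study.

```python
import numpy as np

# Sketch of reparameterizing an irregularly sampled lab series by
# "sequence time" (the index of each measurement) versus clock time.
# Clinicians sample densely when the patient is acute, so clock-time
# rates of change are highly heteroscedastic.

clock_hours = np.array([0.0, 1.5, 2.0, 2.5, 72.0, 240.0])   # dense when acute
values      = np.array([3.1, 2.8, 2.6, 2.7, 3.6, 4.0])      # synthetic lab values

seq_time = np.arange(len(values))          # 0, 1, 2, ... one tick per measurement

dv = np.diff(values)
rate_clock = dv / np.diff(clock_hours)     # varies wildly with sampling density
rate_seq   = dv / np.diff(seq_time)        # tends to be more homogeneous

print("per-hour rates:", np.round(rate_clock, 3))
print("per-step rates:", np.round(rate_seq, 3))
```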
Objective
Within the context of a prospective randomized trial (SWIFT PRIME), we assessed whether early imaging of stroke patients, primarily with computed tomography (CT) perfusion, can estimate the size of the irreversibly injured ischemic core and the volume of critically hypoperfused tissue. We also evaluated the accuracy of ischemic core and hypoperfusion volumes for predicting infarct volume in patients with the target mismatch profile.
Methods
Baseline ischemic core and hypoperfusion volumes were assessed prior to randomized treatment with intravenous (IV) tissue plasminogen activator (tPA) alone versus IV tPA + endovascular therapy (Solitaire stent‐retriever) using RAPID automated postprocessing software. Reperfusion was assessed with angiographic Thrombolysis in Cerebral Infarction scores at the end of the procedure (endovascular group) and Tmax > 6‐second volumes at 27 hours (both groups). Infarct volume was assessed at 27 hours on noncontrast CT or magnetic resonance imaging (MRI).
Results
A total of 151 patients with baseline imaging with CT perfusion (79%) or multimodal MRI (21%) were included. The median baseline ischemic core volume was 6 ml (interquartile range = 0–16). Ischemic core volumes correlated with 27‐hour infarct volumes in patients who achieved reperfusion (r = 0.58, p < 0.0001). In patients who did not reperfuse (<10% reperfusion), baseline Tmax > 6‐second lesion volumes correlated with 27‐hour infarct volume (r = 0.78, p = 0.005). In target mismatch patients, the union of baseline core and early follow‐up Tmax > 6‐second volume (ie, predicted infarct volume) correlated with the 27‐hour infarct volume (r = 0.73, p < 0.0001); the median absolute difference between the observed and predicted volume was 13 ml.
Interpretation
Ischemic core and hypoperfusion volumes, obtained primarily from CT perfusion scans, predict 27‐hour infarct volume in acute stroke patients who were treated with reperfusion therapies. ANN NEUROL 2016;79:76–89
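As a toy illustration of the predicted-infarct-volume construction (the union of the baseline core and the early follow-up Tmax > 6 s lesion), the sketch below combines two synthetic lesion masks; the masks, voxel size, and observed volume are assumptions, not trial data.

```python
import numpy as np

# Illustrative sketch: predicted infarct volume as the union of the
# baseline ischemic core mask and the follow-up Tmax > 6 s lesion mask,
# compared against a stand-in 27-hour infarct volume. Synthetic data only.

rng = np.random.default_rng(0)
voxel_ml = 0.008                                  # assumed 2x2x2 mm voxels

core  = rng.random((40, 40, 20)) > 0.97           # baseline core mask
tmax6 = rng.random((40, 40, 20)) > 0.95           # follow-up Tmax > 6 s mask
predicted_ml = np.logical_or(core, tmax6).sum() * voxel_ml

observed_ml = predicted_ml + rng.normal(0, 2)     # stand-in 27-hour volume
print(f"predicted {predicted_ml:.1f} ml, observed {observed_ml:.1f} ml, "
      f"abs difference {abs(observed_ml - predicted_ml):.1f} ml")
```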
Abstract
Background
It would be useful to be able to assess the utility of predictive models of continuous values before clinical trials are performed.
Objective
The aim of the study is to compare metrics to assess the potential clinical utility of models that produce continuous value forecasts.
Methods
We ran a set of data assimilation forecast algorithms on time series of glucose measurements from neurological intensive care unit patients. We evaluated the forecasts using four sets of metrics: glucose root mean square (RMS) error, a set of metrics on a transformed glucose value, the estimated effect on clinical care based on an insulin guideline, and a glucose measurement error grid (Parkes grid). We assessed correlation among the metrics and created a set of factor models.
Results
The metrics generally correlated with each other, but those that estimated the effect on clinical care correlated with others the least and were generally associated with their own independent factors. The other metrics appeared to separate into those that emphasized errors in low glucose versus errors in high glucose. The Parkes grid was well correlated with the transformed glucose but not the estimation of clinical care.
Discussion
Our results indicate that we need to be careful before we assume that commonly used metrics like RMS error in raw glucose or even metrics like the Parkes grid that are designed to measure importance of differences will correlate well with actual effect on clinical care processes. A combination of metrics appeared to explain the most variance between cases. As prediction algorithms move into practice, it will be important to measure actual effects.
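Two of the metric families named in the Methods can be sketched in a few lines: RMS error on raw glucose and RMS error after a symmetrizing transform. The constants below follow Kovatchev's widely cited risk-space transform, included here as an assumption about which transform is meant; the measurements and forecasts are synthetic.

```python
import numpy as np

# Sketch of two forecast metrics: RMS error on raw glucose and RMS error
# in a symmetrized space. The transform constants are from Kovatchev's
# risk analysis (an assumption here); all values below are synthetic.

def transform(bg_mgdl):
    """Map glucose (mg/dL) to a roughly symmetric risk space."""
    return 1.509 * (np.log(bg_mgdl) ** 1.084 - 5.381)

measured = np.array([62.0, 95.0, 140.0, 210.0, 300.0])
forecast = np.array([80.0, 100.0, 150.0, 190.0, 260.0])

rms_raw = np.sqrt(np.mean((forecast - measured) ** 2))
rms_transformed = np.sqrt(np.mean((transform(forecast) - transform(measured)) ** 2))

print(f"RMS error, raw glucose:       {rms_raw:.1f} mg/dL")
print(f"RMS error, transformed space: {rms_transformed:.3f}")
# Errors at low glucose weigh more heavily in transformed space, which is
# the point of such transforms: a miss near hypoglycemia matters more.
```

This asymmetry is one reason, as the Discussion notes, that different metrics can disagree about which forecast errors matter clinically.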
Ensemble Kalman methods with constraints
Albers, David J; Blancquart, Paul-Adrien; Levine, Matthew E ...
Inverse Problems, 09/2019, Volume 35, Issue 9
Journal article. Peer reviewed. Open access.
Ensemble Kalman methods constitute an increasingly important tool in both state and parameter estimation problems. Their popularity stems from the derivative-free nature of the methodology, which may be readily applied when computer code is available for the underlying state-space dynamics (for state estimation) or for the parameter-to-observable map (for parameter estimation). There are many applications in which it is desirable to enforce prior information in the form of equality or inequality constraints on the state or parameter. This paper establishes a general framework for doing so, describing a widely applicable methodology, a theory which justifies the methodology, and a set of numerical experiments exemplifying it.
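A minimal sketch of the idea, assuming a toy linear observation model: one stochastic ensemble Kalman update followed by projection of each ensemble member onto a box constraint via clipping. The paper's framework is far more general; everything below is an illustrative assumption.

```python
import numpy as np

# Minimal sketch: one stochastic ensemble Kalman update, then projection of
# each ensemble member onto box constraints (here by clipping). Toy linear
# setup, noise levels, and clipping-as-projection are assumptions.

rng = np.random.default_rng(1)
N, n = 50, 2                                  # ensemble size, state dimension
H = np.array([[1.0, 0.0]])                    # observe first component only
R = np.array([[0.1]])                         # observation noise covariance
lo, hi = np.zeros(n), np.full(n, 5.0)         # constraint: 0 <= x <= 5

X = rng.normal([2.0, 1.0], 1.0, size=(N, n))  # forecast ensemble (one row each)
y = np.array([0.3])                           # observed value

# Standard (perturbed-observation) EnKF update from ensemble statistics.
A = X - X.mean(axis=0)
C = A.T @ A / (N - 1)                         # sample covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)  # Kalman gain
perturbed = y + rng.normal(0, np.sqrt(R[0, 0]), size=(N, 1))
X_updated = X + (perturbed - X @ H.T) @ K.T

# Enforce the prior information by projecting each member onto the box.
X_constrained = np.clip(X_updated, lo, hi)
print("pre-projection min: ", X_updated.min(axis=0).round(2))
print("post-projection min:", X_constrained.min(axis=0).round(2))
```

Clipping is the simplest possible projection; for general linear constraints one would instead solve a small quadratic program per member, which is closer in spirit to the framework the paper develops.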
To study the relation between electronic health record (EHR) variables and healthcare process events.
Lagged linear correlation was calculated between five healthcare process events and 84 EHR variables (24 clinical laboratory values and 60 clinical concepts extracted from clinical notes) in a 24-year database. The EHR variables were clustered for each healthcare process event and interpreted.
Laboratory tests tended to cluster together and note concepts tended to cluster together. Within each of those two classes, the variables clustered into clinically sensible groupings. The exact groupings varied from healthcare process event to event, with the largest differences occurring between inpatient events and outpatient events.
Unlike previously reported pairwise associations between variables, which highlighted correlations across the laboratory-clinical note divide, incorporating healthcare process events appeared to be sensitive to the manner in which the variables were collected.
We believe that it may be possible to exploit this sensitivity to help knowledge engineers select variables and correct for biases.
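The lagged linear correlation used above can be sketched as follows, assuming a synthetic binary admission signal and a lab series that responds to it; the data and lag window are invented for illustration.

```python
import numpy as np

# Sketch of lagged linear correlation between a healthcare process event
# signal and an EHR variable. The binary "admission" signal and the lab
# series that responds to it are synthetic.

rng = np.random.default_rng(2)
days = 365
admission = (rng.random(days) < 0.05).astype(float)     # process event signal
lab = np.convolve(admission, [0.2, 0.8, 1.0, 0.6], "same") + rng.normal(0, 0.1, days)

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) against y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

for lag in range(-3, 4):
    print(f"lag {lag:+d} days: r = {lagged_corr(admission, lab, lag):+.2f}")
```

Scanning the correlation across lags shows whether a variable tends to lead or follow the process event, which is what makes the clusterings sensitive to how the variables were collected.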
OBJECTIVE
Studies have independently shown associations of lower hemoglobin levels with larger admission intracerebral hemorrhage (ICH) volumes and worse outcomes. We investigated whether lower admission hemoglobin levels are associated with more hematoma expansion (HE) after ICH and whether this mediates lower hemoglobin levels' association with worse outcomes.
METHODS
Consecutive patients enrolled between 2009 and 2016 in a single-center prospective ICH cohort study with admission hemoglobin and neuroimaging data to calculate HE (>33% or >6 mL) were evaluated. The association of admission hemoglobin levels with HE and with poor clinical outcomes (modified Rankin Scale [mRS] 4–6) was assessed using separate multivariable logistic regression models. Mediation analysis investigated causal associations among hemoglobin, HE, and outcome.
RESULTS
Of 256 patients with ICH meeting inclusion criteria, 63 (25%) had HE. Lower hemoglobin levels were associated with increased odds of HE (odds ratio [OR] 0.80 per 1.0 g/dL change of hemoglobin; 95% confidence interval [CI] 0.67–0.97) after adjusting for previously identified covariates of HE (admission hematoma volume, antithrombotic medication use, symptom onset to admission CT time) and hemoglobin (age, sex). Lower hemoglobin was also associated with worse 3-month outcomes (OR 0.76 per 1.0 g/dL change of hemoglobin; 95% CI 0.62–0.94) after adjusting for ICH score. Mediation analysis revealed that the association of lower hemoglobin with poor outcomes was mediated by HE (p = 0.01).
CONCLUSIONS
Further work is required to replicate the associations of lower admission hemoglobin levels with increased odds of HE mediating worse outcomes after ICH. If confirmed, an investigation into whether hemoglobin levels can be a modifiable target of treatment to improve ICH outcomes may be warranted.
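The multivariable logistic regression described in the Methods can be sketched with synthetic data; the covariates, coefficients, and statsmodels-based fit below are illustrative assumptions, and the estimated odds ratio carries no clinical meaning.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the kind of multivariable logistic regression described above:
# hematoma expansion (HE) modeled on admission hemoglobin plus a covariate.
# All data are synthetic; the fitted odds ratio has no clinical meaning.

rng = np.random.default_rng(3)
n = 256
hemoglobin = rng.normal(13.0, 1.8, n)          # g/dL
hematoma_ml = rng.gamma(2.0, 5.0, n)           # admission hematoma volume
# Lower hemoglobin raises the (synthetic) odds of expansion.
logit = -0.25 * (hemoglobin - 13.0) + 0.03 * (hematoma_ml - 10.0) - 1.0
he = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([hemoglobin, hematoma_ml]))
fit = sm.Logit(he.astype(float), X).fit(disp=0)
or_per_gdl = np.exp(fit.params[1])             # OR per 1.0 g/dL hemoglobin
print(f"OR per 1.0 g/dL hemoglobin: {or_per_gdl:.2f}")
```

An OR below 1 per g/dL, as in the abstract, means each additional g/dL of hemoglobin is associated with lower odds of expansion, i.e., lower hemoglobin with higher odds.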