Abstract
Ensemble Kalman filters use the sample covariance of an observation and a model state variable to update a prior estimate of the state variable. The sample covariance can be suboptimal as a result of small ensemble size, model error, model nonlinearity, and other factors. The most common algorithms for dealing with these deficiencies are inflation and covariance localization. A statistical model of errors in ensemble Kalman filter sample covariances is described and leads to an algorithm that reduces ensemble filter root-mean-square error for some applications. This sampling error correction algorithm uses prior information about the distribution of the correlation between an observation and a state variable. Offline Monte Carlo simulation is used to build a lookup table that contains a correction factor between 0 and 1 depending on the ensemble size and the ensemble sample correlation. Correction factors are applied like a traditional localization for each pair of observations and state variables during an ensemble assimilation. The algorithm is applied to two low-order models and reduces the sensitivity of the ensemble assimilation error to the strength of traditional localization. When tested in perfect model experiments in a larger model, the dynamical core of a general circulation model, the sampling error correction algorithm produces analyses that are closer to the truth and also reduces sensitivity to traditional localization strength.
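The offline Monte Carlo construction of the lookup table can be sketched as follows. This is an illustrative implementation only: the uniform prior over true correlations, the bin layout, and the shrinkage form mu^2 / (mu^2 + sigma^2) per bin (from the posterior mean mu and variance sigma^2 of the true correlation) are assumptions for illustration, not the published algorithm's exact choices.

```python
import numpy as np

def build_correction_table(ens_size, n_bins=40, n_samples=50_000, seed=0):
    """Monte Carlo sketch of a sampling-error-correction lookup table.

    Assumption: true correlations are drawn from a uniform prior on [-1, 1].
    For each bin of sample correlation, the factor is mu^2 / (mu^2 + var),
    which shrinks toward 0 when the true correlation is poorly constrained.
    Returns (bin_edges, factors) with every factor in [0, 1].
    """
    rng = np.random.default_rng(seed)
    true_r = rng.uniform(-1.0, 1.0, n_samples)
    # Generate correlated ensemble pairs: y = r*x + sqrt(1 - r^2)*z
    x = rng.standard_normal((n_samples, ens_size))
    z = rng.standard_normal((n_samples, ens_size))
    y = true_r[:, None] * x + np.sqrt(1.0 - true_r**2)[:, None] * z
    xm = x - x.mean(axis=1, keepdims=True)
    ym = y - y.mean(axis=1, keepdims=True)
    sample_r = (xm * ym).sum(1) / np.sqrt((xm**2).sum(1) * (ym**2).sum(1))

    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(sample_r, edges) - 1, 0, n_bins - 1)
    factors = np.zeros(n_bins)
    for b in range(n_bins):
        t = true_r[idx == b]
        if t.size:
            mu, var = t.mean(), t.var()
            factors[b] = mu * mu / (mu * mu + var)
    return edges, factors
```

During assimilation, the factor for the bin containing the current sample correlation would multiply the regression coefficient, in the manner of a localization weight.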
The mechanisms behind the association of atrial fibrillation (AF) and dementia are unknown. One possibility is that exposure to chronic microembolism or microbleeds results in repetitive cerebral injury that manifests as cognitive decline.
The purpose of this study was to test the hypothesis that AF patients with a low percentage of time in the therapeutic range (TTR) are at higher risk for dementia due to under- or overanticoagulation.
Patients anticoagulated with warfarin (target international normalized ratio [INR] 2-3), managed by the Intermountain Healthcare Clinical Pharmacist Anticoagulation Service with no history of dementia or stroke/transient ischemic attack, were included in the study. The primary outcome was dementia incidence defined by ICD-9 codes. Percent TTR was calculated using the method of linear interpolation and stratified as >75%, 51%-75%, 26%-50%, and ≤25%. Multivariable Cox hazard regression was used to determine dementia incidence by percent TTR category.
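The linear interpolation calculation of percent TTR can be sketched as below (an illustrative implementation; the function and variable names are assumptions): the INR is assumed to vary linearly between consecutive measurements, and the fraction of person-time spent inside the therapeutic range is accumulated over each interval.

```python
from datetime import date

def percent_ttr(inr_records, low=2.0, high=3.0):
    """Percent time in therapeutic range by linear interpolation.

    inr_records: list of (measurement date, INR value) tuples.
    Between consecutive measurements the INR is assumed to change
    linearly, so the in-range fraction of each interval is the overlap
    of [low, high] with the segment's INR span.
    """
    records = sorted(inr_records)
    in_range = total = 0.0
    for (d0, v0), (d1, v1) in zip(records, records[1:]):
        days = (d1 - d0).days
        if days <= 0:
            continue
        total += days
        lo, hi = min(v0, v1), max(v0, v1)
        if lo == hi:  # flat segment: in range entirely or not at all
            in_range += days if low <= lo <= high else 0.0
        else:
            overlap = max(0.0, min(hi, high) - max(lo, low))
            in_range += days * overlap / (hi - lo)
    return 100.0 * in_range / total if total else 0.0
```

For example, an INR rising linearly from 1.0 to 3.0 over ten days spends half of that interval at or above 2.0, giving a TTR of 50%.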
A total of 2605 patients (age 73.7 ± 10.8 years; 1408 [54.0%] male) were studied. The CHADS2 score distribution was 0: 216 (8.3%); 1: 579 (22.2%); 2: 859 (33.0%); 3: 708 (27.2%); and ≥4: 243 (9.3%). The percent TTR averaged 63.1 ± 21.3, with percent INR <2.0: 25.6% ± 17.9% and percent INR >3.0: 16.2% ± 13.6%. Dementia was diagnosed in 109 patients (4.2%) (senile: 37 [1.4%]; vascular: 8 [0.3%]; Alzheimer: 64 [2.5%]). After adjustment, decreasing categories of percent TTR were associated with increased dementia risk (vs >75%): ≤25%: hazard ratio (HR) 5.34, P < .0001; 26%-50%: HR 4.10, P < .0001; and 51%-75%: HR 2.57, P = .001.
Quality of anticoagulation management, represented as percent TTR, was associated with dementia incidence among AF patients free of dementia at baseline. These data support the possibility of chronic cerebral injury as a mechanism that underlies the association of AF and dementia.
This study presents the first application of a localized particle filter (PF) for data assimilation in a high-dimensional geophysical model. Particle filters form Monte Carlo approximations of model probability densities conditioned on observations, while making no assumptions about the underlying error distribution. Unlike standard PFs, the local PF uses a localization function to reduce the influence of distant observations on state variables, which significantly decreases the number of particles required to maintain the filter’s stability. Because the local PF operates effectively using small numbers of particles, it provides a possible alternative to Gaussian filters, such as ensemble Kalman filters, for large geophysical models. In the current study, the local PF is compared with stochastic and deterministic ensemble Kalman filters using a simplified atmospheric general circulation model. The local PF is found to provide stable filtering results over yearlong data assimilation experiments using only 25 particles. The local PF also outperforms the Gaussian filters when observation networks include measurements that have non-Gaussian errors or relate nonlinearly to the model state, like remotely sensed data used frequently in atmospheric analyses. Results from this study encourage further testing of the local PF on more complex geophysical systems, such as weather prediction models.
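The idea of damping the influence of distant observations on particle weights can be sketched as follows. This is an illustrative sketch only: the Gaussian taper, the tempered log-likelihood form, and all names here are assumptions for illustration; the published local PF uses its own localization function and a different update.

```python
import numpy as np

def localized_log_weights(hx, obs, obs_err_sd, dists, loc_radius):
    """Illustrative localized particle weights for one state variable.

    hx:    (n_particles, n_obs) observed-variable values for each particle.
    obs:   (n_obs,) observation values, assumed Gaussian errors.
    dists: (n_obs,) distances from each observation to the target variable.
    Each observation's log-likelihood is damped by a factor that decays
    with distance, so remote observations barely affect the weights.
    """
    loglik = -0.5 * ((hx - obs) / obs_err_sd) ** 2   # (n_particles, n_obs)
    taper = np.exp(-(dists / loc_radius) ** 2)        # hypothetical taper
    logw = (taper * loglik).sum(axis=1)
    w = np.exp(logw - logw.max())                     # stabilize exponentials
    return w / w.sum()
```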
Abstract
Ensemble filters are used in many data assimilation applications in geophysics. Basic implementations of ensemble filters are trivial but are susceptible to errors from many sources. Model error, sampling error, and fundamental inconsistencies between the filter assumptions and reality combine to produce assimilations that are suboptimal or suffer from filter divergence. Several auxiliary algorithms have been developed to help filters tolerate these errors. For instance, covariance inflation combats the tendency of ensembles to have insufficient variance by increasing the variance during the assimilation. The amount of inflation is usually determined by trial and error. It is possible, however, to design Bayesian algorithms that determine the inflation adaptively. A spatially and temporally varying adaptive inflation algorithm is described. A normally distributed inflation random variable is associated with each element of the model state vector. Adaptive inflation is demonstrated in two low-order model experiments. In the first, the dominant error source is small ensemble sampling error. In the second, the model error is dominant. The adaptive inflation assimilations have better mean and variance estimates than other inflation methods.
Vitamin D recently has been proposed to play an important role in a broad range of organ functions, including cardiovascular (CV) health; however, the CV evidence-base is limited. We prospectively analyzed a large electronic medical records database to determine the prevalence of vitamin D deficiency and the relation of vitamin D levels to prevalent and incident CV risk factors and diseases, including mortality. The database contained 41,504 patient records with at least one measured vitamin D level. The prevalence of vitamin D deficiency (≤30 ng/ml) was 63.6%, with only minor differences by gender or age. Vitamin D deficiency was associated with highly significant (p <0.0001) increases in the prevalence of diabetes, hypertension, hyperlipidemia, and peripheral vascular disease. Also, those without risk factors but with severe deficiency had an increased likelihood of developing diabetes, hypertension, and hyperlipidemia. The vitamin D levels were also highly associated with coronary artery disease, myocardial infarction, heart failure, and stroke (all p <0.0001), as well as with incident death, heart failure, coronary artery disease/myocardial infarction (all p <0.0001), stroke (p = 0.003), and their composite (p <0.0001). In conclusion, we have confirmed a high prevalence of vitamin D deficiency in the general healthcare population and an association between vitamin D levels and prevalent and incident CV risk factors and outcomes. These observations lend strong support to the hypothesis that vitamin D might play a primary role in CV risk factors and disease. Given the ease of vitamin D measurement and replacement, prospective studies of vitamin D supplementation to prevent and treat CV disease are urgently needed.
Intermittent fasting, alternate-day fasting, and other forms of periodic caloric desistance are gaining popularity in the lay press and among animal research scientists. Whether clinical evidence exists for or is strong enough to support the use of such dietary regimens as health interventions is unclear.
This review sought to identify rigorous, clinically relevant research studies that provide high-quality evidence that therapeutic fasting regimens are clinically beneficial to humans.
A systematic review of the published literature through January 2015 was performed by using sensitive search strategies to identify randomized controlled clinical trials that evaluated the effects of fasting on either clinically relevant surrogate outcomes (e.g., weight, cholesterol) or actual clinical event endpoints (e.g., diabetes, coronary artery disease [CAD]), and any other studies that evaluated the effects of fasting on clinical event outcomes.
Three randomized controlled clinical trials of fasting in humans were identified, and the results were published in 5 articles, all of which evaluated the effects of fasting on surrogate outcomes. Improvements in weight and other risk-related outcomes were found in the 3 trials. Two observational clinical outcomes studies in humans were found in which fasting was associated with a lower prevalence of CAD or diabetes diagnosis. No randomized controlled trials of fasting for clinical outcomes were identified.
Clinical research studies of fasting with robust designs and high levels of clinical evidence are sparse in the literature. Whereas the few randomized controlled trials and observational clinical outcomes studies support the existence of a health benefit from fasting, substantial further research in humans is needed before the use of fasting as a health intervention can be recommended.
Abstract
A general framework for deterministic univariate ensemble filtering is presented. The framework fits a continuous prior probability density function (PDF) to the prior ensemble. A functional representation for the observation likelihood is combined with the prior PDF to get a continuous analysis (posterior) PDF. Cumulative distribution functions for the prior and analysis are also required. The key innovation is that an analysis ensemble is computed so that the quantile of each ensemble member is the same as its prior quantile. Many choices for the prior PDF family and the likelihood function are described. A choice of normal prior with normal likelihood is equivalent to the ensemble adjustment Kalman filter. Some other choices for the prior include gamma, inverse gamma, beta, beta prime, lognormal, and exponential distributions. Both prior distributions and likelihoods can be defined over a set of intervals giving additional flexibility that can be used to implement methods like a Huber likelihood for observations with occasional outliers. Priors and likelihoods can also be defined as sums of distributions allowing choices like bivariate normals or kernel filters. Empirical distributions, for instance piecewise linear approximations to arbitrary PDFs and functions, can be used. Another empirical choice leads to the rank histogram filter. Results here are univariate and can be used to compute increments for observed variables or marginal distributions for any variable for a reanalysis. Linear regression of increments can be used to update state variables in a serial filter to build a comprehensive data assimilation system. Part 2 will discuss other methods for extending the framework to multivariate data assimilation.
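The quantile-conserving step can be sketched for the normal-prior, normal-likelihood case, where (as the abstract notes) it reduces to the ensemble adjustment Kalman filter. This is a minimal sketch: the normal fit to the prior ensemble and the function names are illustrative choices.

```python
from statistics import NormalDist, fmean, stdev

def quantile_conserving_update(prior_ens, obs, obs_err_sd):
    """Quantile-conserving analysis for a Gaussian prior and likelihood.

    Fits a normal to the prior ensemble, forms the normal posterior by
    the standard Gaussian product, and assigns each analysis member the
    same quantile its prior member had:  x_a = F_post^{-1}(F_prior(x)).
    """
    pm, ps = fmean(prior_ens), stdev(prior_ens)
    post_var = 1.0 / (1.0 / ps**2 + 1.0 / obs_err_sd**2)
    post_mean = post_var * (pm / ps**2 + obs / obs_err_sd**2)
    prior = NormalDist(pm, ps)
    post = NormalDist(post_mean, post_var ** 0.5)
    # each member keeps its prior quantile in the analysis distribution
    return [post.inv_cdf(prior.cdf(x)) for x in prior_ens]
```

Because both distributions are normal here, the map is linear and the analysis ensemble has exactly the posterior mean and standard deviation; with a non-Gaussian prior or likelihood the same quantile-matching recipe applies with the corresponding CDFs.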
Significance Statement
Data assimilation is used to combine information from model forecasts with subsequent observations to obtain better estimates of the current state of the atmosphere or other parts of the Earth system. Ensemble data assimilation uses a number of forecasts to get more information about uncertainty. A new method allows much more flexibility in the assumptions that must be made when doing ensemble data assimilation. As an example, the method can be better for quantities that are bounded like the amount of an atmospheric trace pollutant.
Abstract
It is possible to describe many variants of ensemble Kalman filters without loss of generality as the impact of a single observation on a single state variable. For most ensemble algorithms commonly applied to Earth system models, the computation of increments for the observation variable ensemble can be treated as a separate step from computing increments for the state variable ensemble. The state variable increments are normally computed from the observation increments by linear regression using the prior bivariate ensemble of the state and observation variable. Here, a new method that replaces the standard regression with a regression using the bivariate rank statistics is described. This rank regression is expected to be most effective when the relation between a state variable and an observation is nonlinear. The performance of standard versus rank regression is compared for both linear and nonlinear forward operators (also known as observation operators) using a low-order model. Rank regression in combination with a rank histogram filter in observation space produces better analyses than standard regression for cases with nonlinear forward operators and relatively large analysis error. Standard regression, in combination with either a rank histogram filter or an ensemble Kalman filter in observation space, produces the best results in other situations.
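The standard regression step that rank regression replaces can be sketched in a few lines (an illustrative sketch; the function name is an assumption): state increments are the observation-space increments scaled by the sample regression coefficient of the prior bivariate ensemble.

```python
import numpy as np

def regress_increments(state_ens, obs_ens, obs_increments):
    """Standard linear regression of observation increments onto a state
    variable: slope = cov(state, obs) / var(obs) from the prior ensemble."""
    c = np.cov(state_ens, obs_ens, ddof=1)   # 2x2 sample covariance
    slope = c[0, 1] / c[1, 1]
    return slope * np.asarray(obs_increments)
```

For a perfectly linear relation, e.g. state = 2 * obs + 5, the slope is exactly 2 and every observation increment is doubled; the rank-regression variant would instead regress on the ranks of the two ensembles, which is what makes it robust to nonlinear forward operators.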