•Provides effective reliability analysis tools for structures with uncertain parameters.
•Predictive failure probability is estimated through a Smolyak-type algorithm.
•The distribution of the conditional reliability index is fitted with the cubic normal distribution.
•The quantile of the conditional failure probability is obtained by an explicit formula.
In this study, the assessment of structural reliability under distributional parameter uncertainty is investigated. If the uncertainties of the distribution parameters are considered in the evaluation of structural reliability, the traditional failure probability becomes a random variable, defined as the conditional probability of failure; the corresponding reliability index is defined as the conditional reliability index. For transparency of risk assessment, the Smolyak-type quadrature formula is first adopted to evaluate the expected value of the conditional probability of failure. Then, a new method that integrates the Smolyak-type quadrature formula with the cubic normal distribution is used to determine the percentile value of the conditional probability of failure, in three steps: (1) the first four statistical moments of the conditional reliability index are estimated using the Smolyak-type quadrature formula; (2) the probability density function of the conditional reliability index is fitted with the cubic normal distribution; and (3) the percentile value of the conditional probability of failure is determined from the cumulative distribution function of the conditional reliability index. Three illustrative examples verify the efficiency and accuracy of the proposed method, with Monte Carlo simulation used as the benchmark for comparison.
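The three-step percentile procedure can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: plain Monte Carlo stands in for the Smolyak-type quadrature, the limit state and all parameter values are made up, and the cubic normal fit is done with a Fleishman-type polynomial solved numerically.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical limit state g = R - S with an uncertain distribution
# parameter: the mean resistance mu_R is itself random.
def conditional_beta(mu_R):
    # beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2); numbers are illustrative
    return (mu_R - 10.0) / np.sqrt(1.0**2 + 1.5**2)

# Step 1: moments of the conditional reliability index. Plain Monte Carlo
# over the uncertain parameter (the paper uses Smolyak-type quadrature).
mu_R_samples = rng.normal(15.0, 0.5, 100_000)
beta = conditional_beta(mu_R_samples)
m, s = beta.mean(), beta.std()
z = (beta - m) / s
g1, g2 = (z**3).mean(), (z**4).mean() - 3.0   # skewness, excess kurtosis

# Step 2: cubic normal fit, beta ~ m + s*(a + b*Z + c*Z^2 + d*Z^3), Z ~ N(0,1),
# with coefficients matched to the four moments (Fleishman's equations).
def eqs(p):
    b, c, d = p
    return [b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
            2*c*(b**2 + 24*b*d + 105*d**2 + 2) - g1,
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - g2]
b, c, d = fsolve(eqs, [1.0, 0.0, 0.0])
a = -c

# Step 3: 95th percentile of the conditional failure probability
# p_f = Phi(-beta); p_f is decreasing in beta, so use beta's 5th percentile.
z05 = norm.ppf(0.05)
beta_05 = m + s * (a + b*z05 + c*z05**2 + d*z05**3)
pf_95 = norm.cdf(-beta_05)
print(pf_95)
```

Because the cubic polynomial maps standard-normal quantiles directly to quantiles of the conditional reliability index, the percentile comes from a closed-form evaluation rather than another simulation.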
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales, O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet energy dissipation obeys approximate log-normality robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction scheme. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and when inferring global energy budgets from sparse observations.
A two-piece normal measurement error model. Arellano-Valle, Reinaldo B.; Azzalini, Adelchi; Ferreira, Clécio S.
Computational Statistics & Data Analysis, April 2020, Volume 144.
Journal article, peer reviewed.
In the context of measurement error models, the true unobservable covariates are commonly assumed to have a normal distribution. This assumption is replaced here by a more flexible two-piece normal distribution, which allows for asymmetry. After setting up a general formulation for two-piece distributions, we focus on the case of the normal two-piece construction. It turns out that the joint distribution of the actual observations (the multivariate observed covariates and the response) is a two-component mixture of multivariate skew-normal distributions. This connection facilitates the construction of an EM-type algorithm for maximum likelihood estimation. Numerical experimentation with two real datasets indicates a substantial improvement of the present formulation over the classical normal-theory construction, which amply compensates for the introduction of a single extra parameter regulating skewness.
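The two-piece normal construction is easy to simulate, which makes the asymmetry mechanism concrete: each half of the density is a rescaled half-normal, and the left piece carries probability mass proportional to its scale. The sketch below is illustrative (parameter values are arbitrary) and is unrelated to the paper's EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def rtwopiece_normal(n, mu=0.0, s1=1.0, s2=2.0, rng=rng):
    """Draw from a two-piece normal: scale s1 left of mu, s2 right of mu.
    The left piece carries probability mass s1 / (s1 + s2)."""
    left = rng.random(n) < s1 / (s1 + s2)
    half = np.abs(rng.standard_normal(n))   # half-normal magnitudes |N(0,1)|
    return np.where(left, mu - s1 * half, mu + s2 * half)

x = rtwopiece_normal(200_000)
# With s2 > s1 the right tail is heavier, so the mean sits above the mode mu.
print(x.mean())
```

Setting s1 = s2 recovers the ordinary normal, so a single extra parameter governs the departure from symmetry.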
A prevailing notion in experimental psychology is that individuals’ performance in a task varies gradually, in a continuous fashion. In a Stroop task, for example, the true average effect may be 50 ms with a standard deviation of, say, 30 ms. In this case, some individuals will have effects greater than 50 ms, some will have smaller effects, and some are forecast to have effects that are negative in sign—they respond faster to incongruent items than to congruent ones! But are there people who have a true negative effect in Stroop or any other task? We highlight three qualitatively different effects: negative effects, null effects, and positive effects. The main goal of this paper is to develop models that allow researchers to explore whether all three are present in a task: Do all individuals show a positive effect? Are there individuals with truly no effect? Are there any individuals with negative effects? We develop a family of Bayesian hierarchical models that capture a variety of these constraints. We apply this approach to Stroop interference experiments and a near-liminal priming experiment in which the prime may be below or above threshold for different people. We show that in most tasks people are quite alike—for example, everyone has a positive Stroop effect, and nobody fails to Stroop or Stroops negatively. We also show a case in which, under very specific circumstances, some people could be enticed not to Stroop at all.
Covariance estimation and selection for high-dimensional multivariate datasets is a fundamental problem in modern statistics. Gaussian directed acyclic graph (DAG) models are a popular class of models used for this purpose. Gaussian DAG models introduce sparsity in the Cholesky factor of the inverse covariance matrix, and the sparsity pattern in turn corresponds to specific conditional independence assumptions on the underlying variables. A variety of priors have been developed in recent years for Bayesian inference in DAG models, yet crucial convergence and sparsity selection properties of these models have not been thoroughly investigated. Most of these priors are adaptations or generalizations of the Wishart distribution in the DAG context. In this paper, we consider a flexible and general class of these “DAG-Wishart” priors with multiple shape parameters. Under mild regularity assumptions, we establish strong graph selection consistency and posterior convergence rates for estimation when the number of variables p is allowed to grow at an appropriate subexponential rate with the sample size n.
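The correspondence between sparsity in the Cholesky-type factor and conditional independence can be seen in a tiny worked example. The three-node chain DAG below is illustrative, not taken from the paper: a missing edge in the structural coefficient matrix produces a matching zero in the precision matrix.

```python
import numpy as np

# 3-node Gaussian DAG: x1 -> x2 -> x3 (a chain), so x1 and x3 are
# conditionally independent given x2.
B = np.array([[0.0, 0.0, 0.0],   # structural coefficients: row i lists the
              [0.7, 0.0, 0.0],   # parents of x_i; B[2, 0] = 0 encodes the
              [0.0, 0.5, 0.0]])  # missing edge x1 -> x3
D = np.diag([1.0, 1.0, 1.0])     # conditional error variances

# Modified Cholesky decomposition of the precision matrix:
# Omega = (I - B)' D^{-1} (I - B)
I = np.eye(3)
Omega = (I - B).T @ np.linalg.inv(D) @ (I - B)

# The zero pattern of the lower-triangular factor (I - B) reappears in Omega:
# Omega[0, 2] == 0  <=>  x1 independent of x3 given x2.
print(Omega)
```

DAG-Wishart priors place distributions over exactly these objects: the nonzero entries of B and the diagonal of D, with the sparsity pattern fixed by the graph.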
Passivity breakdown on HDSS 2707 has been studied, and the electrochemical data are interpreted in terms of the Point Defect Model (PDM). The pitting parameters of HDSS 2707 are determined, including the polarizability at the barrier layer/solution (bl/s) interface, the defect annihilation rate, and the defect diffusion coefficient. The breakdown potential is shown to be linearly related to log[Br-], pH, and the square root of the potential scan rate (v1/2), and to follow a near-normal distribution. The critical vacancy concentration calculated from the PDM is consistent with that estimated crystallographically from the chromic barrier layer, and the dominant point defect is further confirmed by Mott-Schottky measurements.
•The pitting parameters of HDSS 2707 associated with the PDM are obtained in NaBr solution.
•The distribution of Eb for HDSS 2707 is consistent with the PDM calculation.
•Higher temperature and higher Br- concentration lead to a lower standard deviation and a better fit.
•The PDM is suitable for predicting the pitting potential of HDSS 2707 and its near-normal distribution.
Sinh-arcsinh distributions. Jones, M. C.; Pewsey, Arthur
Biometrika, December 2009, Volume 96, Issue 4.
Journal article, peer reviewed, open access.
We introduce the sinh-arcsinh transformation and hence, by applying it to a generating distribution with no parameters other than location and scale, usually the normal, a new family of sinh-arcsinh distributions. This four-parameter family has symmetric and skewed members and allows for tailweights that are both heavier and lighter than those of the generating distribution. The central place of the normal distribution in this family affords likelihood ratio tests of normality that are superior to the state-of-the-art in normality testing because of the range of alternatives against which they are very powerful. Likelihood ratio tests of symmetry are also available and are very successful. Three-parameter symmetric and asymmetric subfamilies of the full family are also of interest. Heavy-tailed symmetric sinh-arcsinh distributions behave like Johnson SU distributions, while their light-tailed counterparts behave like sinh-normal distributions, the sinh-arcsinh family allowing a seamless transition between the two, via the normal, controlled by a single parameter. The sinh-arcsinh family is very tractable and many properties are explored. Likelihood inference is pursued, including an attractive reparameterization. Illustrative examples are given. A multivariate version is considered. Options and extensions are discussed.
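Random variates from a sinh-arcsinh distribution are obtained by inverting the defining transformation applied to standard normal draws. A minimal sketch (parameter values are illustrative): eps controls skewness, delta controls tailweight, and eps = 0, delta = 1 recovers the normal itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def rsinh_arcsinh(n, eps=0.0, delta=1.0, rng=rng):
    """If S(X) = sinh(delta*arcsinh(X) - eps) ~ N(0,1), then X is
    sinh-arcsinh distributed; invert S to transform normal draws."""
    z = rng.standard_normal(n)
    return np.sinh((np.arcsinh(z) + eps) / delta)

x0 = rsinh_arcsinh(100_000)                  # eps=0, delta=1: standard normal
x_skew = rsinh_arcsinh(100_000, eps=0.8)     # eps > 0: positive skewness
x_heavy = rsinh_arcsinh(100_000, delta=0.5)  # delta < 1: heavier tails
print(x0.mean(), x_skew.mean())
```

Because a single monotone map carries the normal to every member of the family, varying delta through 1 gives the seamless heavy-to-light tail transition described in the abstract.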
A large body of literature addresses multicollinearity among the explanatory variables, often with the aim of reducing its undesirable effects on the maximum likelihood estimate (MLE). In particular, many authors have discussed ridge estimation (RE) under the framework of the mean regression model, because the RE enjoys the advantage that its mean squared error (MSE) is less than that of the MLE. However, most existing methods assume, or are applicable only to, symmetric data. In this paper, we consider skewed (or asymmetric) data, which often occur in practice and include symmetric data as a special case, and derive the RE of the skew-normal mode regression model under multicollinearity. A maximum likelihood method via an EM algorithm and eleven ridge parameter selection methods are investigated. Monte Carlo simulation results indicate that the suggested estimator performs better than the MLE in terms of MSE. The proposed methods are then illustrated with a real data analysis.
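For intuition, the classical ridge estimator for a linear model under multicollinearity can be sketched as follows. This is the standard construction, not the paper's skew-normal mode regression ridge, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Highly collinear design: x2 is x1 plus a little noise.
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.5 * rng.standard_normal(n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 recovers least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)    # unstable: near-singular X'X inflates variance
b_ridge = ridge(X, y, 1.0)  # shrunk toward zero, much smaller variance
print(b_ols, b_ridge)
```

The small bias introduced by k is traded against the large variance reduction, which is why the ridge MSE can beat the MLE's; the "eleven ridge parameter selection methods" in the abstract are different rules for choosing k.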
This article proposes an imputation procedure that uses the factors estimated from a tall block along with the re-rotated loadings estimated from a wide block to impute missing values in a panel of data. Assuming that a strong factor structure holds for the full panel of data and its sub-blocks, it is shown that the common component can be consistently estimated at four different rates of convergence without requiring regularization or iteration. An asymptotic analysis of the estimation error is obtained. An application of our analysis is estimation of counterfactuals when potential outcomes have a factor structure. We study the estimation of average and individual treatment effects on the treated and establish a normal distribution theory that can be useful for hypothesis testing.
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base-model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200–500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.