Active learning methods have recently surged in the literature due to their ability to solve complex structural reliability problems at an affordable computational cost. These methods rely on adaptively building an inexpensive surrogate of the original limit-state function. Examples of such surrogates include Gaussian process models, which have been adopted in many contributions, the most popular ones being the efficient global reliability analysis (EGRA) and the active Kriging Monte Carlo simulation (AK-MCS), two milestone contributions in the field. In this paper, we first conduct a survey of the recent literature, showing that most of the proposed methods stem from modifying one or more aspects of these two approaches. We then propose a generalized modular framework for building efficient active learning strategies on the fly by combining four ingredients or modules: a surrogate model, a reliability estimation algorithm, a learning function and a stopping criterion. Using this framework, we devise 39 strategies and apply them to 20 reliability benchmark problems. The results of this extensive benchmark (more than 12,000 reliability problems solved) are analyzed under various criteria, leading to a synthesized set of recommendations for practitioners. These may be refined with a priori knowledge about the features of the problem at hand, i.e. its dimensionality and the magnitude of the failure probability. The benchmark eventually highlights the importance of using surrogates in conjunction with sophisticated reliability estimation algorithms as a way to enhance the efficiency of the latter.
• Survey of active learning reliability methods, summarized into a general framework.
• Benchmark with 20 structural reliability problems and 39 active learning strategies.
• The combination of PC-Kriging and subset simulation generally leads to the best performance.
• Surrogate models in an overkill setup outperform direct (no-surrogate) approaches.
• Surrogate models should be used to fully harness the potential of reliability methods.
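To make the modular framework concrete, the following minimal sketch wires the four modules together in their simplest form: a Gaussian process surrogate (scikit-learn standing in for a dedicated Kriging or PC-Kriging implementation), crude Monte Carlo in place of subset simulation, the classical U learning function of AK-MCS, and the usual min U ≥ 2 stopping criterion. The limit-state function g, the sample sizes and the thresholds are purely illustrative.

```python
# Minimal AK-MCS-style active learning loop (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g(x):                                     # hypothetical limit state: failure if g <= 0
    return 3.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(0)
X_mc = rng.standard_normal((10_000, 2))       # module 2: Monte Carlo population
idx = rng.choice(len(X_mc), size=12, replace=False)
X_doe, y_doe = X_mc[idx], g(X_mc[idx])        # initial experimental design

for _ in range(100):
    # module 1: surrogate model (Gaussian process regression)
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_doe, y_doe)
    mu, sigma = gp.predict(X_mc, return_std=True)
    # module 3: U learning function (probability of sign misclassification)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)
    # module 4: stopping criterion
    if U.min() >= 2.0:
        break
    x_new = X_mc[np.argmin(U)]                # enrich where the sign is most ambiguous
    X_doe = np.vstack([X_doe, x_new])
    y_doe = np.append(y_doe, g(x_new[None, :]))

pf = np.mean(mu <= 0)                         # failure probability from the surrogate
print(f"Pf ~ {pf:.4f} after {len(X_doe)} limit-state evaluations")
```

Swapping any module (e.g. subset simulation for the Monte Carlo population, or the expected feasibility function of EGRA for U) yields another strategy of the same family, which is precisely the combinatorial space the benchmark explores.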
Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive (i.e. of Galerkin type) or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate.
To address such problems, this paper describes a non-intrusive method that builds a sparse PC expansion. An adaptive regression-based algorithm is proposed for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to ensure the well-posedness of the various regression problems. The accuracy of the PC model is checked using classical tools of statistical learning theory (e.g. leave-one-out cross-validation). As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical “full” PC approximation. The convergence of the algorithm is shown on an academic example. Then the method is illustrated on two stochastic finite element problems, namely a truss and a frame structure involving 10 and 21 input random variables, respectively.
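The sketch below illustrates the regression building blocks involved (not the adaptive basis-selection algorithm of the paper itself): a full total-degree PC basis in probabilists' Hermite polynomials for standard Gaussian inputs, least-squares estimation of the coefficients, and the leave-one-out error computed from the leverages of the regression. The toy model and all sizes are made up.

```python
# Regression-based PC expansion with leave-one-out error (illustrative sketch).
import itertools
from math import factorial, sqrt
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pc_basis(X, p):
    """Orthonormal total-degree-p Hermite PC basis evaluated at X (n x d)."""
    d = X.shape[1]
    alphas = [a for a in itertools.product(range(p + 1), repeat=d) if sum(a) <= p]
    cols = []
    for a in alphas:
        col = np.ones(len(X))
        for j, deg in enumerate(a):
            c = np.zeros(deg + 1); c[deg] = 1.0       # coefficients selecting He_deg
            col *= hermeval(X[:, j], c) / sqrt(factorial(deg))
        cols.append(col)
    return np.column_stack(cols)

f = lambda X: X[:, 0]**2 + 0.5 * X[:, 0] * X[:, 1]    # toy model
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)); y = f(X)

A = pc_basis(X, p=3)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares coefficients
h = np.einsum('ij,ji->i', A, np.linalg.pinv(A))       # leverages (diagonal of the hat matrix)
e_loo = np.mean(((y - A @ coef) / (1.0 - h))**2) / np.var(y)
print(f"{A.shape[1]} basis terms, relative LOO error = {e_loo:.2e}")
```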
In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are compared with the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model.
• A new method is proposed for the global sensitivity analysis of high-dimensional models.
• Low-rank tensor approximations (LRA) are used as a meta-modeling technique.
• Analytical formulas for the Sobol' indices in terms of the LRA coefficients are derived.
• The accuracy and efficiency of the approach are illustrated in application examples.
• LRA-based indices are compared to indices based on polynomial chaos expansions.
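For context, the Monte Carlo reference indices mentioned above can be obtained with the classical pick-freeze scheme; the sketch below uses Jansen's estimators on an Ishigami-type toy model (the model and the sample size are illustrative, not taken from the paper).

```python
# Pick-freeze Monte Carlo estimation of Sobol' indices (Jansen's estimators).
import numpy as np

def sobol_mc(f, d, n, rng):
    A = rng.random((n, d)); B = rng.random((n, d))    # two independent designs
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]           # replace only column i
        yABi = f(ABi)
        S[i] = (var - 0.5 * np.mean((yB - yABi)**2)) / var   # first-order index
        ST[i] = 0.5 * np.mean((yA - yABi)**2) / var          # total index
    return S, ST

# Ishigami function mapped from the unit hypercube to [-pi, pi]^3
f = lambda U: (np.sin(np.pi * (2*U[:, 0] - 1))
               + 7 * np.sin(np.pi * (2*U[:, 1] - 1))**2
               + 0.1 * (np.pi * (2*U[:, 2] - 1))**4 * np.sin(np.pi * (2*U[:, 0] - 1)))
S, ST = sobol_mc(f, d=3, n=100_000, rng=np.random.default_rng(0))
print("first-order:", S.round(3), "total:", ST.round(3))
```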
In modern engineering, computer simulations are a popular tool to analyse, design, and optimize systems. Furthermore, concepts of uncertainty and the related reliability analysis and robust design are of increasing importance. Hence, the efficient quantification of uncertainty is an important aspect of the engineer's workflow. In this context, the characterization of uncertainty in the input variables is crucial. In this paper, input variables are modelled by probability-boxes (p-boxes), which account for both aleatory and epistemic uncertainty. Two types of probability-boxes are distinguished: free and parametric (also called distributional) p-boxes. The use of probability-boxes generally increases the complexity of structural reliability analyses compared to traditional probabilistic input models. In this paper, the complexity is handled by two-level approaches which use Kriging meta-models with adaptive experimental designs at different levels of the structural reliability analysis. For both types of probability-boxes, the extensive use of meta-models allows for an efficient estimation of the failure probability with only a limited number of runs of the performance function. The capabilities of the proposed approaches are illustrated through a benchmark analytical function and two realistic engineering problems.
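The two-level structure can be illustrated on a parametric p-box: the outer level ranges over the interval-valued distribution parameters, and the inner level estimates the corresponding failure probability, whose extremes bound the true value. In the sketch below, plain Monte Carlo and a coarse parameter grid stand in for the adaptive Kriging meta-models of the paper; the limit state and the parameter intervals are hypothetical.

```python
# Two-level bounding of the failure probability for a parametric p-box (sketch).
import numpy as np

g = lambda x: 5.0 - x.sum(axis=1)                     # hypothetical limit state
rng = np.random.default_rng(0)

def pf(mu, sigma, n=200_000):
    """Inner level: Monte Carlo failure probability for fixed parameters."""
    X = rng.normal(mu, sigma, size=(n, 2))
    return np.mean(g(X) <= 0.0)

# Outer level: sweep the p-box parameter intervals mu in [0, 0.5],
# sigma in [0.9, 1.1] (applied to both inputs for simplicity).
pfs = [pf(mu, s) for mu in np.linspace(0.0, 0.5, 6)
                 for s in np.linspace(0.9, 1.1, 5)]
print(f"failure probability bounds: [{min(pfs):.2e}, {max(pfs):.2e}]")
```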
• A resampled polynomial chaos expansion is developed to reduce the modeling variance with small experimental designs.
• Various sparse polynomial chaos expansions are built from resampled experimental designs.
• The basis functions are ranked according to their selection frequency across the various PCEs.
• Cross-validation is used to further rank basis functions with the same selection frequency.
• The proposed method achieves better accuracy than standard sparse PCE.
In surrogate modeling, polynomial chaos expansion (PCE) is widely used to represent random model responses that are computationally expensive and usually obtained by deterministic numerical modeling approaches, including finite-element and finite-difference time-domain methods. Recently, efforts have been made to improve the prediction performance of PCE-based models and the building efficiency by selecting only the influential basis polynomials (e.g., via the approach of least angle regression). This paper proposes an approach, named resampled PCE (rPCE), to further optimize the selection by making use of the knowledge that the true model is fixed despite the statistical uncertainty inherent in sampling during training. By simulating data variation via resampling (k-fold division is used here) and collecting the selected polynomials over all resamples, polynomials are ranked mainly according to their selection frequency. The choice of resampling scheme (here, the value of k) has a strong influence, and various configurations are considered and compared. The proposed resampled PCE is implemented with two popular selection techniques, namely least angle regression and orthogonal matching pursuit, and a combination thereof. The performance of the proposed algorithm is demonstrated on two analytical examples, a benchmark problem in structural mechanics, and a realistic case study in computational dosimetry.
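The core resampling idea can be sketched as follows: run a sparse solver (here scikit-learn's LARS, one of the two selection techniques considered in the paper) on each of k folds of the experimental design and rank the basis functions by how often they are selected. A plain polynomial feature matrix stands in for the orthonormal PC basis, and the model, sizes and value of k are illustrative.

```python
# Selection-frequency ranking of basis terms via k-fold resampling (sketch).
import numpy as np
from sklearn.linear_model import Lars
from sklearn.model_selection import KFold
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 3))
y = X[:, 0]**3 + 2.0 * X[:, 1] * X[:, 2]              # toy model
A = PolynomialFeatures(degree=4, include_bias=False).fit_transform(X)

k = 5
counts = np.zeros(A.shape[1])
for train, _ in KFold(n_splits=k, shuffle=True, random_state=0).split(A):
    lars = Lars(n_nonzero_coefs=8).fit(A[train], y[train])
    counts[np.abs(lars.coef_) > 0] += 1               # terms picked on this fold

freq = counts / k                                     # selection frequency per term
print("terms ranked by selection frequency:", np.argsort(freq)[::-1][:8])
```

In the full method, ties in the selection frequency are further resolved by cross-validation, as noted in the highlights above.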
Global sensitivity analysis aims at quantifying the relative importance of uncertain input variables onto the response of a mathematical model of a physical system. ANOVA-based indices such as the Sobol’ indices are well-known in this context. These indices are usually computed by direct Monte Carlo or quasi-Monte Carlo simulation, which may prove hardly applicable for computationally demanding industrial models. In the present paper, sparse polynomial chaos (PC) expansions are introduced in order to compute the sensitivity indices. An adaptive algorithm allows the analyst to build up a PC-based metamodel that only contains the significant terms, while the PC coefficients are computed by least-squares regression using a computer experimental design. The accuracy of the metamodel is assessed by leave-one-out cross-validation. Due to the genuine orthogonality properties of the PC basis, ANOVA-based sensitivity indices are post-processed analytically. This paper also develops a bootstrap technique which eventually yields confidence intervals on the results. The approach is illustrated on various application examples up to 21 stochastic dimensions. Accurate results are obtained at a computational cost two to three orders of magnitude smaller than that associated with Monte Carlo simulation.
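The bootstrap mentioned above can be sketched directly: refit the expansion on resampled experimental designs and read confidence intervals off the spread of the recomputed indices. The sketch below uses an orthonormal Hermite basis for standard Gaussian inputs and computes the first-order index of the first variable from the squared coefficients (the general post-processing formulas appear in a later sketch); the model, degree and sizes are illustrative.

```python
# Bootstrap confidence interval on a PCE-based Sobol' index (sketch).
import itertools
from math import factorial, sqrt
import numpy as np
from numpy.polynomial.hermite_e import hermeval

d, p = 2, 4
alphas = np.array([a for a in itertools.product(range(p + 1), repeat=d)
                   if sum(a) <= p])

def basis(X):
    """Orthonormal Hermite basis evaluated at X (n x d)."""
    cols = [np.prod([hermeval(X[:, j], np.eye(deg + 1)[deg]) / sqrt(factorial(deg))
                     for j, deg in enumerate(a)], axis=0) for a in alphas]
    return np.column_stack(cols)

def S1(A, y):
    """First-order Sobol' index of x1 from the PC coefficients."""
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    var = np.sum(c[alphas.sum(axis=1) > 0]**2)        # total variance
    only_1 = (alphas[:, 0] > 0) & (alphas[:, 1] == 0)
    return np.sum(c[only_1]**2) / var

rng = np.random.default_rng(0)
X = rng.standard_normal((150, d)); y = X[:, 0]**2 + 0.3 * X[:, 1]
A = basis(X)
boot = [S1(A[i], y[i]) for i in
        (rng.integers(0, 150, 150) for _ in range(500))]  # resample design rows
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S1 = {S1(A, y):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```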
We present a new power spectrum emulator named EuclidEmulator that estimates the nonlinear correction to the linear dark matter power spectrum as a function of the six cosmological parameters ω_b, ω_m, n_s, h, w_0, and σ_8. It is constructed with the uncertainty quantification software UQLab, using a spectral decomposition method known as polynomial chaos expansion. All steps in its construction have been tested and optimized: the large high-resolution N-body simulations carried out with PKDGRAV3 were validated using a simulation from the Euclid Flagship campaign and demonstrated to have converged up to wavenumbers k ≈ 5 h Mpc⁻¹ for redshifts z ≤ 5. The emulator is based on 100 input cosmologies simulated in boxes of (1250 Mpc/h)³ using 2048³ particles. We show that by creating mock emulators it is possible to successfully predict and optimize the performance of the final emulator prior to performing any N-body simulations. The absolute accuracy of the final nonlinear power spectrum is as good as that obtained with N-body simulations, conservatively ~1 per cent for k ≲ 1 h Mpc⁻¹ and z ≲ 1. This enables efficient forward modelling in the nonlinear regime, allowing for the estimation of cosmological parameters using Markov chain Monte Carlo methods. EuclidEmulator has been compared to HALOFIT, CosmicEmu, and NGenHalofit, and shown to be more accurate than these other approaches. This work paves the way for the optimal construction of future emulators that also consider other cosmological observables, use higher-resolution input simulations, and investigate higher-dimensional cosmological parameter spaces.
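The mock-emulator idea, i.e. predicting the final emulator's performance before any expensive simulation is run, can be sketched with a cheap stand-in model: emulate it over the parameter box for several design sizes and inspect how the error scales. The stand-in function, parameter ranges and the simple polynomial regression (replacing the UQLab PCE machinery) are all illustrative.

```python
# Mock-emulator convergence study with a cheap stand-in model (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

f = lambda P: np.exp(-P[:, 0]) * (1 + P[:, 1]**2) + 0.1 * P[:, 2]  # cheap mock model
rng = np.random.default_rng(0)
P_test = rng.random((2000, 3)); y_test = f(P_test)    # dense validation set

for n in (25, 50, 100):                               # candidate design sizes
    P = rng.random((n, 3)); y = f(P)
    emu = make_pipeline(PolynomialFeatures(4), LinearRegression()).fit(P, y)
    err = np.max(np.abs(emu.predict(P_test) - y_test) / np.abs(y_test))
    print(f"n = {n:3d} design points: max relative error = {err:.2e}")
```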
Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive (i.e. of Galerkin type) or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate.
To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical “full” PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30–500 random variables, respectively.
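The hyperbolic truncation can be stated in a few lines: among all multi-indices of maximum degree p, keep those whose q-quasi-norm does not exceed p, which favours main effects and low-order interactions over high-order interaction terms. The values of d, p and q below are illustrative.

```python
# Hyperbolic (q-norm) truncation of a PC index set (sketch).
import itertools

def hyperbolic_index_set(d, p, q):
    """Multi-indices alpha with (sum_i alpha_i**q)**(1/q) <= p."""
    return [a for a in itertools.product(range(p + 1), repeat=d)
            if sum(ai**q for ai in a)**(1.0 / q) <= p + 1e-12]

full = hyperbolic_index_set(d=5, p=5, q=1.0)   # q = 1 recovers the total-degree basis
hyp = hyperbolic_index_set(d=5, p=5, q=0.5)    # q < 1 prunes interaction terms
print(f"total-degree terms: {len(full)}, hyperbolic (q = 0.5) terms: {len(hyp)}")
```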
Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol’ indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol’ indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non-intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is two to three orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol’ indices.
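The analytical post-processing rests on the orthonormality of the PC basis: the model variance is the sum of the squared non-constant coefficients, and each Sobol' index is a partial sum over the corresponding multi-indices. The sketch below implements these partial sums for generic first-order and total indices; the multi-indices and coefficients in the usage example are hypothetical.

```python
# Sobol' indices as partial sums of squared PC coefficients (sketch).
import numpy as np

def pce_sobol(alphas, coef):
    """First-order and total Sobol' indices from an orthonormal PC expansion."""
    alphas = np.asarray(alphas); coef = np.asarray(coef)
    nonconst = alphas.sum(axis=1) > 0
    var = np.sum(coef[nonconst]**2)                   # total model variance
    d = alphas.shape[1]
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        others = np.delete(np.arange(d), i)
        only_i = (alphas[:, i] > 0) & (alphas[:, others].sum(axis=1) == 0)
        S[i] = np.sum(coef[only_i]**2) / var          # terms in x_i alone
        ST[i] = np.sum(coef[alphas[:, i] > 0]**2) / var  # all terms involving x_i
    return S, ST

# usage with hypothetical multi-indices and coefficients for d = 2
alphas = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]
S, ST = pce_sobol(alphas, [1.0, 0.8, 0.5, 0.2, 0.3])
print("first-order:", S.round(3), "total:", ST.round(3))
```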