In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE on the parameters in a polydisc of the complex space C^N. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates.
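A weighted (anisotropic) total degree space as mentioned in this abstract is spanned by polynomials indexed by multi-indices α satisfying Σ_n g_n α_n ≤ w. The sketch below enumerates such an index set; the weights and budget are illustrative choices, not values from the paper:

```python
import itertools

def weighted_td_set(weights, budget):
    """Enumerate multi-indices alpha with sum_n weights[n] * alpha[n] <= budget.

    weights : per-dimension positive weights g_n (a larger weight means the
              corresponding direction is allotted fewer polynomial degrees,
              which encodes the anisotropy).
    budget  : total-degree budget w.
    """
    # Per-dimension maximum degree implied by the budget.
    caps = [int(budget // g) for g in weights]
    indices = []
    for alpha in itertools.product(*(range(c + 1) for c in caps)):
        if sum(g * a for g, a in zip(weights, alpha)) <= budget:
            indices.append(alpha)
    return indices

# Isotropic case, N = 2, budget 3: the standard total degree space of order 3.
iso = weighted_td_set([1.0, 1.0], 3.0)
# Anisotropic case: the second variable is weighted more heavily,
# so it receives fewer degrees within the same budget.
aniso = weighted_td_set([1.0, 2.0], 3.0)
```

The isotropic set has the familiar cardinality C(w + N, N) = 10 for N = 2, w = 3, while the anisotropic weights prune it to 6 indices.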
This paper is concerned with the polynomial filtering problem for a class of nonlinear systems with quantisations and missing measurements. The nonlinear functions are approximated with polynomials of a chosen degree, and the approximation errors are described as low-order polynomial terms with norm-bounded coefficients. The transmitted outputs are quantised by a logarithmic quantiser and are also subject to randomly missing measurements governed by a Bernoulli distributed sequence taking values in {0, 1}. Dedicated efforts are made to derive an upper bound of the filtering error covariance in the simultaneous presence of the polynomial approximation errors, the quantisations, and the missing measurements at each time instant. Such an upper bound is then minimised by designing a suitable filter gain through the solution of a set of matrix equations. The filter design algorithm is recursive and therefore applicable for online computation. An illustrative example is exploited to show the effectiveness of the proposed algorithm.
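To give a flavour of the kind of recursion involved, the sketch below iterates the modified Riccati equation familiar from the intermittent-observations literature, which upper-bounds the expected error covariance when measurements arrive according to a Bernoulli(γ̄) sequence. This is only an illustration of the covariance-bound-plus-gain pattern; the paper's actual bound additionally accounts for polynomial approximation errors and quantisation, which are omitted here:

```python
import numpy as np

def missing_measurement_riccati(A, C, Q, R, gamma_bar, P0, steps=200):
    """Iterate the modified Riccati recursion

        P_{k+1} = A P A^T + Q - gamma_bar * A P C^T (C P C^T + R)^{-1} C P A^T,

    an upper bound on the expected filtering error covariance when the
    measurement at each step arrives with probability gamma_bar.
    The associated gain is K = A P C^T (C P C^T + R)^{-1}.
    """
    P = P0
    for _ in range(steps):
        S = C @ P @ C.T + R
        K = A @ P @ C.T @ np.linalg.inv(S)
        P = A @ P @ A.T + Q - gamma_bar * K @ C @ P @ A.T
    return P, K

# Scalar example (hypothetical numbers): a stable plant with process and
# measurement noise, compared with and without missing measurements.
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.2]])
P_full, _ = missing_measurement_riccati(A, C, Q, R, 1.0, np.eye(1))
P_miss, _ = missing_measurement_riccati(A, C, Q, R, 0.5, np.eye(1))
```

With γ̄ = 1 the recursion reduces to the standard Riccati iteration; lowering γ̄ inflates the steady-state covariance bound, reflecting the information lost to dropped measurements.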
This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using L^q norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
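The gap between full tensor grids and Smolyak sparse grids that motivates this abstract can be made concrete with a small sketch. Below, a sparse grid is built as the union of small tensor grids over multi-indices of bounded level, using nested Clenshaw-Curtis nodes; the dimension and level are illustrative choices, not values from the paper:

```python
import itertools
import numpy as np

def cc_nodes(level):
    """Nested Clenshaw-Curtis nodes on [-1, 1]: 1 point at level 0,
    2**level + 1 points at higher levels."""
    if level == 0:
        return np.array([0.0])
    m = 2 ** level + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def smolyak_grid(dim, w):
    """Smolyak construction: union of tensor grids over all level
    multi-indices l with |l|_1 <= w. Nestedness of the 1-D rules makes
    many points coincide, which the set deduplicates."""
    pts = set()
    for l in itertools.product(range(w + 1), repeat=dim):
        if sum(l) <= w:
            for x in itertools.product(*(cc_nodes(li) for li in l)):
                pts.add(tuple(round(c, 12) for c in x))
    return pts

sparse = smolyak_grid(5, 2)        # 5 random variables, sparse level 2
full = (2 ** 2 + 1) ** 5           # full tensor grid at the same 1-D resolution
```

For 5 input variables, the level-2 sparse grid needs only 61 collocation points, versus 3125 for the full tensor grid with the same one-dimensional resolution, which is the curse of dimensionality that the Smolyak construction mitigates.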
• Polynomial approximation of functions that are invariant under permutations and isometry.
• Guarantees that the basis is complete but not overcomplete.
• Construction of an orthogonal basis to ensure better conditioning.
• Open challenges for approximation and parameter estimation.
The Atomic Cluster Expansion (Drautz (2019) [21]) provides a framework to systematically derive polynomial basis functions for approximating isometry and permutation invariant functions, particularly with an eye to modelling properties of atomistic systems. Our presentation extends the derivation by proposing a precomputation algorithm that yields immediate guarantees that a complete basis is obtained. We provide a fast recursive algorithm for efficient evaluation and illustrate its performance in numerical tests. Finally, we discuss generalisations and open challenges, particularly from a numerical stability perspective, around basis optimisation and parameter estimation, paving the way towards a comprehensive analysis of the convergence to a high-fidelity reference model.
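The core mechanism behind permutation-invariant polynomial bases of this kind is the pooling ("density trick"): project each particle onto a one-particle basis, sum over particles, and then form symmetric products of the pooled quantities. The sketch below uses a toy monomial one-particle basis (the Atomic Cluster Expansion uses radial functions times spherical harmonics) purely to illustrate the invariance:

```python
import numpy as np
from itertools import combinations_with_replacement

def one_particle_basis(x, K):
    """Toy one-particle basis phi_k(x) = x**k, k = 1..K (an illustrative
    choice, not the ACE basis)."""
    return np.array([x ** k for k in range(1, K + 1)])

def invariant_features(xs, K, max_order=2):
    """Atomic-basis style features: pool A_k = sum_i phi_k(x_i) over
    particles, then take symmetric products A_{k1} * ... * A_{k_nu}.
    Any permutation of the particles xs leaves the features unchanged."""
    A = sum(one_particle_basis(x, K) for x in xs)   # pooled "density" projection
    feats = []
    for order in range(1, max_order + 1):
        for idx in combinations_with_replacement(range(K), order):
            feats.append(np.prod([A[i] for i in idx]))
    return np.array(feats)

xs = [0.3, -1.2, 0.7]
f1 = invariant_features(xs, K=3)
f2 = invariant_features([xs[2], xs[0], xs[1]], K=3)   # permuted particles
```

Because the sum over particles is taken before any products are formed, evaluation cost grows linearly in the number of particles rather than combinatorially, which is what makes the recursive evaluation schemes mentioned in the abstract feasible.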
We consider the problem of reconstructing an unknown bounded function u defined on a domain X ⊂ R^d from noiseless or noisy samples of u at n points (x_i), i = 1, ..., n. We measure the reconstruction error in a norm L^2(X, dρ) for some given probability measure dρ. Given a linear space V_m with dim(V_m) = m ≤ n, we study in general terms the weighted least-squares approximations from the spaces V_m based on independent random samples. It is well known that least-squares approximations can be inaccurate and unstable when m is too close to n, even in the noiseless case. Recent results from [4, 5] have shown the benefit of using weighted least squares for reducing the number n of samples needed to achieve an accuracy comparable to that of the best approximation in V_m, compared to the standard least squares studied in [3]. The contribution of the present paper is twofold. From the theoretical perspective, we establish results in expectation and in probability for weighted least squares in general approximation spaces V_m. These results show that, for an optimal choice of sampling measure dµ and weight w, which depend on the space V_m and on the measure dρ, stability and optimal accuracy are achieved under the mild condition that n scales linearly with m up to an additional logarithmic factor. In contrast to [3], the present analysis covers cases where the function u and its approximants from V_m are unbounded, which might occur for instance in the relevant case where X = R^d and dρ is the Gaussian measure. From the numerical perspective, we propose a sampling method which allows one to generate independent and identically distributed samples from the optimal measure dµ. This method becomes of interest in the multivariate setting, where dµ is generally not of tensor product type. We illustrate this for particular examples of approximation spaces V_m of polynomial type, where the domain X is allowed to be unbounded and high or even infinite dimensional, motivated by applications to parametric and stochastic PDEs.
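A minimal one-dimensional sketch of the weighted least-squares idea described above: with an orthonormal Legendre basis of V_m on X = [-1, 1] and dρ the uniform measure, sample from the measure dµ proportional to the averaged squared basis functions (here via simple rejection sampling, one possible way to generate i.i.d. samples from dµ), and weight the least-squares problem by w = dρ/dµ. The specific target function, m, and n are illustrative choices:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
m, n = 6, 60                       # dim(V_m) and sample count, n ~ m log m

def legendre_basis(x, m):
    """Legendre basis L_0..L_{m-1}, orthonormal in L^2([-1,1], dx/2)."""
    return legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

def sample_optimal(n, m):
    """Draw i.i.d. samples from d mu = (1/m) sum_j |L_j|^2 d rho by
    rejection sampling against uniform proposals: the density (1/m) sum_j
    L_j(x)^2 is bounded by m on [-1, 1]."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0)
        k = np.sum(legendre_basis(np.array([x]), m) ** 2)
        if rng.uniform() < k / m ** 2:
            out.append(x)
    return np.array(out)

def weighted_lsq(f, m, n):
    """Weighted least squares with weights w(x_i) = d rho / d mu (x_i)."""
    x = sample_optimal(n, m)
    B = legendre_basis(x, m)
    w = m / np.sum(B ** 2, axis=1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * B, sw * f(x), rcond=None)
    return coef

coef = weighted_lsq(np.exp, m, n)
```

The weights compensate for the non-uniform sampling, so the weighted normal equations stay well conditioned with n only moderately larger than m; the rejection step is what becomes nontrivial in the multivariate setting, where dµ is generally not of tensor product type.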