In this paper, we provide a detailed exposition of a generalized multivariate Birkhoff interpolation scheme (Z,S,E) and introduce the notions of invariant interpolation space and singular interpolation space. We prove that the space PS, spanned by the monomial sequence S, is invariant or singular when the incidence matrix E satisfies certain conditions. The advantage of our results lies in the fact that we can deduce whether PS is proper for all choices of the given node set Z directly from the properties of the incidence matrix E, with very low computational complexity.
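The role of the incidence matrix can be illustrated with a simplified one-dimensional sketch (the paper's setting is multivariate, and the function names here are hypothetical): build the collocation matrix of a scheme (Z, S, E), where row (i, j) is present when E[i][j] = 1 and contains the j-th derivatives of the monomials evaluated at node z_i, then test it for rank deficiency.

```python
import numpy as np
from math import factorial

def birkhoff_matrix(nodes, exponents, E):
    """Collocation matrix of a 1-D Birkhoff scheme (Z, S, E).

    nodes: the node set Z; exponents: the monomial exponents in S;
    E[i][j] = 1 means the j-th derivative is prescribed at node i.
    """
    rows = []
    for i, z in enumerate(nodes):
        for j, flag in enumerate(E[i]):
            if flag:
                row = []
                for e in exponents:
                    if e >= j:
                        # d^j/dx^j x^e = e!/(e-j)! * x^(e-j)
                        row.append(factorial(e) // factorial(e - j) * z ** (e - j))
                    else:
                        row.append(0.0)
                rows.append(row)
    return np.array(rows, dtype=float)

def is_singular(nodes, exponents, E):
    """The scheme is singular for this node set iff the collocation
    matrix is rank-deficient."""
    A = birkhoff_matrix(nodes, exponents, E)
    return np.linalg.matrix_rank(A) < len(exponents)
```

For instance, the classical Birkhoff scheme prescribing f(-1), f'(0), f(1) for quadratics is singular (the middle row of the collocation matrix is a combination of the other two), while plain Lagrange interpolation at three distinct nodes is not.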
Seismic data are often coarsely or inconsistently sampled along the acquisition geometry due to inherent limitations in survey equipment or insufficient survey budgets. Recently, machine learning techniques have been utilized to recover compactly sampled seismic data. Among them, self-supervised techniques that do not require labels are actively used, and the interpolation technique based on the blind trace network (BTN) and spectrum suppression using suppression masks obtained through line detection has shown high accuracy. However, this technique suffers from instability caused by the reconstruction loss and from the inaccuracy of the suppression masks. To mitigate these problems, we propose suppression masks based on generalized frequency-wavenumber (f-k) trace interpolation (GFKI), together with patch-based learning, a masked UNet, and an equalized learning rate. The suppression masks using GFKI are generated by correlating the zero-padded data with the data obtained by regularly removing traces from the zero-padded data in the f-k domain. Additionally, we divide the data into patches to enhance the accuracy of the suppression masks. The masked UNet constrains the output to contain the input signals at the designated positions using a binary mask in the space-time domain. Furthermore, we normalize each layer in the network so that the learning speeds of the layers are commensurate with each other. Synthetic and field data experiments show that the proposed interpolation technique effectively suppresses aliasing of signals and enables the training process to converge stably.
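The mask construction described above can be sketched roughly as follows; this is a hypothetical simplification (function name, decimation scheme, and thresholding are assumptions, not the paper's GFKI construction), showing only the idea of correlating the f-k spectrum of the zero-padded data with that of a regularly decimated copy.

```python
import numpy as np

def fk_suppression_mask(data, decimation=2, threshold=0.5):
    """Hypothetical sketch of an f-k suppression mask.

    data: 2-D array (time x trace), already zero-padded at the
    positions of the missing traces.
    """
    # f-k spectrum of the zero-padded data
    fk_full = np.fft.fft2(data)

    # Regularly remove (zero out) every `decimation`-th trace
    decimated = data.copy()
    decimated[:, ::decimation] = 0.0
    fk_dec = np.fft.fft2(decimated)

    # Pointwise correlation of the two spectra: components present
    # in both are treated as signal, the rest as aliased energy
    corr = np.abs(fk_full * np.conj(fk_dec))
    corr /= corr.max() + 1e-12

    # Binary mask: 1 keeps the component, 0 suppresses it
    return (corr >= threshold).astype(float)
```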
Purpose: To design a better data structure for B-spline registration. Method and Materials: We have designed a grid alignment technique and a data structure that greatly improve the computation speed of B-spline registration. The basic idea is to align the B-spline grid with the voxel grid, so that the volume is partitioned into tiles of equal size. The use of equal-sized tiles is important for this approach because it allows the coefficient multipliers used for B-spline interpolation to be precomputed. A data structure consisting of four tables provides fast access to precomputed index and multiplier values used for interpolation and gradient computations. Voxel indices are decomposed into tile and offset values, which are accumulated in a single loop over the voxels. Results: We have implemented the aligned B-spline registration method and compared its performance against ITK B-splines and demons registration. Aligned B-splines with a mean-squared-error cost function were found to be roughly equivalent in speed to the demons algorithm and considerably faster than ITK B-splines. Running times were found to depend only on the image size, not on the B-spline control-point spacing. Conclusion: When possible, an aligned grid should be used for B-spline registration.
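The tile/offset decomposition can be sketched in one dimension (a minimal illustration, not the authors' implementation): with the control grid aligned to the voxel grid, every tile has the same size, so the cubic B-spline basis values depend only on the in-tile offset and can be precomputed once and reused for every tile.

```python
import numpy as np

def decompose_indices(length, tile_size):
    """Decompose voxel indices 0..length-1 into (tile, offset) pairs."""
    tiles, offsets = [], []
    for i in range(length):
        tiles.append(i // tile_size)    # which tile the voxel lies in
        offsets.append(i % tile_size)   # position within the tile
    return tiles, offsets

def bspline_basis(tile_size):
    """Cubic B-spline basis values for every in-tile offset.

    Returns a (tile_size, 4) table: one row per offset, one column
    per neighbouring control point.  This table is the precomputed
    multiplier table that the equal-sized tiles make possible.
    """
    t = np.arange(tile_size) / tile_size
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return np.stack([b0, b1, b2, b3], axis=1)
```

The rows of the basis table sum to one (partition of unity), a quick sanity check on the precomputed multipliers.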
This work analyses three uncertainty sources affecting observation-based gridded data sets: station density, interpolation methodology and spatial resolution. For this purpose, we consider precipitation in two countries, Poland and Spain, three resolutions (0.11, 0.22 and 0.44°), three interpolation methods, both areal- and point-representative implementations, and three different densities of the underlying station network (high/medium/low). As a result, for each resolution and interpolation approach, nine different grids have been obtained for each country and inter-compared using a variance decomposition methodology.
Results indicate larger differences among the data sets for Spain than for Poland, mainly due to the larger spatial variability and complex orography of the former region. The variance decomposition points to station density as the most influential factor, independent of the season, the areal- or point-representative implementation and the country considered, and slightly increasing with the spatial resolution. In contrast, the decomposition is stable when extreme precipitation indices are considered, in particular for the 50-year return value.
Finally, the uncertainty due to station sub-sampling inside a particular grid box decreases with the number of stations used in the averaging/interpolation. In the case of spatially homogeneous grid boxes, the interpolation approach obtains similar results for all the parameters, except for the wet-day frequency, independently of the number of stations. When there is more significant internal variability in the grid box, the interpolation is more sensitive to the number of stations, pointing to a minimum station density for the target resolution (six to seven stations).
The uncertainty due to station density, interpolation method and resolution has been analysed for precipitation in two countries, Poland and Spain, by means of a variance decomposition analysis. The results reflect that the main factor in the development of gridded data sets is the underlying station density, for both mean and extreme precipitation. In addition, a minimum density of six to seven stations per grid box has been identified to reach an effective resolution of 0.44°.
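The variance decomposition used to rank the uncertainty sources can be sketched in its simplest one-way form (a minimal illustration under the assumption of a single grouping factor; the study's decomposition is multi-factor): the fraction of total variance explained by a factor is the between-group variance over the total variance.

```python
import numpy as np

def explained_variance(values, labels):
    """Fraction of total variance explained by a grouping factor
    (between-group sum of squares / total sum of squares)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(labels):
        group = values[labels == g]
        # each group contributes n_g * (group mean - grand mean)^2
        ss_between += len(group) * (group.mean() - grand) ** 2
    return ss_between / ss_total
```

A factor whose levels fully separate the values (e.g. the station-density levels in the study) explains all of the variance; a factor whose levels are indistinguishable explains none.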
Fast Image Interpolation via Random Forests
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
IEEE Transactions on Image Processing, October 2015, Volume 24, Issue 10
Journal article, peer-reviewed
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method achieves high accuracy while requiring low computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages: Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computational time.
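The per-subspace regression step can be sketched as follows; the forest-based classification is abstracted into precomputed leaf labels (function names and the least-squares formulation are illustrative assumptions, not the paper's exact training procedure).

```python
import numpy as np

def train_per_leaf_regressors(lr_patches, hr_patches, labels):
    """Per-subspace linear regression: after a random forest assigns
    each low-resolution patch to a leaf (here: precomputed `labels`),
    learn a least-squares map from LR to HR patches for each leaf.
    """
    regressors = {}
    for leaf in np.unique(labels):
        X = lr_patches[labels == leaf]   # LR patches in this leaf
        Y = hr_patches[labels == leaf]   # matching HR patches
        # Least-squares linear map: Y ~= X @ W
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        regressors[leaf] = W
    return regressors

def predict(lr_patch, leaf, regressors):
    """Map one LR patch to an HR patch with its leaf's regressor."""
    return lr_patch @ regressors[leaf]
```

At inference time each patch costs only a tree traversal plus one small matrix product, which is where the low computation comes from.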
Barycentric rational Floater–Hormann interpolants compare favourably to classical polynomial interpolants in the case of equidistant nodes, because the Lebesgue constant associated with these interpolants grows logarithmically in this setting, in contrast to the exponential growth experienced by polynomials. In the Hermite setting, in which the first derivatives of the interpolant are also prescribed at the nodes, the same exponential growth has been proven for polynomial interpolants, and the main goal of this paper is to show that much better results can be obtained with a recent generalization of Floater–Hormann interpolants. After summarizing the construction of these barycentric rational Hermite interpolants, we study the behaviour of the corresponding Lebesgue constant and prove that it is bounded from above by a constant. Several numerical examples confirm this result.
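The barycentric form underlying these interpolants can be sketched with the simplest member of the Floater–Hormann family, Berrut's d = 0 interpolant with weights w_k = (-1)^k (a minimal illustration; the Hermite generalization studied in the paper uses a different construction).

```python
import numpy as np

def berrut_interpolate(x_nodes, f_nodes, x):
    """Barycentric rational interpolation with Berrut's weights
    w_k = (-1)^k, the d = 0 member of the Floater-Hormann family:
    r(x) = sum_k w_k f_k / (x - x_k)  /  sum_k w_k / (x - x_k).
    """
    w = np.array([(-1.0) ** k for k in range(len(x_nodes))])
    diff = x - np.asarray(x_nodes, dtype=float)
    # Return the node value exactly if x coincides with a node
    hit = np.isclose(diff, 0.0)
    if hit.any():
        return f_nodes[int(np.argmax(hit))]
    terms = w / diff
    return np.dot(terms, f_nodes) / terms.sum()
```

By construction the interpolant reproduces the data at the nodes and, since the weights cancel in the quotient, reproduces constants exactly between them.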
In this paper, we introduce and study fractional neural network interpolation operators activated by a sigmoidal function belonging to the extended class of multivariate sigmoidal functions. We examine the rates of approximation by the operators in Lp-spaces using the modulus of continuity. Moreover, we give some special examples with graphics for the extended class of multivariate sigmoidal functions, and present some illustrative examples to demonstrate the interpolation quality of the operators based on various activation functions. Finally, as an application, we present an efficient image processing algorithm based on the proposed neural network interpolation operators, for both general and medical gray-level images.
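The general shape of such operators can be sketched with a classical (non-fractional, univariate) variant; this is a hypothetical simplification of the paper's operators, using a bell built from differences of the logistic sigmoid and a normalized sum over samples at k/n.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bell(x):
    # Density built from a sigmoidal activation:
    # phi(x) = (sigma(x + 1) - sigma(x - 1)) / 2
    return 0.5 * (sigmoid(x + 1) - sigmoid(x - 1))

def nn_operator(f, n, x):
    """Normalized neural-network-type operator on [0, 1]:
    F_n(f)(x) = sum_k f(k/n) phi(n x - k) / sum_k phi(n x - k),
    with samples at k/n, k = 0..n (an illustrative variant; the
    paper's fractional operators differ in detail).
    """
    k = np.arange(0, n + 1)
    weights = bell(n * x - k)
    return np.dot(weights, f(k / n)) / weights.sum()
```

Because the weights are normalized, the operator reproduces constant functions exactly, the usual first sanity check for this family.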
For $\theta \in (0,1)$ and variable exponents $p_0(\cdot), q_0(\cdot)$ and $p_1(\cdot), q_1(\cdot)$ with values in $[1,\infty]$, let the variable exponents $p_\theta(\cdot), q_\theta(\cdot)$ be defined by
$$1/p_\theta(\cdot) := (1-\theta)/p_0(\cdot) + \theta/p_1(\cdot), \quad 1/q_\theta(\cdot) := (1-\theta)/q_0(\cdot) + \theta/q_1(\cdot).$$
The Riesz–Thorin-type interpolation theorem for variable Lebesgue spaces says that if a linear operator $T$ acts boundedly from the variable Lebesgue space $L^{p_j(\cdot)}$ to the variable Lebesgue space $L^{q_j(\cdot)}$ for $j = 0, 1$, then
$$\Vert T\Vert_{L^{p_\theta(\cdot)} \to L^{q_\theta(\cdot)}} \le C \, \Vert T\Vert_{L^{p_0(\cdot)} \to L^{q_0(\cdot)}}^{1-\theta} \, \Vert T\Vert_{L^{p_1(\cdot)} \to L^{q_1(\cdot)}}^{\theta},$$
where $C$ is an interpolation constant independent of $T$. We consider two different modulars $\varrho^{\max}(\cdot)$ and $\varrho^{\rm sum}(\cdot)$ generating variable Lebesgue spaces and give upper estimates for the corresponding interpolation constants $C_{\max}$ and $C_{\rm sum}$, which imply that $C_{\max} \le 2$ and $C_{\rm sum} \le 4$, and lead to sufficient conditions for $C_{\max} = 1$ and $C_{\rm sum} = 1$. We also construct an example showing that, in many cases, our upper estimates are sharp and the interpolation constant is greater than one, even if one requires that $p_j(\cdot) = q_j(\cdot)$, $j = 0, 1$, are Lipschitz continuous and bounded away from one and infinity (in this case, $\varrho^{\max}(\cdot) = \varrho^{\rm sum}(\cdot)$).
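The pointwise definition of the interpolated exponent is easy to check numerically for fixed constant exponents (a trivial illustration of the defining formula, not of the theorem itself).

```python
def interpolated_exponent(p0, p1, theta):
    """Pointwise value of the interpolated exponent defined by
    1/p_theta = (1 - theta)/p0 + theta/p1, evaluated here for
    constant exponents p0, p1."""
    return 1.0 / ((1 - theta) / p0 + theta / p1)
```

For example, interpolating halfway between p0 = 2 and p1 = 4 gives the harmonic-type mean 8/3, not the arithmetic mean 3.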
Coprime arrays can achieve an increased number of degrees of freedom by deriving the equivalent signals of a virtual array. However, most existing methods fail to utilize all the information received by the coprime array, due to the non-uniformity of the derived virtual array, resulting in an inevitable estimation performance loss. To address this issue, we propose a novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation. Array interpolation is employed to construct a virtual uniform linear array so that all virtual sensors in the non-uniform virtual array can be utilized, based on which the atomic norm of the second-order virtual array signals is defined. By investigating the properties of the virtual-domain atomic norm, it is proved that the covariance matrix of the interpolated virtual array is related to the virtual measurements under the Hermitian positive semi-definite Toeplitz condition. Accordingly, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated to reconstruct the interpolated virtual array covariance matrix in a gridless manner, where the reconstructed covariance matrix enables off-grid DOA estimation. Simulation results demonstrate the performance advantages of the proposed DOA estimation algorithm for coprime arrays.
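The non-uniformity that motivates the interpolation step can be seen by computing the difference coarray of a prototype coprime array (a standard construction sketched here under common conventions, in units of half-wavelength; not the paper's algorithm).

```python
def coprime_positions(M, N):
    """Sensor positions of a prototype coprime array with coprime
    integers M and N: one subarray of N sensors at spacing M and one
    of 2M sensors at spacing N, sharing the sensor at the origin."""
    sub1 = {M * n for n in range(N)}
    sub2 = {N * m for m in range(2 * M)}
    return sorted(sub1 | sub2)

def difference_coarray(positions):
    """Virtual array formed by all pairwise position differences.
    The holes in this set are the missing virtual sensors that
    array interpolation fills in to obtain a uniform virtual ULA."""
    return sorted({p - q for p in positions for q in positions})
```

For M = 2, N = 3 the physical array sits at {0, 2, 3, 4, 6, 9}, and its coarray covers all lags up to ±7 but has holes at ±8 before reaching ±9, which is exactly the gap an interpolated virtual ULA closes.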