Gaussian process regression for direction-of-arrival (DOA) estimation in random linear arrays is formally introduced in this letter. A novel methodology for estimating and interpolating information in the complex domain is implemented to address the correlation between the real and imaginary components of the signal model. The impinging waveform over the array aperture is sampled nonuniformly in the spatial domain by antenna elements scattered in an arbitrary manner. The output of the random array is subsequently mapped to the output of a virtual uniform array with the same number of elements as the original array and an inter-element spacing of half a wavelength of the carrier frequency. The interpolated signal vector finally serves as the input to a standard root-MUSIC algorithm, which computes the DOAs of the incoming signals using a subspace methodology. The efficacy and robustness of the algorithm are substantiated through Monte Carlo simulations over various scenarios, closely replicating realistic variations in the communication channel.
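The letter's method interpolates a complex-valued array snapshot onto a virtual uniform grid; as a minimal real-valued 1D sketch of the Gaussian-process interpolation step only (not the paper's complex-domain model or root-MUSIC stage), the posterior mean under a squared-exponential kernel can be computed with plain linear algebra. All function names and parameter values here are illustrative assumptions.

```python
import math

def rbf(x1, x2, ell=0.2):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """GP posterior mean at xq: k_q^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(x, xq) for a, x in zip(alpha, xs))
```

Mapping a random array to a virtual uniform one would apply this prediction at each half-wavelength virtual element position, with the kernel acting on element coordinates.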
We define a family of univariate many-knot spline spaces of arbitrary degree on an initial partition that is refined by adding a point in each sub-interval. For an arbitrary smoothness r, splines of degrees 2r and 2r+1 are considered, with additional regularity imposed when necessary. For an arbitrary degree, a B-spline-like basis is constructed using the Bernstein–Bézier representation. Blossoming is then used to establish a Marsden identity, from which several quasi-interpolation operators with optimal approximation orders are defined.
Frequency estimation is a fundamental problem in many areas. The previously proposed q-shift estimator (QSE), which interpolates the discrete Fourier transform (DFT) coefficients by a factor of q, enables the estimation accuracy to approach the Cramér-Rao lower bound (CRLB). However, it becomes less effective when the number of samples is small. In this letter, we provide an in-depth analysis to unveil the impact of q on the convergence of QSE, and derive the bounds of a refined region of q that ensures the convergence of QSE to the CRLB even with a small number of samples. Simulations validate our analysis, showing that the refined interpolation factor is able to reduce the estimation mean squared error of QSE by up to 13.14 dB when the sample number is as small as 8.
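The underlying idea of DFT-interpolation estimators can be sketched in pure Python for the classic q = 0.5 shift: a coarse integer-bin peak search followed by a fine estimate from the DFT evaluated at fractional bins m ± q. This is a noise-free, single-tone sketch with an assumed bias-inversion step, not the letter's refined-q analysis.

```python
import cmath
import math

def qse_frequency(x):
    """Estimate the frequency (cycles/sample) of a single complex sinusoid.

    Coarse stage: integer DFT magnitude peak.  Fine stage: interpolate the
    DFT at the fractional bins m +/- 0.5 (the classic q = 0.5 shift) and
    invert the noise-free bias relation (exact only for a clean tone).
    """
    N = len(x)

    def dft_at(k):
        # DFT evaluated at a (possibly fractional) bin k.
        return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

    m = max(range(N), key=lambda k: abs(dft_at(k)))       # coarse peak bin
    Xp, Xm = dft_at(m + 0.5), dft_at(m - 0.5)             # q-shifted coefficients
    raw = 0.5 * ((Xp + Xm) / (Xp - Xm)).real              # interpolation ratio
    # Invert the finite-N mapping raw = sin(2*pi*delta/N) / (2*sin(pi/N)).
    delta = (N / (2 * math.pi)) * math.asin(2 * raw * math.sin(math.pi / N))
    return (m + delta) / N
```

With noise present, the choice of q trades off interpolation gain against noise amplification, which is the regime the letter's refined region of q addresses.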
Demography researchers and scientists have been effectively utilizing advanced technologies and methods such as geographical information systems, spatial statistics, georeferenced data, and satellite images for the last 25 years. Areal interpolation methods have also been adopted for the development of population density maps, which are essential for a variety of social and environmental studies. Still, a good number of social scientists remain skeptical about such technologies due to the complexity of the methods and analyses. In this regard, a practical intelligent dasymetric mapping (IDM) tool that facilitates the implementation of the statistical analyses was used in this study to develop a population distribution map for the Istanbul metropolitan area from night-light data provided by the Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) and the census records of the study area. A population density map was also produced using the choropleth mapping method to enable a comparison of the traditional and intelligent population density mapping implementations. According to the dasymetric population density map, 38.5% of the study area fell into the sparse density category, while the low, moderate, high, and very high population density classes accounted for 9.4%, 5.5%, 2.9%, and 0.1%, respectively. By contrast, the percentages of the same population density classes, ranging from sparse to very high, were 90.7%, 7.3%, 1.7%, 0.3%, and 0% in the choropleth map. A change analysis based on the classification revealed how the city's surface area and population evolved: both grew over the study period, and the spatial change was interpreted by comparing it with the population change. The increase is remarkable in both surface area and population and is concentrated especially in the south and northwest of the city.
With the population increase, the number of new residential areas has also grown, although other factors besides residential expansion are thought to lie behind this growth. As environmental awareness has risen well beyond that of past centuries, new solutions are needed so that the cities of the future can be planned in a more controlled, smart, and sustainable way. Given that remote sensing techniques are advancing in parallel with technology in general, this study, which integrates GIS technologies with satellite imagery, is expected to contribute positively to work in this area by supporting the orderly development of urban areas, enabling fast and sound decision-making, and providing infrastructure for tasks such as monitoring and preventing illegal housing.
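The core dasymetric step, redistributing each census zone's population over its grid cells in proportion to an ancillary variable such as night-light intensity, can be sketched in a few lines. The function name and data layout are illustrative assumptions, and positive weights are assumed; the IDM tool used in the study additionally calibrates class-specific densities statistically.

```python
def dasymetric_redistribute(zone_pop, cell_zone, cell_weight):
    """Split each source zone's population over its grid cells in
    proportion to a positive ancillary weight (e.g. night-light DN).
    Population mass is preserved within every zone (pycnophylactic
    property)."""
    zone_total = {}
    for z, w in zip(cell_zone, cell_weight):
        zone_total[z] = zone_total.get(z, 0.0) + w
    return [zone_pop[z] * w / zone_total[z]
            for z, w in zip(cell_zone, cell_weight)]
```

A choropleth map, by contrast, assigns every cell of a zone the same uniform density, which is why its class percentages differ so sharply from the dasymetric ones reported above.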
Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.
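The resampling artifacts the paper analyzes can be illustrated with a deliberately simplified 1D toy (the paper itself treats 2D images and chained transformations): linear upsampling leaves a periodic set of samples that a second-difference predictor reproduces exactly, and the spectrum of that indicator (a toy "p-map") peaks at a position determined by the resampling factor. All names here are illustrative assumptions.

```python
import cmath
import math
import random

def resample_pmap_peak(signal, up=2):
    """Linearly upsample by an integer factor, then locate the spectral peak
    of a toy 'p-map': the indicator of samples that a second-difference
    predictor reproduces exactly (the interpolated ones)."""
    # Linear interpolation by an integer factor `up`.
    out = []
    for i in range(len(signal) - 1):
        for t in range(up):
            a = t / up
            out.append((1 - a) * signal[i] + a * signal[i + 1])
    out.append(signal[-1])
    # Second-difference residual; ~0 exactly at interpolated samples.
    e = [out[n - 1] - 2 * out[n] + out[n + 1] for n in range(1, len(out) - 1)]
    e = e[: len(e) - len(e) % up]            # trim so the period divides N
    p = [1.0 if abs(v) < 1e-12 else 0.0 for v in e]
    N = len(p)
    spec = [abs(sum(p[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N)]
    kpeak = max(range(1, N), key=lambda k: spec[k])      # skip the DC bin
    return kpeak, N
```

For a factor-2 upsampling the indicator has period 2, so the peak lands at the Nyquist bin N/2; the paper derives the analogous expected peak positions for chains of two geometric transformations.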
Recently, because of their simplicity, meshless methods have been employed to solve many partial differential equations. One such method is the element free Galerkin (EFG) technique. The element free Galerkin method is very similar to the finite element method, with the difference that the test and trial spaces of the EFG procedure are the shape functions of the moving least squares approximation. Using these shape functions, solving a problem on a complex domain is straightforward. Several modifications and enriched approaches have been proposed to improve the element free Galerkin method, one of which is the variational multiscale element free Galerkin procedure. To the best of the authors’ knowledge, the element free Galerkin method based on the shape functions of the moving least squares approximation needs more CPU time than its counterpart based on the shape functions of moving Kriging interpolation. Thus, in the current paper, we employ the variational multiscale element free Galerkin method based on the shape functions of moving Kriging interpolation. Moreover, to reduce the CPU time of the presented numerical scheme, we use the proper orthogonal decomposition (POD) approach. We therefore propose the proper orthogonal decomposition variational multiscale element free Galerkin (POD-VMEFG) method for solving the time-dependent incompressible Navier–Stokes equations. Several test problems are given that show the acceptable accuracy and efficiency of the proposed scheme.
•Variational multiscale element free Galerkin (VMEFG) and moving Kriging interpolation are combined.
•The proper orthogonal decomposition VMEFG method is employed to reduce CPU time.
•Numerical simulations of the incompressible Navier–Stokes equations with the new technique are presented.
•A calibration based on the H1 Sobolev norm is used to obtain results with sufficient accuracy.
•The efficiency of the new technique is studied by solving several examples on irregular domains.
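The POD reduction step underlying POD-VMEFG can be sketched generically with the method of snapshots: the dominant spatial mode is the top eigenvector of the small temporal correlation matrix, obtainable by power iteration. This is a generic illustration under assumed toy data, not the paper's Navier–Stokes reduced-order model.

```python
import math

def pod_dominant_mode(snapshots, iters=100):
    """Method of snapshots: dominant POD mode via power iteration on the
    small temporal correlation matrix C[k][l] = <u_k, u_l>."""
    M = len(snapshots)
    C = [[sum(a * b for a, b in zip(snapshots[k], snapshots[l]))
          for l in range(M)] for k in range(M)]
    v = [1.0] * M
    for _ in range(iters):                      # power iteration on C
        w = [sum(C[k][l] * v[l] for l in range(M)) for k in range(M)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Lift the temporal eigenvector to a spatial mode and normalise it.
    npts = len(snapshots[0])
    mode = [sum(v[k] * snapshots[k][i] for k in range(M)) for i in range(npts)]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]
```

Projecting the governing equations onto a handful of such modes is what cuts the CPU time of the full VMEFG simulation.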
•A second-order corrective matrix is developed for MPS to reduce the discretization error.
•An error function is derived for the corrected high-order schemes.
•The first-order gradient model produces less numerical diffusion than the second-order one in interpolation.
•The relative magnitudes of the stabilization and truncation errors are compared.
The Lagrangian nature of the moving particle semi-implicit (MPS) method brings two challenges: disordered particle distributions and particle clumping. The former can cause a large random discretization error in the original MPS models, although a corrective matrix can effectively reduce this error to the high-order truncation error. The latter can easily trigger instability, so adjustment strategies for stability are indispensable, which in turn cause a non-negligible stabilization error. The purpose of this paper is to compare the relative magnitudes of the truncation and stabilization errors, which is of great significance for future improvements. Because it is difficult to separate the different error components from the total error in dynamic simulations, an indirect approach is developed: check whether the total error decreases significantly after the truncation error is further reduced. First, a second-order corrective matrix (SCM) is proposed for MPS to further reduce the truncation error, as demonstrated by theoretical error analysis. Second, the error analysis reveals that the first-order gradient model produces less numerical diffusion than the second-order gradient model in interpolation after particle shifting. Then, several numerical examples, including the Taylor-Green vortex, elliptical drop deformation, excited pressure oscillation flow, and continuous oil spill flow, are simulated to test the change in total error after the SCM is applied. It is found that the SCM schemes do not remarkably decrease the total error for incompressible free-surface flow, implying that the truncation error is not dominant compared to the stabilization error. Therefore, reducing the stabilization error is of greater significance for future work.
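The role of a corrective matrix on disordered particles has a simple 1D analogue (a sketch only; the paper's SCM operates on multidimensional MPS kernels): matching Taylor expansions at two unevenly spaced neighbours and eliminating the f'' term removes the first-order error of a plain one-sided difference.

```python
def corrected_derivative(f, x, h1, h2):
    """Second-order accurate f'(x) from two unevenly spaced neighbours.

    Matching the Taylor expansions at x+h1 and x+h2 and eliminating the
    f'' term plays the role of a (1D) corrective matrix: the leading
    discretization error of the plain one-sided difference cancels.
    """
    a = f(x + h1) - f(x)          # = h1*f' + h1^2/2*f'' + O(h^3)
    b = f(x + h2) - f(x)          # = h2*f' + h2^2/2*f'' + O(h^3)
    return (a * h2**2 - b * h1**2) / (h1 * h2 * (h2 - h1))
```

For a quadratic test function the corrected estimate is exact, while the plain one-sided difference carries an O(h) error; the paper's point is that, in dynamic free-surface simulations, this kind of truncation-error reduction is outweighed by the stabilization error.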
We devise three strategies for recognizing admissibility of non-standard inference rules via interpolation, uniform interpolation, and model completions. We apply our machinery to the case of the symmetric implication calculus S2IC, where we also supply a finite axiomatization of the model completion of its algebraic counterpart, via the equivalent theory of contact algebras. Using this result we obtain a finite basis for admissible Π2-rules.