We demonstrate a simple greedy algorithm that can reliably recover a vector v ∈ ℝ^d from incomplete and inaccurate measurements x = Φv + e. Here, Φ is an N × d measurement matrix with N ≪ d, and e is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to provide the benefits of the two major approaches to sparse recovery. It combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods. For any measurement matrix Φ that satisfies a quantitative restricted isometry principle, ROMP recovers a signal v with O(n) nonzeros from its inaccurate measurements x in at most n iterations, where each iteration amounts to solving a least squares problem. The noise level of the recovery is proportional to √(log n) ||e||_2. In particular, if the error term e vanishes, the reconstruction is exact.
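The greedy loop described above (select coordinates, then solve a least squares problem on the current support each iteration) can be sketched as follows. This is a minimal plain Orthogonal Matching Pursuit in Python, not the paper's exact ROMP: the regularization step that selects a subset of comparable-magnitude coordinates per iteration is omitted, and the function name and interface are illustrative.

```python
import numpy as np

def omp(Phi, x, sparsity, tol=1e-10):
    """Greedy pursuit in the same family as ROMP (plain OMP shown here;
    ROMP additionally keeps a regularized set of comparable coordinates
    at each iteration)."""
    N, d = Phi.shape
    residual = x.copy()
    support = []
    for _ in range(sparsity):
        # Correlate the residual with all columns, pick the best new index
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        # Least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
        residual = x - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    v_hat = np.zeros(d)
    v_hat[support] = coef
    return v_hat
```

With exact (noiseless) measurements and a well-conditioned random Φ, this kind of pursuit recovers the sparse vector exactly, mirroring the abstract's claim that the reconstruction is exact when e vanishes.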
Array processing is widely used in sensing applications for estimating the locations and waveforms of the sources in a given field. In the absence of a large number of snapshots, which is the case in numerous practical applications, such as underwater array processing, it becomes challenging to estimate the source parameters accurately. This paper presents a nonparametric, hyperparameter-free, weighted least-squares-based iterative adaptive approach for amplitude and phase estimation (IAA-APES) in array processing. IAA-APES works well with few snapshots (even one); with uncorrelated, partially correlated, and coherent sources; and with arbitrary array geometries. IAA-APES is extended to give sparse results via a model-order selection tool, the Bayesian information criterion (BIC). Moreover, it is shown that further improvements in resolution and accuracy can be achieved by applying the parametric relaxation-based cyclic approach (RELAX) to refine the IAA-APES&BIC estimates if desired. IAA-APES can also be applied to active sensing applications, including single-input single-output (SISO) radar/sonar range-Doppler imaging and multi-input single-output (MISO) channel estimation for communications. Simulation results are presented to evaluate the performance of IAA-APES for all of these applications, and IAA-APES is shown to outperform a number of existing approaches.
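A minimal single-snapshot IAA power-estimation loop can be sketched as below, under the standard IAA recursion over a fixed steering grid. The APES amplitude step, BIC pruning, and RELAX refinement are not included, and the small diagonal loading is an added numerical safeguard, not part of the paper.

```python
import numpy as np

def iaa(y, A, n_iter=15):
    """Single-snapshot IAA power estimation.
    y: (M,) complex snapshot; A: (M, K) steering matrix over a grid.
    Iterates: R = A diag(p) A^H, s_k = a_k^H R^-1 y / (a_k^H R^-1 a_k)."""
    M, K = A.shape
    # Initialize with matched-filter (periodogram) powers
    p = np.abs(A.conj().T @ y) ** 2 / (np.sum(np.abs(A) ** 2, axis=0) ** 2)
    for _ in range(n_iter):
        # Model covariance from current power estimates (+ tiny loading)
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(M)
        Rinv_y = np.linalg.solve(R, y)
        Rinv_A = np.linalg.solve(R, A)
        # Weighted-least-squares amplitude at every grid point
        s = (A.conj().T @ Rinv_y) / np.einsum('ij,ij->j', A.conj(), Rinv_A)
        p = np.abs(s) ** 2
    return p
```

For a single source on the grid, the power spectrum peaks at the true steering index even from one snapshot, which is the regime the abstract emphasizes.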
Purpose
Indirect or mediated effects constitute a type of relationship between constructs that often occurs in partial least squares (PLS) path modeling. Over the past few years, the methods for testing mediation have become more sophisticated. However, many researchers continue to use outdated methods to test mediating effects in PLS, which can lead to erroneous results. One reason for the use of outdated methods, or even the lack of their use altogether, is that no systematic tutorials on PLS exist that draw on the newest statistical findings. The paper aims to discuss these issues.
Design/methodology/approach
This study illustrates the state-of-the-art use of mediation analysis in the context of partial least squares structural equation modeling (PLS-SEM).
Findings
This study facilitates the adoption of modern procedures in PLS-SEM by challenging the conventional approach to mediation analysis and providing more accurate alternatives. In addition, the authors propose a decision tree and classification of mediation effects.
Originality/value
The recommended approach offers a wide range of testing options (e.g. multiple mediators) that go beyond simple mediation analysis alternatives, helping researchers discuss their studies in a more accurate way.
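As an illustration of the resampling-based mediation testing the paper advocates over outdated approaches (e.g. the Sobel test), the sketch below computes a percentile-bootstrap confidence interval for a simple indirect effect a·b using plain OLS paths. It is a generic regression sketch, not the PLS-SEM estimator itself, and all names are illustrative.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap test of the indirect effect a*b in the
    simple mediation model X -> M -> Y. Returns a 95% CI; a CI that
    excludes 0 indicates a significant indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        # Path a: slope of M on X
        a = np.polyfit(xb, mb, 1)[0]
        # Path b: coefficient of M in Y ~ 1 + M + X
        X2 = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X2, yb, rcond=None)[0][1]
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return lo, hi
```

The bootstrap makes no normality assumption about the sampling distribution of a·b, which is the main reason it is preferred over the Sobel test in modern mediation guidelines.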
This article proposes a magnetic flux saturation model that well represents the cross saturation of synchronous reluctance machines (SynRMs), together with a parameter estimation method for the proposed saturation model. Existing magnetic flux models either do not satisfy the reciprocity condition or do not express cross saturation well. The proposed flux saturation model consists of terms for self-saturation and cross saturation; it expresses the nonlinear relationship between current and flux of SynRMs well and satisfies the reciprocity condition. The flux saturation data are obtained at standstill using the hysteresis voltage injection method. Using the flux saturation data, the parameters of the flux saturation model are estimated. Because the proposed magnetic flux saturation model includes an arctangent function, it is not possible to estimate the parameters directly using the linear least-squares method (LSM). However, the proposed parameter estimation method integrates the self-saturation model and transforms it into a polynomial to which linear LSM can be applied. The parameters related to cross saturation are also estimated using linear LSM. Therefore, the proposed parameter estimation method is easy to implement and can be applied to general-purpose inverter products. The effectiveness of the proposed model and its identification method is experimentally evaluated with a 1.5-kW SynRM. Additionally, the identified model is verified through the accuracy of the maximum torque per ampere table and the performance of sensorless control of the tested motor.
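The final estimation step, fitting a model that has been transformed to a polynomial with linear LSM, can be sketched generically as below. Only the linear least-squares stage on an assumed polynomial basis is shown; the paper's actual saturation model, its arctangent term, and the integration step are not reproduced here.

```python
import numpy as np

def fit_poly_lsm(i, lam, degree=5):
    """Linear least-squares fit of a flux-current curve on a polynomial
    basis: once the model is in polynomial form, the coefficients are
    linear in the unknowns and ordinary LSM applies directly."""
    # Vandermonde design matrix: columns [1, i, i^2, ..., i^degree]
    V = np.vander(np.asarray(i), degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(V, lam, rcond=None)
    return coef  # polynomial coefficients, lowest order first
```

Because the problem is linear after the transformation, no iterative nonlinear solver is needed, which is what makes the method easy to implement on general-purpose inverter hardware.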
We present the recursive least squares dictionary learning algorithm, RLS-DLA, which can be used for learning overcomplete dictionaries for sparse signal representation. Most DLAs presented earlier, for example ILS-DLA and K-SVD, update the dictionary after a batch of training vectors has been processed, usually using the whole set of training vectors as one batch. The training set is used iteratively to gradually improve the dictionary. The approach in RLS-DLA is a continuous update of the dictionary as each training vector is processed. The core of the algorithm is compact and can be implemented efficiently. The algorithm is derived very much along the same path as the recursive least squares (RLS) algorithm for adaptive filtering. Thus, as in RLS, a forgetting factor λ can be introduced and easily implemented in the algorithm. Adjusting λ appropriately makes the algorithm less dependent on the initial dictionary and improves both the convergence properties of RLS-DLA and the representation ability of the resulting dictionary. Two sets of experiments are done to test different methods for learning dictionaries. The goal of the first set is to explore some basic properties of the algorithm in a simple setup; the goal of the second is the reconstruction of a true underlying dictionary. The first experiment confirms the conjectured properties from the derivation part, while the second demonstrates excellent performance.
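A single recursive update of this kind, directly analogous to the RLS recursion with a forgetting factor λ, can be sketched as follows. This is a simplified sketch: the sparse coding of each training vector is assumed to be done beforehand, and the variable names are illustrative.

```python
import numpy as np

def rls_dla_step(D, C, x, w, forget=0.99):
    """One RLS-DLA-style update: after sparse-coding the training vector
    x as w, adapt the dictionary D and the inverse-correlation matrix C
    recursively, as in RLS adaptive filtering with forgetting factor."""
    C = C / forget                      # apply the forgetting factor
    r = x - D @ w                       # representation residual
    Cw = C @ w
    denom = 1.0 + w @ Cw
    u = Cw / denom                      # gain vector
    D = D + np.outer(r, u)              # rank-one dictionary update
    C = C - np.outer(u, Cw)             # Sherman-Morrison downdate
    return D, C
```

Each update strictly shrinks the residual of the vector just processed (the new residual is the old one scaled by a factor in (0, 1)), which is the sense in which the dictionary is continuously improved.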
In this paper, the variational multiscale interpolating element-free Galerkin (VMIEFG) method is developed to obtain the numerical solution of the nonlinear Darcy–Forchheimer model. We use the interpolating moving least squares method instead of the moving least squares approximation to construct meshless shape functions with delta function properties. The flux boundary condition of the Darcy–Forchheimer model can then be handled easily. Hughes' variational multiscale (HVM) method is applied to overcome the numerical oscillation caused by equal-order bases for the velocity and pressure. Moreover, the HVM ensures that the resultant formulation in the VMIEFG method is consistent and that the stabilization parameter (or tensor) appears naturally. Consequently, the stabilization parameter requires no user-defined tuning. The fixed point iteration method is used to deal with the nonlinear term. Some numerical examples are provided to illustrate the stability and performance of the proposed method for solving the nonlinear Darcy–Forchheimer model.
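The fixed point iteration used to handle the nonlinear term can be illustrated generically. The sketch below iterates a scalar map x_{k+1} = g(x_k); the actual Darcy–Forchheimer discretization applies the same idea at the level of the assembled nonlinear system.

```python
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Generic fixed-point iteration x_{k+1} = g(x_k), stopping when
    successive iterates agree to within tol. Converges when g is a
    contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In the PDE setting, g corresponds to "freeze the nonlinear Forchheimer coefficient at the previous iterate and solve the resulting linear system", repeated until the velocity-pressure solution stops changing.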
A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function.
• We discuss the optimization role in new analytical method development.
• We show in detail the application of experimental designs in analytical chemistry.
• Desirability function is highly useful when optimizing complex systems.
• Authors increasingly use multivariate optimization in analytical separations.
• Extraction procedures are more efficient when combined with experimental design.
Hot Stuff for One Year (HSOY). Altmann, M.; Roeser, S.; Demleitner, M.; ... Astronomy and Astrophysics (Berlin), 4/2017, Vol. 600. Journal article, peer-reviewed, open access.
Context. Recently, the first installment of data from the ESA Gaia astrometric satellite mission (Gaia DR1) was released, containing positions of more than 1 billion stars with unprecedented precision. This release contains proper motions and parallaxes, however, for only a subset of 2 million objects. The second release will include those quantities for most objects. Aims. In order to provide a dataset that bridges the time gap between the Gaia DR1 and Gaia DR2 releases and partly remedies the lack of proper motions in the former, Hot Stuff for One Year (HSOY) was created as a hybrid catalogue between Gaia and ground-based astrometry. This catalogue features proper motions (but no parallaxes) for a large percentage of the DR1 objects. While not attempting to compete with future Gaia releases in terms of data quality or number of objects, the aim of HSOY is to provide improved proper motions partly based on Gaia data and to allow studies to be carried out now, or as pilot studies for later projects requiring higher precision data. Methods. The HSOY catalogue was compiled using the positions taken from Gaia DR1 combined with the input data from the PPMXL catalogue, employing the same weighted least-squares technique that was used to assemble the PPMXL catalogue itself. Results. This effort resulted in a four-parameter astrometric catalogue containing 583 million stars with Gaia DR1 quality positions and proper motions with precisions from far less than 1 mas/yr to 5 mas/yr, depending on object brightness and location on the sky.
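The weighted least-squares step that turns multi-epoch positions into proper motions can be sketched for one coordinate as below. This is a generic illustration, not the PPMXL/HSOY pipeline (which fits both sky coordinates and handles zonal systematics), and all names are illustrative.

```python
import numpy as np

def weighted_pm_fit(epochs, positions, sigmas):
    """Weighted least-squares straight-line fit of position vs. epoch:
    the slope is the proper motion, with epochs weighted by their
    inverse position variances."""
    epochs = np.asarray(epochs, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    # Design matrix for position = pos0 + pm * epoch
    A = np.column_stack([np.ones_like(epochs), epochs])
    Aw = A * w[:, None]
    # Solve the normal equations of the weighted problem
    coef = np.linalg.solve(A.T @ Aw, Aw.T @ np.asarray(positions, dtype=float))
    pos0, pm = coef
    return pos0, pm
```

Weighting lets a few high-precision Gaia DR1 epochs dominate over many lower-precision ground-based epochs, which is exactly why combining the two catalogues improves the proper motions.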
Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause of the poor performance of uncorrected (random) cross-validation, often noted by modellers, is that dependence structures in the data persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also provide ample opportunity for overfitting with non-causal predictors. This problem can persist even if remedies such as autoregressive models, generalized least squares, or mixed models are used. Block cross-validation, where data are split strategically rather than randomly, can address these issues. However, the blocking strategy must be carefully considered. Blocking in space, time, random effects, or phylogenetic distance, while accounting for dependencies in the data, may also unwittingly induce extrapolation by restricting the ranges or combinations of predictor variables available for model training, thus overestimating interpolation errors. On the other hand, deliberate blocking in predictor space may improve error estimates when extrapolation is the modelling goal. Here, we review the ecological literature on non-random and blocked cross-validation approaches. We also provide a series of simulations and case studies, in which we show that, for all instances tested, block cross-validation is nearly universally more appropriate than random cross-validation if the goal is predicting to new data or predictor space, or selecting causal predictors.
We recommend that block cross‐validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.
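As a minimal contrast with random K-fold splitting, the sketch below generates contiguous-block folds, e.g. for temporally ordered observations. The equal-size blocking choice and the function name are illustrative; real spatial or phylogenetic blocking would group by distance rather than by index.

```python
import numpy as np

def blocked_folds(n, n_blocks):
    """Block cross-validation folds: each fold holds out one contiguous
    block of consecutive samples, so correlated neighbours are not split
    between training and test sets (unlike random K-fold)."""
    edges = np.linspace(0, n, n_blocks + 1).astype(int)
    for k in range(n_blocks):
        test = np.arange(edges[k], edges[k + 1])
        train = np.concatenate([np.arange(0, edges[k]),
                                np.arange(edges[k + 1], n)])
        yield train, test
```

Because each held-out block is separated in index (time/space) from the training data, the estimated predictive error reflects performance on genuinely new structure rather than on near-duplicates of training points.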