A spectrally sparse signal of order r is a mixture of r damped or undamped complex sinusoids. This paper investigates the problem of reconstructing spectrally sparse signals from a random subset of n regular time-domain samples, which can be reformulated as a low-rank Hankel matrix completion problem. We introduce an iterative hard thresholding (IHT) algorithm and a fast iterative hard thresholding (FIHT) algorithm for efficient reconstruction of spectrally sparse signals via low-rank Hankel matrix completion. Theoretical recovery guarantees have been established for FIHT, showing that O(r^2 log^2(n)) samples are sufficient for exact recovery with high probability. Empirical performance comparisons establish significant computational advantages for IHT and FIHT. In particular, numerical simulations on 3D arrays demonstrate the capability of FIHT in handling large, high-dimensional real data.
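The hard-thresholding idea above can be sketched in one dimension: alternate a rank-r truncation of the Hankel matrix with re-imposition of the observed samples. This is a hedged toy version, assuming a boolean sampling mask and using a plain truncated SVD rather than the fast subspace updates that distinguish FIHT:

```python
import numpy as np

def hankel(x, p):
    """Form the p x (len(x)-p+1) Hankel matrix H[i, j] = x[i + j]."""
    n = len(x)
    return np.array([x[i:i + n - p + 1] for i in range(p)])

def dehankel(H):
    """Invert hankel(): average each anti-diagonal back into a signal."""
    p, q = H.shape
    x = np.zeros(p + q - 1, dtype=H.dtype)
    cnt = np.zeros(p + q - 1)
    for i in range(p):
        x[i:i + q] += H[i]
        cnt[i:i + q] += 1
    return x / cnt

def iht_hankel_completion(y, mask, r, p=None, iters=200):
    """Toy IHT for spectrally sparse signal recovery: hard-threshold the
    Hankel matrix to rank r, then restore the observed time samples."""
    n = len(y)
    p = p or n // 2
    x = y * mask                                   # zero-filled start
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(hankel(x, p), full_matrices=False)
        x = dehankel((U[:, :r] * s[:r]) @ Vt[:r])  # rank-r truncation
        x[mask] = y[mask]                          # data consistency
    return x
```

FIHT replaces the full SVD per iteration with a cheaper update on a low-dimensional subspace; the sketch keeps the straightforward SVD for clarity.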
In spectroscopy for chemistry, biology, and medical imaging, signals are generally modeled as a superposition of exponential functions. For fast data acquisition or other unavoidable reasons, however, only a small number of samples may be acquired, so recovering the full signal has become an active research topic, yet existing approaches cannot efficiently recover N-dimensional exponential signals with N ≥ 3. In this paper, we study the problem of recovering N-dimensional (particularly N ≥ 3) exponential signals from partial observations, and formulate it as a low-rank tensor completion problem with exponential factor vectors. The full signal is reconstructed by simultaneously exploiting the CANDECOMP/PARAFAC tensor structure and the exponential structure of the associated factor vectors; the latter is promoted by minimizing an objective function involving the nuclear norm of Hankel matrices. Experimental results on simulated and real magnetic resonance spectroscopy data show that the proposed approach can successfully recover full signals from very limited samples and is robust to the estimated tensor rank.
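The link between exponential factor vectors and low-rank Hankel matrices can be checked numerically: a factor vector sampled from a single exponential c·z^t yields a rank-1 Hankel matrix, which is why penalizing the nuclear norm of factor-vector Hankel matrices promotes exponential structure. A minimal sketch (the function name and tolerance are illustrative, not from the paper):

```python
import numpy as np

def hankel_rank(v, tol=1e-8):
    """Numerical rank of the roughly square Hankel matrix of v.
    A single exponential gives rank 1; a sum of r exponentials, rank r."""
    n = (len(v) + 1) // 2
    H = np.array([v[i:i + len(v) - n + 1] for i in range(n)])
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```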
•Enhanced sparse filtering addresses the noise-induced limitations of intelligent fault diagnosis methods in practical applications.
•Four techniques are used to enhance the noise adaptability of the model: L3/2-norm sparse filtering, the Hankel matrix, a normalized weight matrix, and normalized features.
•The proposed method, which works directly on raw vibration signals without any time-consuming denoising preprocessing, is found to be a promising tool for rotating machinery fault diagnosis under working noise.
Intelligent fault diagnosis is an effective way to guarantee the continuous and efficient operation of rotating machinery. Unlike the experimental environment, real-world industrial applications inevitably involve noise, which seriously degrades the performance of intelligent fault diagnosis methods. In view of this, this study aims to provide a method that can accurately diagnose faults in noisy environments. In this paper, we first discuss the characteristics of normalization and the feature-extraction process of sparse filtering. We then propose a novel method based on the L3/2-norm, a Hankel training matrix, a normalized weight matrix, and feature normalization for rotating machinery fault diagnosis in noisy environments. The proposed method is applied to the fault diagnosis of rolling bearings and planetary gearboxes with noise interference. The verification results confirm that the proposed method is a promising tool with strong noise adaptability, trained on the original datasets without any time-consuming denoising preprocessing.
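Two of the ingredients can be illustrated in a few lines: a Hankel training matrix built by sliding a window over the raw vibration signal (which multiplies the number of training samples), and an L3/2-type sparse-filtering objective with two-stage feature normalization. This is a hedged sketch of the general idea; the exact normalization order and objective in the paper may differ:

```python
import numpy as np

def hankel_training_matrix(signal, dim):
    """Slide a unit-stride window of length dim over the raw signal;
    the stacked windows form a Hankel-structured training matrix."""
    return np.array([signal[i:i + dim] for i in range(len(signal) - dim + 1)])

def sparse_filtering_objective(W, X, p=1.5):
    """Sketch of an L_{3/2} sparse-filtering loss: absolute activations
    are l2-normalized per feature, then per sample, and the sum of
    p-th powers (p = 3/2) is minimized over the weights W."""
    F = np.abs(X @ W.T)                                        # activations
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + 1e-12)  # per feature
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)  # per sample
    return float(np.sum(F ** p))
```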
We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods of multipliers (ADMM), proximal point algorithms, and gradient projection methods. We perform computational experiments to compare these methods on system identification and system realization problems. For the system identification problem, the gradient projection method (accelerated by Nesterov's extrapolation techniques) and the proximal point algorithm usually outperform the other first-order methods in terms of CPU time on both real and simulated data, for small and large regularization parameters respectively, while for the system realization problem, ADMM applied to a certain primal reformulation usually outperforms the other first-order methods in terms of CPU time. We also study the convergence of the proximal alternating direction methods of multipliers used in this paper.
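All of the first-order methods listed above repeatedly evaluate the proximal operator of the nuclear norm, i.e. singular-value soft-thresholding; a minimal implementation of that shared building block:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm), the core subproblem in ADMM,
    proximal-point, and projected-gradient schemes."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

In splitting schemes for structured problems, the linear (e.g. Hankel or Toeplitz) structure is typically enforced in a separate projection or linear-map step, so each iteration alternates svt with a structure-restoring update.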
Magnetic resonance imaging (MRI) is an essential tool for clinical diagnosis, but it suffers from long acquisition times. Deep learning, especially deep generative models, offers aggressive acceleration and better reconstruction in MRI. Nevertheless, learning the data distribution as prior knowledge and reconstructing the image from limited data remain challenging. In this work, we propose a novel Hankel-k-space generative model (HKGM) that can generate samples from a training set of as little as one k-space dataset. At the prior learning stage, we first construct a large Hankel matrix from the k-space data and then extract multiple structured k-space patches from it to capture the internal distribution among different patches. Extracting patches from a Hankel matrix enables the generative model to be learned from a redundant, low-rank data space. At the iterative reconstruction stage, the desired solution obeys the learned prior: the intermediate reconstruction is updated by feeding it to the generative model, and the updated result is then refined alternately by imposing a low-rank penalty on its Hankel matrix and a data-consistency constraint on the measured data. Experimental results confirm that the internal statistics of patches within a single k-space dataset carry enough information to learn a powerful generative model and provide state-of-the-art reconstruction.
In stochastic subspace identification methods, the modal analysis results depend strongly on the dimensions of the Hankel matrix. Increasing these dimensions (especially the number of rows) improves the estimation of the modal features by reducing the impact of noise. Owing to processing time and memory constraints, however, the Hankel matrix cannot simply be set to its maximum compatible size. Therefore, this study uses a sensitivity analysis of the Hankel matrix dimensions to pick models with the smallest estimation error in the data-driven (DD-SSI) method. First, using the condition number criterion, the desirable models in which system errors do not predominate are determined. Second, the modal properties of the desired models are validated by clustering the damping ratios and modal frequencies with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm and by assessing the complexity of the mode shapes with the Modal Complexity Factor (MCF). Finally, the optimal models are identified by analyzing the estimation error of the modal properties (particularly the damping ratio) and the coefficient of variation (CV) of the damping of the validated clusters. The proposed method was investigated on a two-dimensional simulated concrete building frame, a three-dimensional experimental model, and ambient vibration tests of the Namin city Overpass Bridge. The sensitivity analysis was performed for the canonical correlation analysis (SSI-CCA) and canonical variate analysis (SSI-CVA) methods. Analyses of the computational and laboratory models revealed that non-physical modes are relatively likely to arise beyond the system's maximum order. The MCF also proved valuable for identifying computational and noise modes; these criteria accurately detected the flexural mode of the foundation set, which had been clustered as a stable mode of the slab deck in the practical bridge test.
Furthermore, the CV analysis revealed that the desired dimension obtained from the condition number could be a reasonable estimate of the optimal system dimension.
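The condition-number screening step can be pictured as a simple sweep over candidate Hankel row counts; this is only a rough illustration of the criterion on a raw data Hankel matrix, not the full DD-SSI pipeline:

```python
import numpy as np

def hankel_condition_sweep(y, row_counts):
    """For each candidate row count p, build the p x (n-p+1) data Hankel
    matrix and report its condition number; dimensions where system
    errors do not dominate tend to show moderate condition numbers."""
    cond = {}
    for p in row_counts:
        H = np.array([y[i:i + len(y) - p + 1] for i in range(p)])
        cond[p] = float(np.linalg.cond(H))
    return cond
```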
•A methodology is proposed to remove baseline wander and power line interference from ECG signals.
•An eigenvalue decomposition based method is proposed in this work.
•The relationship between the eigenvalues of the ECG signal and its baseline wander and power line interference components is explored.
In this paper, a novel method is proposed for removing baseline wander (BW) and power line interference (PLI) from electrocardiogram (ECG) signals. The proposed methodology is based on the eigenvalue decomposition of the Hankel matrix. It has been observed that the end-point eigenvalues of the Hankel matrix formed from noisy ECG signals are correlated with the BW and PLI components. We therefore remove BW and PLI by eliminating the eigenvalues corresponding to the noisy components, using a one-step process that removes both types of noise simultaneously. The proposed method has been compared with existing methods using two performance measures, namely the output signal-to-noise ratio (SNRout) and the percent root-mean-square difference (PRD). Simulation results show that the proposed method outperforms the compared methods at different noise levels and is well suited for preprocessing ECG signals.
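A hedged sketch of the underlying mechanics: the square Hankel matrix of an odd-length signal is symmetric (H[i, j] = x[i + j]), so it admits an eigendecomposition, and components can be removed by zeroing selected eigenvalues and averaging the anti-diagonals back into a signal. Which eigenvalue indices carry BW or PLI is exactly what the paper characterizes; here `drop` is left as a user choice:

```python
import numpy as np

def evd_hankel_filter(x, drop):
    """EVD-based Hankel filtering sketch for an odd-length signal x:
    zero the eigenvalues whose rank (by absolute value) is in `drop`,
    then reconstruct by anti-diagonal averaging."""
    n = (len(x) + 1) // 2
    H = np.array([x[i:i + n] for i in range(n)])   # symmetric square Hankel
    w, V = np.linalg.eigh(H)
    order = np.argsort(-np.abs(w))                 # largest magnitude first
    w[order[list(drop)]] = 0.0
    Hf = (V * w) @ V.T
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(n):
        out[i:i + n] += Hf[i]
        cnt[i:i + n] += 1
    return out / cnt
```

As a sanity check, a single real sinusoid has a rank-2 Hankel matrix, so keeping only the two largest-magnitude eigenvalues reproduces it.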
This paper presents a new framework for improving the quality of streaming synchrophasor measurements in the presence of missing data and bad data. The method exploits the low-rank property of the Hankel structure to identify and correct bad data, as well as to estimate and fill in the missing data. It is advantageous compared with existing methods in the literature, which only estimate missing data by leveraging the low-rank property of the synchrophasor data observation matrix. The proposed algorithm can efficiently differentiate event data from bad data, even in the presence of simultaneous and consecutive bad data, and has been verified through numerical experiments on recorded synchrophasor datasets.
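A heuristic sketch of how Hankel low-rankness separates an isolated bad sample from a smooth ambient signal: project the Hankel matrix of the stream onto its best rank-r approximation, map it back by anti-diagonal averaging, and flag samples with a large residual. This is illustrative thresholding, not the paper's identification-and-correction algorithm:

```python
import numpy as np

def flag_bad_samples(y, r, p, thresh):
    """Rank-r Hankel projection of the measurement stream y; samples
    whose residual against the smoothed estimate exceeds `thresh` are
    flagged as candidate bad data."""
    n = len(y)
    q = n - p + 1
    H = np.array([y[i:i + q] for i in range(p)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]                # best rank-r fit
    est = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(p):
        est[i:i + q] += L[i]
        cnt[i:i + q] += 1
    est /= cnt                                     # anti-diagonal average
    return np.abs(y - est) > thresh, est
```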
For heterogeneous wireless sensor networks (WSNs) with various types of sensors, compressive data gathering requires more measurements because of the increased number of attributes. In this letter, a compressive multi-attribute data gathering method using a low-rank Hankel matrix is proposed to reduce the required measurements and improve the recovery accuracy in heterogeneous WSNs. Beyond exploiting the spatiotemporal correlation of the raw sensed data through compressed sensing, the proposed method enforces a low-rank block Hankel structure to exploit the inherent correlation among multi-attribute data. Experimental results demonstrate that the proposed method significantly improves the recovery accuracy of multi-attribute data compared with existing WSN solutions.
In this article, a unified identification framework called the constrained subspace method for structured state-space models (COSMOS) is presented, where the structure is defined by a user-specified linear or polynomial parametrization. The new approach operates directly on the input and output data, unlike the traditional two-step method that first obtains a state-space realization and then estimates the system parameters. The framework relies on a subspace-inspired linear regression problem, which may not yield a consistent estimate in the presence of process noise. To alleviate this, the linear regression formulation is augmented with structured and low-rank constraints in terms of a finite set of system Markov parameters and the user-specified model parameters. The nonconvex nature of the constrained optimization problem is handled by transforming it into a difference-of-convex program, which is then solved by a sequential convex programming strategy. Numerical simulation examples show that the proposed identification method is more robust than the classical prediction-error method initialized with random values in terms of converging to local minima, albeit at the cost of a heavier computational burden.