A galloping-based piezoelectric energy harvester is a micro-environmental energy harvesting device driven by flow-induced vibrations. A novel tristable galloping-based piezoelectric energy harvester is constructed by introducing a nonlinear magnetic force into the traditional galloping-based piezoelectric energy harvester. Based on Euler–Bernoulli beam theory and Kirchhoff’s law, the corresponding aero-electromechanical model is proposed and validated by a series of wind tunnel experiments. A parametric study is performed to analyse the response of the tristable galloping-based piezoelectric energy harvester. Numerical results show that, compared with the galloping-based piezoelectric energy harvester, the mechanism of the tristable harvester is more complex. As the wind speed increases, the vibration of the bluff body passes through three branches: intra-well oscillations, chaotic oscillations, and inter-well oscillations. The threshold wind speed of the presented harvester for efficient energy harvesting is 1.0 m/s, a 33% reduction compared with the galloping-based piezoelectric energy harvester. The maximum output power of the presented harvester is 0.73 mW at a wind speed of 7.0 m/s, an increase of 35.3%. Compared with the traditional galloping-based piezoelectric energy harvester, the presented tristable harvester therefore offers better energy harvesting performance from flow-induced vibrations.
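The tristable aero-electromechanical dynamics described above can be illustrated with a minimal lumped-parameter sketch: a single-mode equivalent of the Euler–Bernoulli beam with a quintic (tristable) restoring force, a quasi-steady cubic galloping force, and Kirchhoff's law for the load circuit. All symbols and numerical values below are illustrative assumptions, not the paper's identified parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped parameters (assumptions, not identified values)
m, c, U = 0.05, 0.01, 3.0                 # mass [kg], damping, wind speed [m/s]
rho, D, L = 1.225, 0.02, 0.1              # air density, bluff-body width, length
k1, k3, k5 = 20.0, 4.0e4, 1.0e7           # quintic (tristable) stiffness terms
a1, a3 = 2.3, 18.0                        # quasi-steady galloping coefficients
theta, Cp, R = 1.0e-4, 1.0e-7, 1.0e5      # coupling, capacitance, load resistance

def rhs(t, s):
    y, v, V = s                                    # displacement, velocity, voltage
    f_rest = k1 * y - k3 * y**3 + k5 * y**5        # tristable restoring force
    f_aero = 0.5 * rho * U**2 * D * L * (a1 * (v / U) - a3 * (v / U) ** 3)
    dv = (-c * v - f_rest + f_aero + theta * V) / m
    dV = (-theta * v - V / R) / Cp                 # Kirchhoff's law for the load
    return [v, dv, dV]

sol = solve_ivp(rhs, (0.0, 2.0), [0.001, 0.0, 0.0], max_step=1e-3)
power = np.mean(sol.y[2] ** 2) / R                 # mean power in the load
```

Sweeping `U` in such a model is how the intra-well, chaotic, and inter-well branches would be mapped against wind speed.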
Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and their performance is therefore limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, is proposed for image classification. First, the locality information is preserved using the graph Laplacian matrix of the learned dictionary rather than the conventional one derived from the training samples. Then, the label embedding term is constructed using the label information of atoms instead of the classification error term, so that it carries the discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction are effective for image classification. Experimental results demonstrate that the LCLE-DL algorithm achieves better performance than several state-of-the-art algorithms.
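The locality constraint built from the learned dictionary can be sketched as a k-nearest-neighbour graph Laplacian over the atoms; the Gaussian affinity and the choice of `k` here are assumptions, not the paper's exact construction.

```python
import numpy as np

def knn_laplacian(D, k=2):
    """Unnormalized graph Laplacian over the columns (atoms) of D.

    A symmetrized k-nearest-neighbour Gaussian affinity stands in for
    the locality constraint; the kernel width is a heuristic choice.
    """
    n = D.shape[1]
    sq = np.sum(D**2, axis=0)                       # squared atom norms
    dist2 = sq[:, None] + sq[None, :] - 2.0 * D.T @ D
    np.fill_diagonal(dist2, np.inf)                 # exclude self-loops
    sigma2 = np.median(dist2[np.isfinite(dist2)])   # heuristic kernel width
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist2[i])[:k]             # k nearest atoms
        W[i, nbrs] = np.exp(-dist2[i, nbrs] / sigma2)
    W = np.maximum(W, W.T)                          # symmetrize the affinity
    return np.diag(W.sum(axis=1)) - W               # L = S - W

rng = np.random.default_rng(0)
L_mat = knn_laplacian(rng.standard_normal((10, 6)), k=2)
```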
Locally linear embedding (LLE) is one of the best-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity, or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the case of small sample sizes.
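For context, the local reconstruction weights at the heart of LLE (and hence of its linear and sparse extensions) can be sketched as follows; the regularization constant is the standard conditioning trick, not a value from the paper.

```python
import numpy as np

def lle_weights(X, i, nbrs, reg=1e-3):
    """Reconstruction weights of sample x_i from its neighbours (rows of X).

    Solves the regularized local Gram system used by LLE; the ridge term
    `reg` only conditions the solve when the Gram matrix is singular.
    """
    Z = X[nbrs] - X[i]                      # centre neighbours on x_i
    G = Z @ Z.T                             # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(nbrs))
    w = np.linalg.solve(G, np.ones(len(nbrs)))
    return w / w.sum()                      # enforce sum-to-one constraint

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
w = lle_weights(X, 0, [1, 2, 3, 4])
```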
Robustness to noise, outliers, and corruptions is an important issue in linear dimensionality reduction. Because sample-specific corruptions and outliers destroy the class-specific structure or the local geometric structure, many existing methods, including the popular manifold-learning-based linear dimensionality reduction methods, fail to achieve good performance in recognition tasks. In this paper, we focus on unsupervised robust linear dimensionality reduction on corrupted data by introducing the robust low-rank representation (LRR). A robust linear dimensionality reduction technique termed low-rank embedding (LRE) is thus proposed, which provides a robust image representation that uncovers the potential relationships among the images and reduces the negative influence of occlusion and corruption, thereby enhancing the algorithm's robustness in image feature extraction. LRE searches for the optimal LRR and the optimal subspace simultaneously. The LRE model can be solved by alternately iterating the augmented Lagrange multiplier method and the eigendecomposition. Theoretical analysis of the algorithms, including convergence analysis and computational complexity, is presented. Experiments on several well-known databases with different corruptions show that LRE is superior to previous feature extraction methods, indicating the robustness of the proposed method. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.
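The nuclear-norm subproblem inside an augmented-Lagrange-multiplier solver for low-rank models such as LRE reduces to singular value thresholding, which can be sketched as:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, i.e. the core subproblem in ALM solvers for low-rank models."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink small modes away

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))  # rank-3 matrix
A_noisy = A + 0.01 * rng.standard_normal((8, 8))               # full-rank corruption
A_low = svt(A_noisy, tau=0.5)   # thresholding suppresses the noise-induced modes
```

The threshold `tau` plays the role of the penalty weight in the augmented Lagrangian; its value here is purely illustrative.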
As an important biometric feature, human gait has great potential in video-surveillance-based applications. In this paper, we focus on matrix-representation-based human gait recognition and propose a novel discriminant subspace learning method called sparse bilinear discriminant analysis (SBDA). SBDA extends the recently proposed matrix-representation-based discriminant analysis methods to sparse cases. By introducing the L1- and L2-norms into the objective function of SBDA, two interrelated sparse discriminant subspaces can be obtained for gait feature extraction. Since the optimization problem has no closed-form solution, an iterative method is designed to compute the optimal sparse subspaces using L1- and L2-norm sparse regression. Theoretical analyses reveal the close relationship between SBDA and previous matrix-representation-based discriminant analysis methods. Since each nonzero element in each subspace is selected from the most important variables/factors, SBDA has the potential to perform comparably to or even better than the state-of-the-art subspace learning methods in gait recognition. Moreover, using the strategy of SBDA plus linear discriminant analysis (LDA), we can further improve the performance. A set of experiments on the standard USF HumanID and CASIA gait databases demonstrates that the proposed SBDA and SBDA + LDA can obtain competitive performance.
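The L1 sparse-regression subroutine that selects the nonzero elements of a discriminant direction can be sketched with plain coordinate-descent Lasso; this is a generic stand-in, not the paper's exact SBDA solver.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    col_sq = np.sum(X**2, axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]          # residual without coord j
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
y = X[:, 0] - 0.5 * X[:, 3] + 0.01 * rng.standard_normal(50)  # two active features
w = lasso_cd(X, y, lam=5.0)   # large lam drives irrelevant coefficients to zero
```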
This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate RLSL as a joint optimization problem over both the latent subspace and the classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and the objective outputs and 2) the reconstruction error between the learned data representation and the original inputs. The latent subspace serves as a bridge that is expected to seamlessly connect the original visual features with their class labels and hence improve the overall prediction performance. RLSL combines feature learning with classification so that the learned data representation in the latent subspace is more discriminative for classification. To learn a robust latent subspace, we use a sparse term to compensate for the error, which helps suppress the interference of noise by weakening its response during regression. An efficient optimization algorithm is designed to solve the proposed optimization problem. To validate the effectiveness of the proposed RLSL method, we conduct experiments on diverse databases, and encouraging recognition results are achieved compared with many state-of-the-art methods.
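A sketch of evaluating an RLSL-style joint objective (regression loss plus reconstruction error plus a sparse error term) is shown below; the weighting coefficients and the exact placement of the sparse term are assumptions, not the paper's formulation.

```python
import numpy as np

def rlsl_objective(X, Y, P, W, Z, E, alpha=1.0, beta=0.1):
    """Joint latent-subspace objective of the RLSL flavour.

    Regression loss + reconstruction error + L1 sparse error term;
    alpha, beta and the error placement are illustrative assumptions.
    """
    regress = np.linalg.norm(Y - W @ Z, "fro") ** 2    # labels vs latent codes
    recon = np.linalg.norm(X - P @ Z - E, "fro") ** 2  # inputs vs latent codes
    sparse = np.abs(E).sum()                           # sparse noise compensation
    return regress + alpha * recon + beta * sparse

rng = np.random.default_rng(4)
d, r, c, n = 6, 3, 2, 10
X, Y = rng.standard_normal((d, n)), rng.standard_normal((c, n))
P, W = rng.standard_normal((d, r)), rng.standard_normal((c, r))
Z, E = rng.standard_normal((r, n)), np.zeros((d, n))
obj = rlsl_objective(X, Y, P, W, Z, E)
```

An alternating solver would update each of `P`, `W`, `Z`, and `E` in turn while holding the others fixed.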
Compact hash code learning has been widely applied to fast similarity search owing to its significantly reduced storage and highly efficient query speed. However, it is still challenging to learn discriminative binary codes that fully preserve the pairwise similarities embedded in the high-dimensional real-valued features, such that promising performance can be guaranteed. To overcome this difficulty, in this paper, we propose a novel scalable supervised asymmetric hashing (SSAH) method, which can skillfully approximate the full pairwise similarity matrix based on the maximum asymmetric inner product of two different non-binary embeddings. In particular, to comprehensively explore the semantic information of the data, the supervised label information and the refined latent feature embedding are simultaneously considered to construct a high-quality hashing function and boost the discriminability of the learned binary codes. Specifically, SSAH learns two distinct hashing functions by jointly minimizing the regression loss on the semantic label alignment and the encoding loss on the refined latent features. More importantly, instead of using only part of the similarity correlations of the data, the full pairwise similarity matrix is directly utilized to avoid information loss and performance degeneration, and the cumbersome computation on the n × n matrix can be dexterously manipulated during the optimization phase. Furthermore, an efficient alternating optimization scheme with guaranteed convergence is designed to address the resulting discrete optimization problem. The encouraging experimental results on diverse benchmark datasets demonstrate the superiority of the proposed SSAH method in comparison with many recently proposed hashing algorithms.
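The key manipulation, fitting an asymmetric product to the full n × n similarity matrix without paying the n × n cost in the cross terms, can be sketched as follows; this is an illustrative expansion of the Frobenius loss, not the paper's exact derivation.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, c = 40, 8, 3
labels = rng.integers(0, c, n)
Y = np.eye(c)[labels]                     # one-hot labels, n x c
B = np.sign(rng.standard_normal((n, r)))  # binary codes in {-1, +1}
V = rng.standard_normal((n, r))           # real-valued (non-binary) embedding

# Full pairwise similarity: +1 for same-class pairs, -1 otherwise
S = 2.0 * (Y @ Y.T) - 1.0

# Direct loss ||r*S - B V^T||_F^2 materializes the n x n matrix ...
direct = np.linalg.norm(r * S - B @ V.T, "fro") ** 2

# ... but expanding the square leaves only small c x r and length-r
# products, so S never has to be formed explicitly.
norm_S2 = float(n * n)  # every entry of S is +/-1
tr_SBV = 2.0 * np.trace((V.T @ Y) @ (Y.T @ B)) - V.sum(axis=0) @ B.sum(axis=0)
expanded = r**2 * norm_S2 - 2.0 * r * tr_SBV + np.linalg.norm(B @ V.T, "fro") ** 2
```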
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly restricts the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features into the target space owing to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework and develop two robust linear regression models with the following characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration and can therefore be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB code of our methods is available at http://www.yongxu.org/lunwen.html.
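The closed-form per-iteration step of such a relaxed-target regression can be sketched with a ridge solve over epsilon-dragged targets (the elastic-net step reduces to this ridge solve when the L1 weight is zero); `eps` and `lam` are illustrative values, not the paper's settings.

```python
import numpy as np

def margin_regression(X, labels, lam=0.1, eps=0.5):
    """One closed-form ridge step over epsilon-dragged targets.

    The strict zero-one targets are relaxed outwards (1 -> 1+eps,
    0 -> -eps) to enlarge class margins; lam and eps are illustrative.
    """
    c = labels.max() + 1
    T = np.eye(c)[labels]                      # n x c zero-one targets
    T = T + eps * np.where(T > 0, 1.0, -1.0)   # relaxed variable target matrix
    # Ridge-regularized least squares: W = (X^T X + lam I)^{-1} X^T T
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)

rng = np.random.default_rng(6)
X = np.vstack([rng.standard_normal((25, 4)) + m for m in (0.0, 3.0)])
labels = np.repeat([0, 1], 25)
Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias feature
W = margin_regression(Xa, labels)
pred = np.argmax(Xa @ W, axis=1)               # classify by largest response
```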
A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L2-norm as the metric. In this paper, a series of methods based on the L2,1-norm is proposed for linear dimensionality reduction. Since the L2,1-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms obtain competitive performance compared with previous L2-norm-based subspace learning algorithms.
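The L2,1-norm and the rotational invariance it confers can be checked in a few lines:

```python
import numpy as np

def l21_norm(A):
    """L2,1-norm: the sum of the L2-norms of the rows of A."""
    return np.sum(np.sqrt(np.sum(A**2, axis=1)))

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))
# Right-multiplying by an orthogonal matrix Q leaves every row norm,
# and hence the L2,1-norm, unchanged: this is the rotational invariance.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
```

Unlike the squared Frobenius norm, each row contributes only linearly, which is why a single outlier sample cannot dominate the objective.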
The classical linear discriminant analysis has undergone great development and has recently been extended to different cases. In this paper, a novel discriminant subspace learning method called sparse tensor discriminant analysis (STDA) is proposed, which further extends the recently presented multilinear discriminant analysis to a sparse case. By introducing the L1- and L2-norms into the objective function of STDA, we can obtain multiple interrelated sparse discriminant subspaces for feature extraction. As there are no closed-form solutions, the k-mode optimization technique and L1-norm sparse regression are combined to iteratively learn the optimal sparse discriminant subspaces along the different modes of the tensors. Moreover, since each nonzero element in each subspace is selected from the most important variables/factors, STDA has the potential to perform better than other discriminant subspace methods. Extensive experiments on face databases (the Yale, FERET, and CMU PIE face databases) and the Weizmann action database show that the proposed STDA algorithm achieves the most competitive performance among the compared tensor-based methods, particularly with small sample sizes.
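The k-mode optimization relies on mode-n matricization (unfolding) of the data tensors, which can be sketched as follows; the column ordering below is one common convention, not necessarily the paper's.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest,
    so each row collects the entries sharing one index along that mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)                 # a small 3rd-order tensor
U0, U1, U2 = unfold(T, 0), unfold(T, 1), unfold(T, 2)
```

An alternating solver optimizes the projection for one mode on the corresponding unfolding while the other modes' projections stay fixed.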