In this paper, we study the distributed synchronization and distributed pinning synchronization of stochastic coupled neural networks via randomly occurring control. Two Bernoulli stochastic variables are used to describe the occurrences of the distributed adaptive control and of the updating law according to certain probabilities. Both the distributed adaptive control and the updating law for each vertex in the network depend on the state information of that vertex's neighborhood. By constructing appropriate Lyapunov functions and employing stochastic analysis techniques, we prove that the distributed synchronization and the distributed pinning synchronization of stochastic complex networks can be achieved in mean square. Additionally, randomly occurring distributed control is compared with periodically intermittent control. It is revealed that, although randomly occurring control is intermediate among the three types of control in terms of control cost and convergence rate, it has fewer implementation restrictions and can be applied in practice more easily than periodically intermittent control.
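As a toy illustration of the Bernoulli-gating idea above (a scalar sketch of our own, not the coupled network model analyzed in the paper): feedback is applied at each step only with probability p, yet the closed-loop state still contracts when p is large enough.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p, steps=2000, dt=0.01, a=1.0, k=5.0, x0=1.0):
    """Euler simulation of the unstable scalar system dx = a*x*dt,
    with Bernoulli-gated feedback: the control u = -k*x is applied
    at each step only with probability p (randomly occurring control)."""
    x = x0
    for _ in range(steps):
        u = -k * x if rng.random() < p else 0.0
        x += (a * x + u) * dt
    return abs(x)

final_controlled = simulate(0.9)    # control occurs 90% of the time
final_uncontrolled = simulate(0.0)  # no control: the state diverges
```

With p = 0.9 the average per-step growth factor is below one, so the state converges despite the control being absent part of the time; with p = 0 the open-loop instability dominates.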
Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition (SVD). We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in small-sample-size cases.
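The "iterate the modified elastic net and SVD" computation can be sketched generically as an alternating scheme (all names, the affinity matrix M, and the simple proximal stand-in for the modified elastic net are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def enet_prox(Z, lam1=0.1, lam2=0.1):
    # Elastic-net proximal step: soft-threshold (L1 part), then
    # shrink (L2 part). A simple stand-in for the "modified
    # elastic net" update of the framework.
    return np.sign(Z) * np.maximum(np.abs(Z) - lam1, 0.0) / (1.0 + lam2)

def sparse_embedding(M, d, iters=50):
    # Alternate a sparse factor W (elastic-net step) with an
    # orthogonal factor A (SVD / Procrustes step) for a given
    # neighborhood-preserving matrix M.
    A = np.linalg.qr(rng.standard_normal((M.shape[0], d)))[0]
    W = None
    for _ in range(iters):
        W = enet_prox(M @ A)                           # sparse update
        U, _, Vt = np.linalg.svd(M @ W, full_matrices=False)
        A = U @ Vt                                     # orthogonal update
    return A, W

X = rng.standard_normal((30, 30))
M = X @ X.T / 30        # stand-in for an LLE-style affinity matrix
A, W = sparse_embedding(M, 3)
```

The SVD step keeps A orthonormal (as in ONPP), while the proximal step zeroes out small entries of W, yielding sparse embedding directions.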
Robustness to noise, outliers, and corruptions is an important issue in linear dimensionality reduction. When sample-specific corruptions and outliers exist, the class-specific structure or the local geometric structure is destroyed, and thus many existing methods, including the popular manifold-learning-based linear dimensionality reduction methods, fail to achieve good performance in recognition tasks. In this paper, we focus on unsupervised robust linear dimensionality reduction on corrupted data by introducing the robust low-rank representation (LRR). Thus, a robust linear dimensionality reduction technique termed low-rank embedding (LRE) is proposed, which provides a robust image representation that uncovers the potential relationships among the images and reduces the negative influence of occlusion and corruption, so as to enhance the algorithm's robustness in image feature extraction. LRE searches for the optimal LRR and the optimal subspace simultaneously. The LRE model can be solved by alternately iterating the augmented Lagrangian multiplier method and eigendecomposition. The theoretical analysis of the algorithms, including convergence analysis and computational complexity, is presented. Experiments on some well-known databases with different corruptions show that LRE is superior to previous feature extraction methods, which indicates the robustness of the proposed method. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.
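The core low-rank update inside augmented Lagrangian multiplier schemes for LRR-type models is singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch (the synthetic data and the threshold value are illustrative, not from the paper):

```python
import numpy as np

def svt(Z, tau):
    # Singular value thresholding: shrink each singular value by
    # tau and truncate at zero -- the proximal operator of the
    # nuclear norm used in ALM solvers for low-rank models.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 ground truth
noisy = L + 0.01 * rng.standard_normal((20, 20))                 # small corruption
recovered = svt(noisy, tau=0.5)   # low-rank estimate of L
```

Because the corruption's singular values fall below the threshold, they are truncated away, and the recovered matrix is close to the rank-2 ground truth.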
Compact hash code learning has been widely applied to fast similarity search owing to its significantly reduced storage and highly efficient query speed. However, it is still a challenging task to learn discriminative binary codes that fully preserve the pairwise similarities embedded in the high-dimensional real-valued features, such that promising performance can be guaranteed. To overcome this difficulty, in this paper, we propose a novel scalable supervised asymmetric hashing (SSAH) method, which can skillfully approximate the full pairwise similarity matrix based on the maximum asymmetric inner product of two different non-binary embeddings. In particular, to comprehensively explore the semantic information of the data, the supervised label information and the refined latent feature embedding are simultaneously considered to construct a high-quality hashing function and boost the discriminability of the learned binary codes. Specifically, SSAH learns two distinctive hashing functions by jointly minimizing the regression loss on the semantic label alignment and the encoding loss on the refined latent features. More importantly, instead of using only part of the similarity correlations of the data, the full pairwise similarity matrix is directly utilized to avoid information loss and performance degeneration, and its cumbersome computational cost on the n × n matrix can be dexterously handled during the optimization phase. Furthermore, an efficient alternating optimization scheme with guaranteed convergence is designed to address the resulting discrete optimization problem. The encouraging experimental results on diverse benchmark datasets demonstrate the superiority of the proposed SSAH method in comparison with many recently proposed hashing algorithms.
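A small self-contained illustration of the asymmetric idea (the construction below is ours, not the SSAH optimizer): a binary code matrix B on one side and a real-valued embedding V on the other can reproduce a full pairwise similarity matrix S exactly, something a symmetric binary-binary product generally cannot do.

```python
import numpy as np

n, c, k = 100, 4, 16
rng = np.random.default_rng(3)
labels = rng.integers(0, c, size=n)
S = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)  # full pairwise similarity

# One fixed +/-1 codeword per class, constructed so the codewords
# are linearly independent; B stacks one codeword per sample.
codewords = np.ones((c, k))
for i in range(c):
    codewords[i, i * 4:(i + 1) * 4] = -1.0
B = codewords[labels]

# The real-valued side V is solved in closed form by least squares,
# so that S ≈ (1/k) * B @ V.T (the asymmetric inner product).
V = np.linalg.lstsq(B, k * S, rcond=None)[0].T
approx = B @ V.T / k
err = np.abs(S - approx).max()   # exact up to floating point
```

Since every column of S lies in the column space of B, the least-squares projection recovers S exactly, and the n × n matrix never needs to be factorized symmetrically in the binary domain.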
This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate the RLSL problem as a joint optimization over both the latent subspace learning and the classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and the objective outputs and 2) the reconstruction error between the learned data representation and the original inputs. The latent subspace can be used as a bridge that is expected to seamlessly connect the original visual features and their class labels and hence improve the overall prediction performance. RLSL combines feature learning with classification so that the learned data representation in the latent subspace is more discriminative for classification. To learn a robust latent subspace, we use a sparse term to compensate for errors, which helps suppress the interference of noise by weakening its response during regression. An efficient optimization algorithm is designed to solve the proposed optimization problem. To validate the effectiveness of the proposed RLSL method, we conduct experiments on diverse databases, and encouraging recognition results are achieved compared with many state-of-the-art methods.
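In symbols, a joint objective of the kind described above might take the following form (the notation X, Y, Z, W, P, E and the trade-off weights are our illustrative guesses, not the paper's exact formulation): data X, labels Y, latent representation Z, with a sparse error term E absorbing sample-specific noise.

```latex
\min_{Z,\,W,\,P,\,E}\;
\underbrace{\|Y - W Z\|_F^2}_{\text{regression loss}}
\;+\; \alpha\, \underbrace{\|X - P Z - E\|_F^2}_{\text{reconstruction error}}
\;+\; \lambda\, \underbrace{\|E\|_1}_{\text{sparse error term}}
```

The first term ties the latent representation Z to the class labels, the second ties it back to the original inputs, and E keeps corrupted entries from distorting either fit.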
Bacterial inactivation by magnetic photocatalysts has received increasing interest owing to the ease of recovering and reusing the photocatalysts. This study investigated bacterial inactivation by a magnetic photocatalyst, Fe2O3–AgBr, under the irradiation of a commercially available light-emitting diode (LED) lamp. The effects of different factors on the inactivation of Escherichia coli were also evaluated in terms of inactivation efficiency. The results showed that Fe2O3–AgBr was able to inactivate both Gram-negative (E. coli) and Gram-positive (Staphylococcus aureus) bacteria. Bacterial inactivation by Fe2O3–AgBr was more favorable at high temperature and alkaline pH. The presence of Ca2+ promoted bacterial inactivation, while the presence of SO42− was inhibitory. The mechanisms of photocatalytic bacterial inactivation were systematically studied; the effects of various specific reactive-species scavengers and of argon suggest that Fe2O3–AgBr inactivates bacterial cells through oxidation by H2O2 generated from the photo-generated electrons and through direct oxidation by the photo-generated holes. The detection of different reactive species further supported the proposed mechanisms. These results provide information for evaluating the bacterial inactivation performance of Fe2O3–AgBr under different conditions. More importantly, bacterial inactivation over five consecutive cycles demonstrated that Fe2O3–AgBr exhibits highly stable bactericidal activity, suggesting that magnetic Fe2O3–AgBr has great potential for water disinfection.
•The bactericidal ability of magnetic Fe2O3–AgBr under an LED lamp was demonstrated.
•The effects of various factors on bacterial inactivation by Fe2O3–AgBr were studied.
•Fe2O3–AgBr stably inactivated 7-log of Escherichia coli in five repeated cycles.
•Fe2O3–AgBr inactivated bacteria by oxidation of H2O2 and direct oxidation of h+.
Recently, hash learning has attracted great attention since it enables fast image retrieval on large-scale data sets by using a series of discriminative binary codes. Popular methods include manifold-based hashing methods, which aim to learn the binary codes by embedding the original high-dimensional data into a low-dimensional intrinsic subspace. However, most of these methods relax the discrete constraint so that the final binary codes can be computed more easily, which increases the information loss. In this paper, we propose a novel jointly sparse regression model that minimizes the locality information loss and yields a jointly sparse hashing method. The proposed model integrates locality, joint sparsity, and the rotation operation in a seamless formulation. Thus, the drawback of previous methods that use two separate and independent stages, such as PCA-ITQ and similar methods, can be addressed. Moreover, since we introduce joint sparsity, feature extraction and jointly sparse feature selection can be realized in a single projection operation, which has the potential to select more important features. The convergence of the proposed algorithm is proved, and the essence of the iterative procedures is also revealed. The experimental results on large-scale data sets demonstrate the performance of the proposed method. The source code can be downloaded from http://www.scholat.com/laizhihui.
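For context, the two-stage baseline contrasted above (the ITQ rotation stage of PCA-ITQ) alternates binarization with an orthogonal Procrustes update; a compact sketch on already-projected data V (dimensions and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def itq_rotation(V, iters=50):
    # Classic ITQ-style alternation: fix codes B = sign(V R), then
    # update the rotation R via an orthogonal Procrustes problem
    # (SVD of V^T B).
    d = V.shape[1]
    R = np.linalg.qr(rng.standard_normal((d, d)))[0]
    for _ in range(iters):
        B = np.sign(V @ R)
        B[B == 0] = 1.0
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt          # argmin_R ||B - V R||_F over rotations
    B = np.sign(V @ R)
    B[B == 0] = 1.0
    return R, B

V = rng.standard_normal((200, 8))   # e.g., PCA-projected features
R, B = itq_rotation(V)
```

The rotation is learned only after the projection is fixed, which is exactly the separation the proposed single-stage formulation avoids.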
Ridge regression (RR) and its extended versions are widely used as effective feature extraction methods in pattern recognition. However, RR-based methods are sensitive to variations of the data and can learn only a limited number of projections for feature extraction and recognition. To address these problems, we propose a new method called robust discriminant regression (RDR) for feature extraction. To enhance robustness, the L2,1-norm is used as the basic metric in the proposed RDR. The designed robust objective function in regression form can be solved by an iterative algorithm involving an eigenfunction, through which the optimal orthogonal projections of RDR can be obtained by eigendecomposition. The convergence analysis and computational complexity are presented. In addition, we explore the intrinsic connections and differences between RDR and several previous methods. Experiments on some well-known databases show that RDR is superior to classical and very recently proposed methods reported in the literature, whether they are based on the L2-norm or the L2,1-norm. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.
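The robustness of the L2,1-norm metric can be seen in a two-line sketch (the toy error matrix is illustrative): it sums the unsquared L2 norms of the rows, so a corrupted sample contributes linearly rather than quadratically.

```python
import numpy as np

def l21_norm(E):
    # L2,1-norm: sum of the L2 norms of the rows of E.
    return np.sqrt((E ** 2).sum(axis=1)).sum()

E = np.zeros((5, 3))
E[0] = [3.0, 4.0, 0.0]   # one corrupted sample (row), with L2 norm 5
# l21_norm(E) = 5.0, while the squared Frobenius norm (as in an
# ordinary L2 regression loss) would charge 25.0 for the same row.
```

This linear (rather than quadratic) penalty on per-sample residuals is what keeps a few badly corrupted samples from dominating the objective.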
In this paper, the stability problem is studied for a class of stochastic neural networks (NNs) with local impulsive effects. The impulsive effects considered can be nonidentical not only across different dimensions of the system state but also across distinct impulsive instants. Hence, the impulses here encompass several typical types of impulses in NNs. The aim of this paper is to derive stability criteria such that stochastic NNs with local impulsive effects are exponentially stable in mean square. By means of the mathematical induction method, several easy-to-check conditions are obtained to ensure the mean-square stability of the NNs. Three examples are given to show the effectiveness of the proposed stability criteria.