In the past decade, great efforts have been made to extend linear discriminant analysis to higher-order data classification, a family of methods generally referred to as multilinear discriminant analysis (MDA). Existing examples include general tensor discriminant analysis (GTDA) and discriminant analysis with tensor representation (DATER). Both methods attempt to resolve the problem of tensor mode dependency by iterative approximation. GTDA is known to be the first MDA method that converges over iterations; however, its performance relies heavily on tuning the parameter in the scatter difference criterion. Although DATER usually yields better classification performance, it does not converge, and the number of iterations executed has a direct impact on its performance. In this paper, we propose a closed-form solution to the scatter difference objective in GTDA, namely direct GTDA (DGTDA), which also eliminates parameter tuning. We demonstrate that DGTDA outperforms GTDA in both efficiency and accuracy. In addition, we propose constrained multilinear discriminant analysis (CMDA), which learns the optimal tensor subspace by iteratively maximizing the scatter ratio criterion. We prove both theoretically and experimentally that the value of the scatter ratio criterion in CMDA approaches its extreme value, if it exists, with bounded error, leading to superior and more stable performance than DATER.
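To make the mode-wise alternating idea behind DATER and CMDA concrete, the following is a minimal sketch for second-order (matrix) samples: each mode's projection is updated in turn by solving a generalized eigenproblem on that mode's within- and between-class scatters. The shapes, the fixed iteration count, the ridge term, and the plain scatter-ratio update are illustrative assumptions; in particular, CMDA additionally constrains the projection matrices, which this sketch omits.

```python
import numpy as np
from scipy.linalg import eigh

def mda_2d(X, y, p1, p2, n_iter=10):
    """X: (n, d1, d2) stack of matrix samples; y: (n,) class labels."""
    n, d1, d2 = X.shape
    U1 = np.eye(d1)[:, :p1]
    U2 = np.eye(d2)[:, :p2]
    classes = np.unique(y)
    for _ in range(n_iter):
        for mode in (1, 2):
            # Project along the *other* mode, then build this mode's scatters.
            if mode == 1:
                Z = np.einsum('nij,jk->nik', X, U2)   # (n, d1, p2)
            else:
                Z = np.einsum('nij,ik->njk', X, U1)   # (n, d2, p1)
            mean_all = Z.mean(axis=0)
            d = Z.shape[1]
            Sw = np.zeros((d, d))
            Sb = np.zeros((d, d))
            for c in classes:
                Zc = Z[y == c]
                mc = Zc.mean(axis=0)
                D = Zc - mc
                Sw += np.einsum('nij,nkj->ik', D, D)  # within-class scatter
                diff = mc - mean_all
                Sb += len(Zc) * diff @ diff.T         # between-class scatter
            # Scatter-ratio update: generalized eigenproblem, with a small
            # ridge to keep Sw positive definite.
            p = p1 if mode == 1 else p2
            vals, V = eigh(Sb, Sw + 1e-6 * np.eye(d))
            U = V[:, np.argsort(vals)[::-1][:p]]
            if mode == 1:
                U1 = U
            else:
                U2 = U
    return U1, U2
```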
•The neighborhood linear discriminant analysis (nLDA) is proposed to address multimodality in LDA.
•In nLDA, the scatters are defined on a neighborhood consisting of reverse nearest neighbors.
•The within- and between-neighborhood scatters avoid estimating the subclasses in a multimodal class.
•nLDA performs significantly better than existing discriminators such as LDA, LFDA, ccLDA, LM-NNDA, and L2,1-RLDA.
Linear discriminant analysis (LDA) assumes that all samples from the same class are independently and identically distributed (i.i.d.). LDA may fail when this assumption does not hold; in particular, when a class contains several clusters (or subclasses), LDA cannot correctly capture the internal structure, because the scatter matrices it relies on are defined at the class level. To mitigate this problem, this paper proposes a neighborhood linear discriminant analysis (nLDA) in which the scatter matrices are defined on a neighborhood consisting of reverse nearest neighbors, so the new discriminator does not need the i.i.d. assumption. In addition, the neighborhood can be naturally regarded as the smallest subclass, and it is easier to obtain than a subclass because no clustering algorithm is required. The projected directions are sought so that the within-neighborhood scatter is as small as possible and, simultaneously, the between-neighborhood scatter is as large as possible. The experimental results show that nLDA performs significantly better than previous discriminators, such as LDA, LFDA, ccLDA, LM-NNDA, and L2,1-RLDA.
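As a rough illustration of the construction nLDA builds on, the sketch below forms each sample's neighborhood from its reverse nearest neighbors (the samples that count it among their own k nearest neighbors), accumulates within- and between-neighborhood scatters, and solves the usual generalized eigenproblem. The choice of k, the same-class restriction, and the ridge term are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def nlda(X, y, k=5, dim=2):
    """X: (n, d) samples; y: (n,) labels; returns a (d, dim) projection."""
    n, d = X.shape
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D2, np.inf)
    knn = np.argsort(D2, axis=1)[:, :k]          # k nearest neighbors of each sample
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    mean_all = X.mean(axis=0)
    for i in range(n):
        # Reverse nearest neighbors of i, restricted to i's class (assumption).
        rnn = [j for j in range(n) if i in knn[j] and y[j] == y[i]]
        hood = X[[i] + rnn]                      # the neighborhood of sample i
        m = hood.mean(axis=0)
        Sw += (hood - m).T @ (hood - m)          # within-neighborhood scatter
        dv = (m - mean_all)[:, None]
        Sb += len(hood) * (dv @ dv.T)            # between-neighborhood scatter
    vals, V = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return V[:, np.argsort(vals)[::-1][:dim]]
```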
Dimensionality reduction is a critical technology in the domain of pattern recognition, and linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction methods. However, when the distance criterion of its objective function uses the L2-norm, it is sensitive to outliers. In this paper, we propose a new formulation of linear discriminant analysis via joint L2,1-norm minimization on the objective function to induce robustness, efficiently alleviating the influence of outliers. An efficient iterative algorithm is proposed to solve the optimization problem and is proved to be convergent. Extensive experiments on an artificial data set, on UCI data sets, and on four face data sets demonstrate the efficiency of our approach compared with other methods and its robustness to outliers.
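The robustness mechanism can be illustrated with a small iteratively reweighted sketch: replacing squared within-class residuals with their L2 norms is equivalent, at each step, to downweighting samples in inverse proportion to their current projected residual norm, so outliers contribute less to the scatter. The reweighting loop below is a generic way to handle such L2,1 objectives and is an assumption here, not the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def l21_lda(X, y, dim=2, n_iter=20, eps=1e-8):
    """X: (n, d) samples; y: (n,) labels; returns a (d, dim) projection."""
    n, d = X.shape
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    R = X - np.stack([means[c] for c in y])      # within-class residuals
    mean_all = X.mean(axis=0)
    Sb = np.zeros((d, d))
    for c in classes:
        diff = means[c] - mean_all
        Sb += (y == c).sum() * np.outer(diff, diff)
    W = np.eye(d)[:, :dim]
    for _ in range(n_iter):
        # L2,1 reweighting: sample i gets weight 1 / (2 ||W^T r_i||), so
        # large projected residuals (outliers) count less in the scatter.
        norms = np.linalg.norm(R @ W, axis=1)
        a = 1.0 / (2.0 * np.maximum(norms, eps))
        Sw = (R * a[:, None]).T @ R              # weighted within-class scatter
        vals, V = eigh(Sb, Sw + eps * np.eye(d))
        W = V[:, np.argsort(vals)[::-1][:dim]]
    return W
```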
Recent works have proposed two L1-norm distance measure-based linear discriminant analysis (LDA) methods, L1-LDA and LDA-L1, which aim to promote the robustness of conventional LDA against outliers. In LDA-L1, a gradient-ascent iterative algorithm is applied, which, however, suffers from the choice of step size. In L1-LDA, an alternating optimization strategy is proposed to overcome this problem. In this paper, however, we show that due to the use of this strategy, L1-LDA is accompanied by some serious problems that hinder the derivation of the optimal discriminant directions for the data. We then propose an effective iterative framework to solve a general L1-norm minimization-maximization (minmax) problem. Based on this framework, we further develop an effective L1-norm distance-based LDA (called L1-ELDA) method. Theoretical insights into the convergence and effectiveness of our algorithm are provided and further verified by extensive experimental results on image databases.
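For orientation, the generic L1 ratio objective behind these methods, maximizing the sum of |w^T a_i| over the sum of |w^T b_j| for unit vectors w (with a_i between-class and b_j within-class difference vectors), can be attacked by projected subgradient ascent, as in the sketch below. This is essentially the step-size-dependent scheme the paper criticizes in LDA-L1; it is included only to make the objective concrete, and L1-ELDA's iterative framework is designed to replace it.

```python
import numpy as np

def l1_ratio_direction(A, B, n_iter=300, lr=0.05):
    """Maximize sum_i |w^T a_i| / sum_j |w^T b_j| over unit vectors w.

    A: (na, d) numerator vectors; B: (nb, d) denominator vectors.
    """
    rng = np.random.default_rng(0)
    w = rng.standard_normal(A.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        num = np.abs(A @ w).sum()
        den = np.abs(B @ w).sum()
        # Subgradient of the ratio: sign vectors replace the squared
        # residuals that appear in ordinary L2 LDA.
        g = (np.sign(A @ w) @ A) / den - (num / den**2) * (np.sign(B @ w) @ B)
        w = w + lr * g
        w /= np.linalg.norm(w)                   # project back to the unit sphere
    return w
```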
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Unlike conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of L1-2DLDA over L2-2DLDA.
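The two-dimensional case can be reduced to the same generic L1 ratio: each row of every centered matrix sample supplies one denominator vector, and each class-size-weighted row of every class-mean deviation supplies one numerator vector, after which a one-dimensional solver such as l1_ratio_direction from the previous sketch applies. The weighting and the single right-projection direction are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def l1_2dlda_direction(X, y):
    """X: (n, r, c) matrix samples; returns one right-projection vector (c,)."""
    M = X.mean(axis=0)
    A, B = [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        Mc = Xc.mean(axis=0)
        # Scaling a row by n_c multiplies its |w^T a| by n_c, i.e. it acts
        # like repeating the row n_c times (class-size weighting, an assumption).
        A.append(len(Xc) * (Mc - M))
        B.append((Xc - Mc).reshape(-1, X.shape[2]))
    return l1_ratio_direction(np.vstack(A), np.vstack(B))
```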
Linear discriminant analysis-probabilistic linear discriminant analysis (LDA-PLDA) is a standard and effective backend in the field of speaker verification. The objective of LDA is to perform dimensionality reduction while minimizing the within-class covariance and maximizing the between-class covariance. For a target class (or speaker), the task is to make a binary decision about whether a test utterance comes from that specific target speaker; in general, the nontarget test utterances that are close to the target speaker are the ones most easily misjudged. Inspired by this observation, we propose a local pairwise linear discriminant analysis (LPLDA) algorithm. This new method focuses on maximizing the local pairwise covariance, which represents the local structure between target-class samples and neighboring nontarget-class samples, instead of the between-class covariance, which represents the global structure of the data. Experiments on the NIST SRE 2010, 2014, and 2016 databases show that the proposed LPLDA-PLDA backend achieves significant performance improvements over the LDA-PLDA backend.
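A hedged sketch of the local pairwise idea: instead of the global between-class scatter, accumulate scatter between each class mean and its k nearest non-class samples, so the discriminant directions concentrate on the confusable neighbors the abstract singles out. The value of k and the mean-versus-sample pairing are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np
from scipy.linalg import eigh

def lplda(X, y, dim=2, k=10):
    """X: (n, d) samples; y: (n,) labels; returns a (d, dim) projection."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sp = np.zeros((d, d))
    for c in np.unique(y):
        Xc, Xo = X[y == c], X[y != c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # within-class scatter
        # The k non-target samples closest to this class mean: the easily
        # misjudged ones the method is designed to push away.
        idx = np.argsort(((Xo - mc) ** 2).sum(axis=1))[:k]
        Dn = Xo[idx] - mc
        Sp += Dn.T @ Dn                          # local pairwise scatter
    vals, V = eigh(Sp, Sw + 1e-6 * np.eye(d))
    return V[:, np.argsort(vals)[::-1][:dim]]
```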
Robust Sparse Linear Discriminant Analysis. Wen, Jie; Fang, Xiaozhao; Cui, Jinrong; et al. IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 2, February 2019. Peer-reviewed journal article.
Linear discriminant analysis (LDA) is a very popular supervised feature extraction method and has been extended to many variants. However, classical LDA has the following problems: 1) the obtained discriminant projection does not have good interpretability for features; 2) LDA is sensitive to noise; and 3) LDA is sensitive to the selected number of projection directions. In this paper, a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA) is proposed to solve these problems. Specifically, RSLDA adaptively selects the most discriminative features for discriminant analysis by introducing the L2,1-norm. An orthogonal matrix and a sparse matrix are also introduced simultaneously to guarantee that the extracted features hold the main energy of the original data and to enhance robustness to noise, so RSLDA has the potential to perform better than other discriminant methods. Extensive experiments on six databases demonstrate that the proposed method achieves competitive performance compared with other state-of-the-art feature extraction methods and is robust to noisy data.
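The feature-selection effect of the L2,1-norm is easy to see in isolation: because the penalty is the sum of row norms of the projection matrix, its proximal operator shrinks rows as units and zeroes whole rows, removing the corresponding input features from every projected dimension. The toy below demonstrates just that operator, not RSLDA's full alternating solver.

```python
import numpy as np

def prox_l21(Q, t):
    """Row-wise soft-thresholding: the proximal operator of t * ||Q||_{2,1}."""
    norms = np.linalg.norm(Q, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return Q * scale                              # rows with norm <= t vanish

Q = np.array([[3.0, 4.0],    # norm 5.0 -> shrunk to [2.4, 3.2]
              [0.3, 0.4],    # norm 0.5 -> zeroed: feature dropped entirely
              [2.0, 0.0]])   # norm 2.0 -> shrunk to [1.0, 0.0]
print(prox_l21(Q, 1.0))
```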
In many real-world applications, an object can be described from multiple views or styles, leading to the emerging field of multi-view analysis. To eliminate the complicated (usually highly nonlinear) view discrepancy for favorable cross-view recognition and retrieval, we propose a Multi-view Linear Discriminant Analysis Network (MvLDAN) that seeks a nonlinear, discriminant, and view-invariant representation shared among multiple views. Unlike existing multi-view methods, which directly learn a common space to reduce the view gap, our MvLDAN employs multiple feedforward neural networks (one per view) and a novel eigenvalue-based multi-view objective function to encapsulate as much discriminative variance as possible into all the available common feature dimensions. With the proposed objective function, MvLDAN produces representations possessing: 1) low variance within the same class regardless of view discrepancy; 2) high variance between different classes regardless of view discrepancy; and 3) high covariance between any two views. In brief, in the learned multi-view space, the obtained deep features can be projected into a latent common space in which samples from the same class are as close to each other as possible (even if they come from different views), and samples from different classes are as far from each other as possible (even if they come from the same view). The effectiveness of the proposed method is verified by extensive experiments on five databases, in comparison with 19 state-of-the-art approaches.
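A linear, equal-dimension, two-view special case conveys the flavor of properties 1) and 2): pooling both views of each sample under its shared label turns the view gap into within-class variation, which the ordinary scatter-ratio criterion then suppresses. The pooling construction and the shared projection are illustrative assumptions; MvLDAN itself trains per-view deep networks against an eigenvalue-based objective that also rewards the cross-view covariance of property 3).

```python
import numpy as np
from scipy.linalg import eigh

def two_view_lda(X1, X2, y, dim=2):
    """X1, X2: (n, d) paired views of the same samples; y: (n,) labels."""
    X = np.vstack([X1, X2])                      # pool the views: the view gap
    yy = np.concatenate([y, y])                  # becomes within-class variation
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(yy):
        Xc = X[yy == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    vals, V = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return V[:, np.argsort(vals)[::-1][:dim]]
```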
•Investigation of the robustness of generalized quadratic discriminant analysis (GQDA) in the presence of noise in data.
•GQDA is a novel approach integrating linear and quadratic discriminant analyses, but is extremely sensitive under mild contamination.
•Development of robust versions of the GQDA classifier using robust estimators of the mean vector and the dispersion matrix.
•Detailed empirical comparison of the robust GQDA proposals with 6 robust estimators, 3 classes of model distribution, and 4 real data examples.
Quadratic discriminant analysis (QDA) is a widely used statistical tool to classify observations from different multivariate normal populations. The generalized quadratic discriminant analysis (GQDA) classification rule, which generalizes the QDA and the minimum Mahalanobis distance (MMD) classifiers to discriminate between populations with underlying elliptically symmetric distributions, competes quite favorably with the QDA classifier when the latter is optimal, and performs much better when QDA fails under non-normal underlying distributions with heavy tails, e.g., the Cauchy distribution. However, the classification rule in GQDA is still based on the sample mean vector and the sample dispersion matrix of a training set, which are extremely non-robust under data contamination. In the real world, it is quite common to face data that are highly vulnerable to outliers, and the lack of robustness of the classical estimators of the mean vector and the dispersion matrix reduces the efficiency of the GQDA classifier significantly, increasing misclassification errors. The present paper investigates the performance of the GQDA classifier when the classical estimators of the mean vector and the dispersion matrix used therein are replaced by various robust counterparts. Applications to several real data sets as well as simulation studies reveal far better performance of the proposed robust versions of the GQDA classifier. A comparative study has been made to advocate the appropriate choice of robust estimators to be used in a specific situation.
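The plug-in principle the paper investigates can be sketched directly: keep the quadratic discriminant rule but estimate each class's location and scatter robustly, for instance with the Minimum Covariance Determinant (MCD) estimator, one of the robust estimators such studies typically compare. The scoring below is plain QDA with robust plug-ins, not the full GQDA rule with its tuning constant.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def fit_robust_qda(X, y):
    """Per-class robust location/scatter via MCD, plus log-priors."""
    params = {}
    for c in np.unique(y):
        mcd = MinCovDet(random_state=0).fit(X[y == c])
        cov = mcd.covariance_
        params[c] = (mcd.location_, np.linalg.inv(cov),
                     np.linalg.slogdet(cov)[1], np.log(np.mean(y == c)))
    return params

def predict_robust_qda(params, X):
    scores = []
    for mu, prec, logdet, logprior in params.values():
        D = X - mu
        # Quadratic discriminant score with the robust plug-ins.
        scores.append(-0.5 * (np.einsum('ni,ij,nj->n', D, prec, D) + logdet)
                      + logprior)
    keys = list(params)
    return np.array([keys[i] for i in np.argmax(np.array(scores), axis=0)])
```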
It has always been a challenging task to develop a fast and efficient incremental linear discriminant analysis (ILDA) algorithm. For this purpose, we conduct a new study of linear discriminant analysis (LDA) in this paper and develop a new ILDA algorithm. We propose a new batch LDA algorithm called LDA/QR, a simple and fast method obtained by computing the economic QR factorization of the data matrix followed by solving a lower triangular linear system. The relationship between LDA/QR and uncorrelated LDA (ULDA) is also revealed. Based on LDA/QR, we develop a new incremental LDA algorithm called ILDA/QR. The main features of ILDA/QR are that: 1) it can easily handle updates from one new sample or a chunk of new samples; 2) it has efficient computational and space complexity; and 3) it is very fast and always achieves competitive classification accuracy compared with the ULDA algorithm and existing ILDA algorithms. Numerical experiments on real-world data sets demonstrate that ILDA/QR is competitive with state-of-the-art ILDA algorithms in terms of classification accuracy, computational complexity, and space complexity.
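One way to realize the QR idea is shown below: for high-dimensional data the discriminant directions lie in the span of the samples, so an economic QR factorization of the transposed, centered data matrix reduces LDA to a small problem in the coordinates given by the triangular factor. This generic span reduction is a hedged reconstruction; the paper's LDA/QR additionally solves a lower triangular linear system and establishes the equivalence with ULDA.

```python
import numpy as np
from scipy.linalg import qr, eigh

def lda_qr(X, y, dim):
    """X: (n, d) samples with d >> n; y: (n,) labels; returns W: (d, dim)."""
    Xc = X - X.mean(axis=0)
    Q, R = qr(Xc.T, mode='economic')             # Xc.T = Q R, Q: (d, n)
    Z = R.T                                      # sample coordinates in the Q basis
    # Classical LDA on the small coordinates (at most n-dimensional).
    r = Z.shape[1]
    m = Z.mean(axis=0)
    Sw = np.zeros((r, r))
    Sb = np.zeros((r, r))
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc - m, mc - m)
    vals, V = eigh(Sb, Sw + 1e-6 * np.eye(r))
    Wsmall = V[:, np.argsort(vals)[::-1][:dim]]
    return Q @ Wsmall                            # map back to the input space
```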