Contingency analysis (CA) is one of the essential tools for the optimal design and security assessment of a reliable power system. However, its computational requirements rise with the growth of distributed generation in the interconnected power system. As CA is a complex and computationally intensive problem, it requires fast and accurate calculation to ensure secure operation. Therefore, efficient mathematical modelling and parallel programming are key to efficient static security analysis. This paper proposes a parallel algorithm for static CA that uses both central processing units (CPUs) and graphical processing units (GPUs). To enhance accuracy, AC load flow is used, and load flows are computed simultaneously in parallel, with efficient screening and ranking of the critical contingencies. We perform extensive experiments to evaluate the efficacy of the proposed algorithm and establish that the proposed parallel algorithm with high-performance computing (HPC) is much faster than traditional algorithms. Furthermore, the HPC experiments were conducted on the national supercomputing facility, demonstrating the proposed algorithm on N−1 and N−2 static CA for large power systems, such as the Indian northern regional power grid (NRPG) 246-bus and the Polish 2383-bus networks.
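The screening-and-ranking step described above can be sketched as an embarrassingly parallel map over a contingency list. The toy below is illustrative only: `screen` is a crude placeholder for a full AC load flow (it merely spreads the outaged flow over the surviving lines), and the four-line system is invented; a real implementation would dispatch Newton-Raphson load flows to CPU/GPU workers.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

# Hypothetical 4-line system: line id -> (pre-outage flow in MW, limit in MW).
LINES = {"L1": (80.0, 100.0), "L2": (60.0, 100.0),
         "L3": (40.0, 100.0), "L4": (30.0, 100.0)}

def screen(outage):
    """Crude stand-in for a full AC load flow: spread the outaged flow
    uniformly over the surviving lines, then score the overload severity
    with the performance index PI = sum((flow / limit)**2)."""
    survivors = {k: v for k, v in LINES.items() if k not in outage}
    shift = sum(LINES[k][0] for k in outage) / len(survivors)
    return outage, sum(((f + shift) / lim) ** 2 for f, lim in survivors.values())

# N-1 and N-2 contingency lists, screened in parallel and ranked by PI.
cases = [(l,) for l in LINES] + list(combinations(LINES, 2))
with ThreadPoolExecutor() as pool:
    ranked = sorted(pool.map(screen, cases), key=lambda r: -r[1])

for outage, pi in ranked[:3]:       # most critical contingencies first
    print(outage, round(pi, 3))
```

The same map-then-rank structure carries over when `screen` is replaced by a real AC load flow and the pool by GPU-resident solvers.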
•We address the problem of simultaneous computation of skyline probabilities of multiple objects.
•Our method is based on a novel concept of zero-contributing set and multi-level prefix-based absorption.
•We propose a constraint-based k-level absorption to identify zero-contributing sets related to m objects at a time.
•One of the major design issues is determining the number m of reference objects. We also analyse the choice of m.
•Detailed experimental analysis on real and synthetic datasets is reported to corroborate the efficiency of our algorithm.
The problem of recommending objects based on attributes is a novel recommendation problem. When the preferences for attributes are uncertain and are expressed in terms of probabilities, the recommendation problem boils down to computing the skyline probabilities of all objects in the database. Though there exist efficient algorithms to compute the skyline probability of a single object when pair-wise preference probabilities are given, the problem of computing skyline probabilities of all objects in the database has not yet been solved. In this paper, we establish the concept of preference probability over uncertain preferences in the context of a recommender system. We propose an efficient approach to the problem of simultaneous computation of skyline probabilities of multiple objects. Our method is based on a novel concept of zero-contributing set and multi-level prefix-based absorption. The idea is to carry out the absorption with multiple reference objects. One of the major design issues is determining the number m of reference objects. We also analyse the choice of m. We report extensive experimental analysis to justify the efficiency of our algorithm.
Collaborative filtering (CF) has become a popular method for developing recommender systems (RSs), where the ratings of a user for new items are predicted based on her past preferences and the available preference information of other users. Despite the popularity of CF-based methods, their performance is often greatly limited by the sparsity of observed entries. In this study, we explore the data augmentation and refinement aspects of Maximum Margin Matrix Factorization (MMMF), a widely accepted CF technique for rating prediction, which have not been investigated before. We exploit the inherent characteristics of MMMF to assess the confidence level of the algorithm’s prediction of individual ratings and propose a semi-supervised approach for rating augmentation based on self-training. We hypothesize that any CF algorithm’s predictions with low confidence are due to some deficiency in the training data and, hence, the performance of the algorithm can be improved by adopting a systematic data augmentation strategy. We iteratively use some of the ratings predicted with high confidence to augment the training data and remove low-confidence entries through a refinement process. By repeating this process, the system learns to improve prediction accuracy. Our method is experimentally evaluated on several state-of-the-art CF algorithms and leads to informative rating augmentation, improving the performance of the baseline approaches.
•We explore the data augmentation and refinement aspects of Maximum Margin MF.
•We propose a self-training-based semi-supervised approach for rating augmentation.
•We also propose a strategy to remove the ratings predicted with low confidence.
•Extensive comparative studies validate the efficiency of our algorithm.
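The self-training loop behind these points can be sketched generically: train a predictor, promote high-confidence predictions into the training matrix, and iterate. In this sketch the predictor is a crude row/column-mean baseline standing in for MMMF, and "confidence" is approximated by closeness of a prediction to an integer rating level; both are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy 5-level rating matrix; 0 marks an unobserved entry.
R = np.array([[5, 0, 4, 0],
              [4, 2, 0, 1],
              [0, 1, 5, 4],
              [5, 0, 0, 2]], dtype=float)

def predict(R):
    """Stand-in for any CF predictor (the paper uses MMMF): fill each
    cell with the average of its row mean and column mean."""
    mask = R > 0
    row = np.where(mask.any(1), R.sum(1) / np.maximum(mask.sum(1), 1), 3.0)
    col = np.where(mask.any(0), R.sum(0) / np.maximum(mask.sum(0), 1), 3.0)
    return (row[:, None] + col[None, :]) / 2

def augment(R, rounds=3, tau=0.25):
    """Self-training: each round, unobserved entries whose prediction
    lies within tau of an integer level (a confidence proxy) are moved
    into the training matrix."""
    R = R.copy()
    for _ in range(rounds):
        P = predict(R)
        confident = np.abs(P - np.rint(P)) < tau
        new = (R == 0) & confident
        R[new] = np.rint(P[new])
    return R

R_aug = augment(R)
print(int((R_aug > 0).sum() - (R > 0).sum()), "entries added")
```

A refinement pass (dropping low-confidence training entries, as the highlights describe) would use the same confidence proxy with the opposite test.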
Recommender systems aim to enhance the overall user experience by providing tailored recommendations for a variety of products and services. These systems help users make more informed decisions, leading to greater user engagement with the platform. However, the implementation of these systems largely depends on the context, which can vary from recommending an item or a package to a user or a group. This requires careful exploration of several models during deployment, as there is no comprehensive and unified approach that deals with recommendations at different levels. Furthermore, these individual models must be closely attuned to the context to prevent significant variation in their generated recommendations. In this paper, we propose a novel unified recommendation framework that addresses all four recommendation tasks, namely personalized, group, package, and package-to-group recommendation, filling the gap in the current research landscape. The proposed framework can be integrated with most traditional matrix factorization-based collaborative filtering (CF) models. This research underscores the significance of including group and package information while learning latent representations of users and items for personalized recommendations. These components help in exploiting a rich latent representation of the user/item by enforcing them to align closely with their corresponding group/package representation. We consider two prominent CF techniques, namely Regularized Matrix Factorization and Maximum Margin Matrix Factorization, as the baseline models and demonstrate their customization to various recommendation tasks. Experimental results on two publicly available datasets are reported, comparing the proposed models to other baseline approaches for various recommendation tasks.
•We propose a unified framework for various recommendation tasks.
•The framework can be integrated with any traditional matrix factorization model.
•Latent representations of users, items, groups, and packages are learnt simultaneously.
•Extensive comparative studies validate the efficiency of our algorithm.
•In this paper, we study the embedding of labels together with the group information with the objective of building an efficient multi-label classifier.
•We assume the existence of a low-dimensional space onto which the feature vectors and label vectors can be embedded.
•We ensure that labels belonging to the same group share the same sparsity pattern in their low-rank representations.
•The proposed method has three major stages, namely (1) identification of groups of labels; (2) sparsity-invariant embedding of label groups; and (3) embedding of the feature matrix to the same low-rank space.
•Extensive comparative studies validate the effectiveness of the proposed method against state-of-the-art multi-label learning approaches.
Multi-label learning is concerned with the classification of data with multiple class labels. This is in contrast to the traditional classification problem where every data instance has a single label. Due to the exponential size of the output space, exploiting intrinsic information in feature and label spaces has been the major thrust of research in recent years, and the use of parametrization and embedding has been the prime focus. Researchers have studied several aspects of embedding, including label embedding, input embedding, dimensionality reduction and feature selection. These approaches differ from one another in their capability to capture other intrinsic properties such as label correlation and local invariance. We assume here that the input data form groups; as a result, the label matrix exhibits a sparsity pattern, and the labels corresponding to objects in the same group have similar sparsity. In this paper, we study the embedding of labels together with the group information with the objective of building an efficient multi-label classifier. We assume the existence of a low-dimensional space onto which the feature vectors and label vectors can be embedded. To achieve this, we address three sub-problems, namely: (1) identification of groups of labels; (2) embedding of label vectors to a low-rank space so that the sparsity characteristic of individual groups remains invariant; and (3) determining a linear mapping that embeds the feature vectors onto the same set of points, as in stage 2, in the low-dimensional space. We compare our method with seven well-known algorithms on twelve benchmark datasets. Our experimental analysis manifests the superiority of our proposed method over state-of-the-art algorithms for multi-label learning.
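Stages (2) and (3) of such label-embedding pipelines can be illustrated with a generic low-rank baseline: embed the label matrix by truncated SVD, then fit a linear map from the features onto the embedded points. This sketch omits the paper's grouping stage and its sparsity-invariance constraint; the synthetic data and the rank k = 3 are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic multi-label data: n samples, d features, L labels.
n, d, L, k = 60, 5, 6, 3
X = rng.normal(size=(n, d))
Y = (X @ rng.normal(size=(d, L)) > 0).astype(float)

# Stage 2 (generic stand-in for the sparsity-invariant step): embed the
# label matrix into a rank-k space via truncated SVD.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
Z = U[:, :k] * s[:k]                       # embedded label vectors

# Stage 3: linear map from feature space onto the same embedded points.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Decode: map back through the top-k right singular vectors, threshold.
Y_hat = ((X @ W) @ Vt[:k] > 0.5).astype(float)
print(f"label-wise training accuracy: {(Y_hat == Y).mean():.2f}")
```

The group structure the paper assumes would replace the plain SVD with an embedding constrained to preserve each group's sparsity pattern.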
Traditional recommendation algorithms can be used to develop techniques that help people choose desirable items of interest. However, in many real-world applications, it is important to quantify the (un)certainty of each recommendation, in addition to producing a set of recommendations. A conformal recommender system uses the experience of a user to output a set of recommendations, each associated with a precise confidence value. A significance level ε bounds the probability of making a wrong recommendation. The conformal framework rests on a key concept called the nonconformity measure, which quantifies the strangeness of an item relative to other items. One of the significant design challenges of any conformal recommendation framework is the integration of nonconformity measures with the recommendation algorithm. This paper introduces an inductive variant of a conformal recommender system. We propose and analyze different nonconformity measures in the inductive setting. In addition, we provide theoretical proofs of the error bound and time complexity. Extensive empirical analysis on seven benchmark datasets reveals that the inductive variant substantially improves computation time while preserving accuracy.
•We introduce the concept of inductive conformal recommendation.
•We propose and analyze different nonconformity measures in the inductive setting.
•We provide theoretical proofs on the error-bound and the time complexity.
•Extensive comparative studies validate the efficiency of our algorithm.
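The inductive conformal recipe behind these claims splits the data into a proper training set and a calibration set, scores calibration examples with a nonconformity measure such as α = |y − ŷ|, and turns the conformal (1 − ε)-quantile of those scores into a prediction region whose coverage is roughly 1 − ε. A minimal sketch on synthetic ratings, with a toy identity predictor standing in for the recommender (all data invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ratings; a toy identity predictor stands in for the recommender.
x = rng.uniform(1, 5, 200)
y = x + rng.normal(0, 0.3, 200)
predict = lambda v: v

# Inductive split: the first half is the calibration set, whose
# nonconformity scores are alpha = |y - yhat|.
x_cal, y_cal = x[:100], y[:100]
alpha = np.abs(y_cal - predict(x_cal))

eps = 0.1                                   # significance level
# Conformal (1 - eps)-quantile: the ceil((1 - eps)(n + 1))-th smallest score.
q = np.sort(alpha)[int(np.ceil((1 - eps) * (len(alpha) + 1))) - 1]

# Prediction region [yhat - q, yhat + q]; coverage should be ~ 1 - eps.
x_new, y_new = x[100:], y[100:]
covered = np.abs(y_new - predict(x_new)) <= q
print(f"empirical coverage: {covered.mean():.2f}")
```

The computational advantage of the inductive variant is visible here: the calibration scores are computed once, rather than re-running the learner per test example as the transductive version requires.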
•In MMMF, a rating matrix with multiple discrete values is treated by specially extending the hinge loss function to suit multiple levels.
•We view this process as analogous to extending a two-class classifier to a unified multi-class classifier.
•Alternatively, a multi-class classifier can be built by arranging multiple two-class classifiers in a hierarchical manner.
•In this paper, we investigate this aspect for collaborative filtering and propose an efficient and novel framework of multiple bi-level MMMFs.
•We compare our method with nine well-known algorithms on two benchmark datasets and show that our method outperforms them on the NMAE measure.
Maximum Margin Matrix Factorization (MMMF) has been a successful learning method in collaborative filtering research. For a partially observed ordinal rating matrix, the focus is on determining low-norm latent factor matrices U (of users) and V (of items) so as to simultaneously approximate the observed entries under some loss measure and predict the unobserved entries. When the rating matrix contains only two levels (±1), rows of V can be viewed as points in k-dimensional space and rows of U as decision hyperplanes in this space separating +1 entries from −1 entries. When hinge/smooth hinge loss is the loss function, the hyperplanes act as maximum-margin separators. In MMMF, a rating matrix with multiple discrete values is treated by specially extending the hinge loss function to suit multiple levels. We view this process as analogous to extending a two-class classifier to a unified multi-class classifier. Alternatively, a multi-class classifier can be built by arranging multiple two-class classifiers in a hierarchical manner. In this paper, we investigate this aspect for collaborative filtering and propose an efficient and novel framework of multiple bi-level MMMFs, which yields substantial savings in computational overhead. We compare our method with nine well-known algorithms on two benchmark datasets and show that our method outperforms them on the NMAE measure. We also show that our method yields latent factors of lower ranks and that the trade-off between empirical and generalization error is low.
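The hierarchical idea can be sketched concretely: for ordinal levels 1..ℓ, train ℓ − 1 independent bi-level factorizations, one per threshold t (entries recoded as +1 if r > t, else −1), and predict a cell's level by counting how many thresholds its score clears. The gradient-descent factorization below is a bare-bones stand-in for MMMF (plain hinge subgradient, tiny invented matrix), not the paper's optimizer:

```python
import numpy as np

rng = np.random.default_rng(2)

def bilevel_mmmf(Y, k=2, steps=200, lr=0.05, lam=0.01):
    """One bi-level factorization of a +-1 matrix (0 = unobserved),
    using plain hinge-loss subgradient steps on Z = U @ V.T."""
    n, m = Y.shape
    U, V = rng.normal(0, 0.1, (n, k)), rng.normal(0, 0.1, (m, k))
    obs = Y != 0
    for _ in range(steps):
        Z = U @ V.T
        G = np.where(obs & (Y * Z < 1), -Y, 0.0)     # active hinge terms
        U, V = U - lr * (G @ V + lam * U), V - lr * (G.T @ U + lam * V)
    return U @ V.T

# Ordinal ratings 1..3 (invented); hold out one entry for prediction.
R = np.array([[3, 1, 3, 1],
              [3, 1, 3, 1],
              [1, 3, 1, 3]], dtype=float)
R_train = R.copy()
R_train[0, 1] = 0

# One bi-level MMMF per threshold t: +1 where r > t, -1 where r <= t.
scores = [bilevel_mmmf(np.where(R_train == 0, 0.0,
                                np.where(R_train > t, 1.0, -1.0)))
          for t in (1, 2)]
# A cell's predicted level = 1 + number of thresholds its score clears.
pred = 1 + sum((Z > 0).astype(int) for Z in scores)
print("predicted level for held-out cell:", pred[0, 1])
```

Because the per-threshold problems are independent, they can also be trained in parallel, which is one source of the computational saving the abstract mentions.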
•Multi-label learning deals with the classification of data with multiple labels.
•An output space with many labels is tackled by modeling inter-label correlations.
•Use of parametrization and embedding have been the prime focus.
•A piecewise-linear embedding using maximum margin matrix factorization is proposed.
•Our experimental analysis manifests the superiority of our proposed method.
Multi-label learning is concerned with the classification of data with multiple class labels. This is in contrast to the traditional classification problem where every data instance has a single label. Multi-label classification (MLC) is a major research area in the machine learning community and finds application in several domains such as computer vision, data mining and text classification. Due to the exponential size of the output space, exploiting intrinsic information in feature and label spaces has been the major thrust of research in recent years, and the use of parametrization and embedding has been the prime focus in MLC. Most of the existing methods learn a single linear parametrization using the entire training set and hence fail to capture nonlinear intrinsic information in feature and label spaces. To overcome this, we propose a piecewise-linear embedding which uses maximum margin matrix factorization to model linear parametrization. We hypothesize that feature vectors which conform to similar embeddings are similar in some sense. Combining the above concepts, we propose a novel hierarchical matrix factorization method for multi-label classification. Practical multi-label classification problems such as image annotation, text categorization and sentiment analysis can be directly solved by the proposed method. We compare our method with six well-known algorithms on twelve benchmark datasets. Our experimental analysis manifests the superiority of our proposed method over state-of-the-art algorithms for multi-label learning.
Collaborative Filtering (CF) has emerged as one of the most prominent implementation strategies for building recommender systems. The key idea is to exploit the usage patterns of individuals to generate personalized recommendations. CF techniques, especially for newly launched platforms, often face a critical issue known as the data sparsity problem, which greatly limits their performance. Several approaches in the literature have been proposed to tackle the problem of data sparsity, among which cross-domain collaborative filtering (CDCF) has gained significant attention in the recent past. In order to compensate for the scarcity of available feedback in a target domain, the CDCF approach utilizes information available in other auxiliary domains. Traditional CDCF approaches primarily focus on finding a common set of entities (users or items) across the domains, which then act as a conduit for knowledge transfer. Nevertheless, most real-world datasets are collected from different domains, so they often lack anchor points or reference information for entity alignment. This paper introduces a domain adaptation technique to align the embeddings of entities across the two domains. Our approach first exploits the available textual and visual information to independently learn a multi-view latent representation for each entity in the auxiliary and target domains. The different representations of the entity are then fused to generate the corresponding unified representation. A domain classifier is then trained to learn the embedding for the domain alignment by fixing the unified features as the anchor points. Experiments on two public benchmark datasets indicate the effectiveness of our proposed approach.
•We propose an alternative and new MMMF scheme for discrete-valued rating matrices.
•Our work draws motivation from the recent advent of proximal support vector machines.
•The proposed method overcomes the problem of overfitting.
•We validate our hypothesis by conducting experiments on real and synthetic datasets.
Maximum Margin Matrix Factorization (MMMF) has been a successful learning method in collaborative filtering research. For a partially observed ordinal rating matrix, the focus is on determining low-norm latent factor matrices U (of users) and V (of items) so as to simultaneously approximate the observed entries under some loss measure and predict the unobserved entries. When the rating matrix contains only two levels (±1), rows of V can be viewed as points in k-dimensional space and rows of U as decision hyperplanes in this space separating +1 entries from −1 entries. The concept of optimizing a loss function to determine the separating hyperplane is prevalent in support vector machine (SVM) research, and when hinge/smooth hinge loss is used, the hyperplanes act as maximum-margin separators. In MMMF, a rating matrix with multiple discrete values is treated by specially extending the hinge loss function to suit multiple levels. MMMF is an efficient technique for collaborative filtering, but it has several shortcomings. A prominent one is overfitting: if learning is prolonged to decrease the training error, the generalization error grows. In this paper, we propose an alternative and new maximum margin factorization scheme for discrete-valued rating matrices to overcome the problem of overfitting. Our work draws motivation from recent work on proximal support vector machines (PSVMs), wherein two parallel hyperplanes are used for binary classification and points are classified by assigning them to the class corresponding to the closer of the two parallel hyperplanes. In other words, proximity to a decision hyperplane is used as the classifying criterion. We show that a similar concept can be used to factorize the rating matrix if the loss function is suitably defined. The present scheme of matrix factorization has advantages over MMMF, similar to the advantages of PSVM over standard SVM.
We validate our hypothesis by carrying out experiments on real and synthetic datasets.
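The PSVM intuition the abstract borrows can be seen in its original classification setting: instead of a max-margin separator, points are fitted around two parallel proximal planes w·x − γ = ±1 and classified by sign(w·x − γ), with the whole problem solved as one regularized least-squares system in closed form rather than a QP. A sketch on synthetic 2-D data, following Fung and Mangasarian's formulation (the data are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian clouds in 2-D, labeled +1 and -1.
A = np.vstack([rng.normal(+2.0, 1.0, (50, 2)),
               rng.normal(-2.0, 1.0, (50, 2))])
d = np.r_[np.ones(50), -np.ones(50)]

# Proximal SVM: minimize nu/2 * ||D(Aw - e*g) - e||^2 + 1/2 * (w'w + g^2),
# whose normal equations give one linear solve for (w, g).
nu = 1.0
E = np.c_[A, -np.ones(len(A))]             # bias column appended
wg = np.linalg.solve(np.eye(E.shape[1]) / nu + E.T @ E, E.T @ d)
w, g = wg[:-1], wg[-1]

# Classify by the closer proximal plane, i.e. sign(w.x - g).
pred = np.sign(A @ w - g)
print(f"training accuracy: {(pred == d).mean():.2f}")
```

Carried over to factorization, the analogous move replaces the hinge-margin objective on U Vᵀ with a proximity-based loss, which is what gives the proposed scheme its PSVM-like speed and resistance to prolonged-training overfitting.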