In this paper, we propose a physics-informed neural network approach for designing electromagnetic metamaterials. The approach can be applied to various practical problems such as cloaking, rotators, and concentrators. Its main advantage is flexibility: it can handle not only continuous parameters but also piecewise-constant ones. To the best of our knowledge, no faster or more efficient method exists for these problems. As a byproduct, we propose a method for solving the high-frequency Helmholtz equation, which is widely used in physics and engineering. Several benchmark problems are solved in numerical tests to verify our method.
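For reference (standard notation, not specific to this paper), the Helmholtz equation with wavenumber k reads as follows; the high-frequency regime corresponds to large k, where the solution oscillates rapidly and conventional solvers require very fine meshes:

```latex
\Delta u + k^2 u = 0
```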
Partial differential equations (PDEs) on surfaces are ubiquitous throughout the natural sciences. Many traditional mathematical methods have been developed to solve surface PDEs. However, almost all of these methods have notable drawbacks and become complicated for general problems. Motivated by the rapid growth of machine learning, we present an algorithm that uses physics-informed neural networks (PINNs) to solve surface PDEs. To handle the surface, our algorithm needs only a set of points and their corresponding normals, whereas traditional methods require a partition or a grid on the surface. This is a significant advantage in practical computation. A variety of numerical experiments verify our algorithm.
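A minimal sketch of why points and normals suffice (an illustration under standard differential-geometry facts, not the paper's actual loss): the surface gradient at a sample point can be obtained from the ambient gradient by projecting out the normal component, ∇_S u = ∇u − (n·∇u)n, which a PINN residual can then penalize pointwise.

```python
def surface_gradient(ambient_grad, normal):
    """Project an ambient 3D gradient onto the tangent plane of a surface.

    ambient_grad, normal: 3-component lists; normal is assumed unit-length.
    Returns grad_S u = grad u - (n . grad u) n, the tangential gradient
    that a surface-PDE loss can evaluate at sampled points and normals.
    """
    dot = sum(g * n for g, n in zip(ambient_grad, normal))
    return [g - dot * n for g, n in zip(ambient_grad, normal)]

# Example: on the unit sphere at the north pole the normal is (0, 0, 1),
# so the z-component of any ambient gradient is removed.
print(surface_gradient([1.0, 2.0, 3.0], [0.0, 0.0, 1.0]))  # [1.0, 2.0, 0.0]
```

No mesh or parametrization appears anywhere; the point cloud and its normals are the only geometric input, matching the abstract's claim.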
Link prediction is the task of evaluating the probability that an edge exists in a network, and it has useful applications in many domains. Traditional approaches rely on measuring the similarity between two nodes in a static context. Recent research has focused on extending link prediction to a dynamic setting, predicting the creation and destruction of links in networks that evolve over time. Though this is a difficult task, deep learning techniques have been shown to notably improve prediction accuracy. To this end, we propose the novel application of weak estimators, in addition to traditional similarity metrics, to inexpensively build an effective feature vector for a deep neural network. Weak estimators have been used in a variety of machine learning algorithms to improve model accuracy, owing to their capacity to estimate changing probabilities in dynamic systems. Experiments indicate that our approach increases prediction accuracy on several real-world dynamic networks.
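As a hedged sketch of the two feature families named above: a stochastic learning weak estimator tracks a drifting probability with the update p ← λp + (1−λ)x, and a classic static similarity metric such as common neighbors scores a candidate pair. The decay parameter `lam`, the toy edge stream, and the two-element feature vector are all illustrative assumptions, not the paper's configuration.

```python
def slwe_update(p_hat, x, lam=0.9):
    # Stochastic learning weak estimator: lightweight update that tracks a
    # drifting Bernoulli probability in a dynamic (non-stationary) system.
    return lam * p_hat + (1.0 - lam) * x

def common_neighbors(adj, u, v):
    # Classic static similarity feature for a candidate edge (u, v).
    return len(adj.get(u, set()) & adj.get(v, set()))

# Toy stream of presence/absence observations for one node pair over time:
p = 0.5
for observed in [1, 1, 0, 1, 1, 1]:
    p = slwe_update(p, observed)

adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
# Hypothetical per-pair feature vector: dynamic estimate + static similarity.
features = [p, common_neighbors(adj, "a", "b")]
print(features)
```

Both features are O(degree) to compute, which is what makes the feature vector inexpensive to build before it is fed to the deep network.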
Several methods exist in the classification literature to quantify the similarity between two time series data sets, ranging from traditional Euclidean-type metrics to the more advanced Dynamic Time Warping metric. Most of these adequately address structural similarity but fail to meet goals beyond it. For example, a tool that is excellent at identifying the seasonal similarity between two time series vectors might prove inadequate in the presence of outliers. In this paper, we propose a unifying measure for binary classification that performs well while embracing several aspects of dissimilarity. This statistic is gaining prominence in various fields, such as geology and finance, and is crucial in time series database formation and clustering studies.
Network theory concepts form the core of algorithms that are designed to uncover valuable insights from various datasets. In particular, network centrality measures such as Eigenvector centrality, Katz centrality, and PageRank centrality are used to retrieve the top-K viral information propagators in social networks, to rank web pages for efficient information retrieval, and so on. In this paper, we propose a novel method for identifying top-K viral information propagators from a reduced search space. Our algorithm computes the Katz centrality and Local average centrality values of each node and tests the values against two threshold (constraint) values. Only the nodes that satisfy these constraints form the search space for the top-K propagators. Our algorithm is tested against four datasets, and the results show that it reduces the number of nodes in the search space by at least 70%. We also considered the dependency of Katz centrality values on the parameters α and β in our experiments and established a relationship between the α values, the number of nodes in the search space, and the network characteristics. Finally, we compare the top-K results of our approach against those of degree centrality.
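A minimal sketch of the filtering idea, under assumptions: Katz centrality is the fixed point of x = αAx + β1 (convergent when α is below the reciprocal of the largest adjacency eigenvalue), and nodes whose score clears a threshold form the reduced search space. The values of `alpha`, `beta`, and the single-threshold filter are illustrative simplifications of the paper's two-constraint (Katz + Local average centrality) test.

```python
def katz_centrality(adj, alpha=0.1, beta=1.0, iters=200):
    # Fixed-point iteration for x = alpha * A x + beta * 1.
    # Converges when alpha < 1 / lambda_max(A); alpha, beta are illustrative.
    x = {v: beta for v in adj}
    for _ in range(iters):
        x = {v: beta + alpha * sum(x[u] for u in adj[v]) for v in adj}
    return x

def reduced_search_space(adj, katz_threshold, alpha=0.1, beta=1.0):
    # Keep only nodes whose Katz score clears the threshold -- a simplified
    # stand-in for the paper's two-threshold constraint check.
    katz = katz_centrality(adj, alpha, beta)
    return {v for v, score in katz.items() if score >= katz_threshold}

# Toy undirected network: a hub "a" connected to three leaves.
adj = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}
print(reduced_search_space(adj, katz_threshold=1.2))  # {'a'}
```

Only the surviving nodes need to be ranked for the top-K answer, which is where the claimed 70%+ reduction in search-space size pays off.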
High utility itemset mining has become an important and critical operation in the data mining field. It generates the most profitable itemsets and the associations among them, which inform business decisions and strategies. Although high utility is important, it is not the sole measure for deciding efficient business strategies such as discount offers. It is very important to consider patterns of itemsets based on frequency as well as utility to predict more profitable itemsets. For example, in a supermarket or restaurant, beverages like champagne or wine might generate high utility (profit) but sell less frequently than beverages like soda or beer. Previous studies observed that people who buy milk, bread, or diapers from a supermarket also tend to buy beer or soda. However, items like milk, diapers, beer, or soda generate less utility (profit value) than beverages like champagne or wine. If we combine items like champagne or wine, which have high utility but low frequency, with frequently sold items like milk, diapers, or beer, we can increase the utility of a transaction by offering discounts on the champagne or wine. In this paper, we integrate low-frequency itemsets with high-frequency itemsets, each having low or high utility, and provide different association rules for these combinations of itemsets. In this way, we can generate a more accurate measure of pattern mining for various business strategies.
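The two quantities being balanced above can be made concrete with a toy example. In the standard high-utility-mining setup, each transaction records item quantities, a profit table gives unit utilities, and an itemset has both a support (how many transactions contain it) and a utility (quantity × unit profit, summed over containing transactions). The item names and numbers below are hypothetical, chosen to mirror the champagne-versus-milk contrast in the abstract.

```python
# Hypothetical unit-profit table and transaction database (items -> quantity).
profit = {"champagne": 30, "milk": 1, "beer": 4, "diapers": 5}
transactions = [
    {"milk": 2, "beer": 6},
    {"milk": 1, "diapers": 3, "beer": 4},
    {"champagne": 1, "milk": 1},
]

def utility(itemset, txn):
    # Utility of an itemset in one transaction: quantity * unit profit,
    # counted only if every item of the set appears in the transaction.
    if not set(itemset) <= set(txn):
        return 0
    return sum(txn[i] * profit[i] for i in itemset)

def support_and_utility(itemset):
    # Support = number of containing transactions; utility = total profit.
    sup = sum(1 for t in transactions if set(itemset) <= set(t))
    util = sum(utility(itemset, t) for t in transactions)
    return sup, util

# "milk" sells often but earns little; "champagne" is rare but lucrative --
# exactly the trade-off the combined frequency + utility measure targets.
print(support_and_utility(("milk",)))       # (3, 4)
print(support_and_utility(("champagne",)))  # (1, 30)
```

A rule pairing a high-support, low-utility item with a low-support, high-utility one is what the proposed integration would surface for discount-offer strategies.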
Classifying short texts into a category or clustering semantically related texts is challenging, and the importance of both tasks is growing with the rise of microblogging platforms, digital news feeds, and the like. We can accomplish this classification and clustering with the help of a deep neural network that produces compact binary representations of a short text and can assign the same category to texts with similar binary representations. However, problems arise when there is little contextual information in the short texts, which makes it difficult for the deep neural network to produce similar binary codes for semantically related texts. We propose to address this issue using semantic enrichment, accomplished by taking the nouns and verbs used in the short texts and generating concepts and co-occurring words from those terms. The nouns are used to generate concepts within the given short text, whereas the verbs are used to prune any ambiguous context present in the text. The enriched text then goes through a deep neural network to produce a prediction label representing the short text's category.
A hypergraph is a generalization of a graph in which the restriction to pairwise affinity scores is lifted in favor of affinity scores evaluated over an arbitrary number of inputs. Hypergraph clustering is the process of finding groups whose members exhibit high similarity to one another and dissimilarity to members outside their group. In this paper, we generalize the well-known MapEquation, an optimization objective used in the clustering of ordinary graphs, to hypergraphs. We develop an agglomerative algorithm, Hypergraph Random Walks (HRW), to find an approximate solution to the generalized MapEquation. Our algorithm requires neither hyperparameter tuning nor any restriction on the underlying hypergraph. We show that our algorithm has strong theoretical performance on the newly defined ring of hyper-cliques and demonstrate that it scales to hypergraphs with large edge sets.
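Since the MapEquation family of objectives is defined in terms of random-walk visit statistics, a common way to lift it to hypergraphs is via a two-stage walk: from the current vertex, pick an incident hyperedge uniformly, then pick a vertex inside that hyperedge uniformly. The sketch below implements that generic walk; it is an illustrative model, not necessarily HRW's exact dynamics.

```python
import random

def hypergraph_walk(vertex_to_edges, edges, start, steps, rng):
    # Two-stage random walk on a hypergraph: vertex -> incident hyperedge
    # (uniform) -> vertex within that hyperedge (uniform). Visit frequencies
    # from such walks are the raw material for MapEquation-style codelengths.
    v = start
    visits = {u: 0 for u in vertex_to_edges}
    for _ in range(steps):
        e = rng.choice(vertex_to_edges[v])
        v = rng.choice(sorted(edges[e]))
        visits[v] += 1
    return visits

# Tiny hypergraph: one 3-vertex hyperedge and one 2-vertex hyperedge
# overlapping at vertex "c".
edges = {"e1": {"a", "b", "c"}, "e2": {"c", "d"}}
vertex_to_edges = {"a": ["e1"], "b": ["e1"], "c": ["e1", "e2"], "d": ["e2"]}
visits = hypergraph_walk(vertex_to_edges, edges, "a", 1000, random.Random(0))
print(visits)
```

Note that the walk may land on the vertex it started the step from (hyperedges are sampled with replacement); variants that exclude the current vertex are also used in the literature.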
Community detection is a fundamental component of large network analysis. In both academia and industry, progressive research has been made on problems related to community network analysis, and community detection is gaining significant attention in the area of network science. Regular and synthetic complex networks have motivated intense interest in studying the fundamental unifying principles of various complex networks. This paper presents a new game-theoretic approach to community detection in large-scale complex networks based on modified modularity; the method builds on modified adjacency and modified Laplacian matrices together with neighborhood similarity, and is used to partition a given network into dense communities. It determines a Nash stable partition, a pure-strategy Nash equilibrium of an appropriately defined strategic game in which the nodes of the network are the players and a node's strategy is the community to which it chooses to belong. Players choose a community so as to maximize their fitness/payoff. The quality of the community networks is assessed using modified modularity along with a new fitness function. Community partitioning is evaluated using Normalized Mutual Information and a `modularity measure', comparing the new game-theoretic community detection algorithm (NGTCDA) with well-studied and well-known algorithms such as Fast Newman, Fast Modularity Detection, and Louvain Community. The quality of a network partition into communities is evaluated by looking at the contribution of each node and its neighbors against the strength of its community.
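To make "Nash stable partition" concrete, here is a best-response sketch for a deliberately simple community-formation game: a node's payoff is the number of neighbors sharing its label, and a node deviates whenever some other label pays strictly more. The payoff is an illustrative stand-in for the paper's modified-modularity fitness; because each profitable deviation increases the number of monochromatic edges, the dynamics terminate at a Nash-stable labeling.

```python
def nash_stable_partition(adj, max_rounds=100):
    # Best-response dynamics: payoff(v) = #neighbors with v's label.
    # Each strictly improving move raises the count of monochromatic edges
    # (a bounded potential), so the loop reaches a Nash-stable partition.
    label = {v: v for v in adj}  # start with singleton communities
    for _ in range(max_rounds):
        changed = False
        for v in adj:
            counts = {}
            for u in adj[v]:
                counts[label[u]] = counts.get(label[u], 0) + 1
            if not counts:
                continue  # isolated node: nothing to deviate to
            best = max(counts, key=counts.get)
            if counts[best] > counts.get(label[v], 0):
                label[v] = best
                changed = True
        if not changed:  # no player has a profitable deviation
            break
    return label

# Small test graph: two triangles joined by the bridge edge (3, 4).
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(nash_stable_partition(adj))
```

With this bare payoff, densely connected regions tend to merge aggressively; the paper's neighborhood-similarity and modified-modularity terms are what keep communities from collapsing into one, so the sketch shows the equilibrium mechanics rather than the full objective.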
Neural networks are at the cutting edge of artificial intelligence, demonstrated to reliably outperform other techniques in machine learning. Within the domain of neural networks, many classes of architecture have been developed for tasks in specific subfields, along with a great diversity of activation functions, loss functions, and other hyperparameters. These networks are often large and computationally expensive to train and deploy, restricting their utility. Furthermore, the fundamental theory behind the effectiveness of particular network architectures and hyperparameters is often not well understood, so practitioners frequently resort to trial and error to optimize model performance. To address these concerns, we propose compact directed acyclic graph neural networks (DAG-NNs) together with an evolutionary approach that automates the optimization of their structure and parameters. Our experimental results demonstrate that our approach consistently outperforms conventional neural networks while employing fewer nodes.
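The evolutionary loop at the heart of such approaches can be sketched in its simplest form, a (1+1) strategy: mutate the current candidate by Gaussian noise and keep the child only if its fitness improves. The toy quadratic fitness and the fixed mutation scale `sigma` are illustrative assumptions; the paper's method additionally evolves the DAG structure, which this sketch does not model.

```python
import random

def evolve(fitness, dim, generations, rng, sigma=0.1):
    # Minimal (1+1) evolutionary strategy over a flat parameter vector:
    # mutate, evaluate, keep the child only on strict improvement.
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        score = fitness(child)
        if score > best:
            parent, best = child, score
    return parent, best

# Toy fitness: maximized (at 0) by the all-zero weight vector.
rng = random.Random(42)
weights, score = evolve(lambda w: -sum(x * x for x in w), dim=3,
                        generations=500, rng=rng)
print(round(score, 3))
```

Replacing the toy fitness with validation accuracy of a decoded DAG-NN, and the Gaussian mutation with structural edits (adding/removing nodes and edges), recovers the general shape of the proposed optimization.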