High-resolution fingerprint recognition has been a hot topic for many years. Compared with a traditional fingerprint image, a high-resolution fingerprint image can provide more features, such as pores and ridge contours. Introducing these features into fingerprint comparison and recognition can improve the recognition accuracy and reduce the risk of identification errors. This paper proposes a novel method for comparing pores on high-resolution fingerprint images. The method can be divided into two steps. In the first step, fingerprints are aligned using the pixel-category-distance-based data-driven descending algorithm. Traditionally, fingerprints are aligned based on feature points, such as minutiae and singular points. Such alignment methods are not suitable when dealing with partial fingerprints because small overlapping areas often do not contain enough features to guarantee a correct alignment. In this research, the ridges and valleys on fingerprints are used in combination with the orientation field for alignment. The proposed algorithm performs well when aligning both partial and full fingerprints. The common areas between the two images can be estimated based on the alignment result. In the second step, pores lying in the common areas are selected for comparison. To improve the comparison accuracy, pores are compared using local features and spatial relations. A graph comparison algorithm is designed in this step. The experimental results show that the proposed method is more accurate than other state-of-the-art pore comparison algorithms.
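As a purely illustrative sketch of the second step, a greedy matcher could pair pores by a local descriptor and accept a pair only if it preserves pairwise distances with already-accepted pairs. The function name, tolerances, and the scalar descriptor below are assumptions for illustration, not the paper's actual algorithm:

```python
import math

def match_pores(pores_a, pores_b, desc_tol=0.3, dist_tol=5.0):
    """Greedy pore matching sketch: pores are (x, y, descriptor) tuples.
    A candidate pair must have similar local descriptors; an accepted
    match must also keep pairwise spatial distances consistent with the
    matches accepted so far (the 'spatial relations' constraint)."""
    matches = []
    used_b = set()
    for i, (xa, ya, da) in enumerate(pores_a):
        best, best_cost = None, desc_tol
        for j, (xb, yb, db) in enumerate(pores_b):
            if j in used_b:
                continue
            cost = abs(da - db)  # local-feature similarity
            if cost < best_cost:
                best, best_cost = j, cost
        if best is None:
            continue
        # spatial-relation check against previously accepted matches
        xb, yb, _ = pores_b[best]
        ok = all(
            abs(math.dist((xa, ya), (pores_a[i2][0], pores_a[i2][1])) -
                math.dist((xb, yb), (pores_b[j2][0], pores_b[j2][1]))) <= dist_tol
            for i2, j2 in matches
        )
        if ok:
            matches.append((i, best))
            used_b.add(best)
    return matches
```

A real system would use richer pore descriptors and a proper graph comparison rather than this greedy pass, but the structure (local features filtered by spatial consistency) is the same.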
Computing systems demand stringent security checks to guard themselves against powerful attack vectors. Deployment of Control Flow Integrity (CFI) checks limits the control flow of a program to a set of valid destinations derived from static analysis of the code. CFI checks guarantee the integrity of the executed code, but they do not guarantee the integrity of execution, as sophisticated attack vectors create attacks that comply with the normal control flow of the program. This paper proposes an application-specific, dynamic, execution integrity verification scheme to detect static and dynamic integrity breaches. The system builds a behaviour model consisting of function call sequences and memory access graphs to verify the semantic behaviour exhibited by the application. Dynamic variation of function behaviour is further recorded as an Access Graph Variance Vector (AGVV). Temporal sequencing of function calls, together with memory access graph comparison, yields a two-level integrity verification system. Apart from detecting deviations from normal behaviour, the proposed method also prevents abnormal behaviour from propagating. Experimental evaluation of the proposed method on applications from the MiBench benchmark suite shows 100% verification accuracy against static integrity breaches with a false positive rate below 5%. The system also identifies dynamic integrity violations caused by ROP-based attacks from the RIPE benchmark suite, achieving an accuracy of 94.44% against dynamic integrity breaches.
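A minimal sketch of the first verification level, assuming traces are simple lists of function names: learn the valid caller-to-callee transitions from benign runs, then flag any transition not seen during training. The class name and trace format are our own simplifications, not the paper's AGVV machinery:

```python
from collections import defaultdict

class CallSequenceModel:
    """Sketch of call-sequence integrity checking: record the set of
    valid (caller -> callee) transitions observed in benign traces,
    then flag the first unseen transition in a monitored trace."""

    def __init__(self):
        self.valid = defaultdict(set)

    def train(self, trace):
        # learn every adjacent pair in a benign trace
        for a, b in zip(trace, trace[1:]):
            self.valid[a].add(b)

    def verify(self, trace):
        # index of the first invalid transition, or -1 if the trace is clean
        for i, (a, b) in enumerate(zip(trace, trace[1:])):
            if b not in self.valid[a]:
                return i
        return -1
```

The paper pairs this temporal level with memory access graph comparison to catch attacks that preserve the call sequence; this sketch covers only the sequence level.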
The mutual information between graphs
Escolano, Francisco; Hancock, Edwin R.; Lozano, Miguel A.
Pattern Recognition Letters, vol. 87, 02/2017. Journal article, peer-reviewed, open access.
•Formulation of graph similarity in terms of structural information channels.
•Posing mutual information between graphs in terms of manifold alignment.
•Efficient bypass estimators of mutual information between graphs using copulas.
•Outperforming state-of-the-art graph similarities.
The estimation of mutual information between graphs was an elusive problem until graph matching was formulated in terms of manifold alignment. In that formulation, graphs are mapped to multi-dimensional sets of points through structure-preserving embeddings, and point-wise alignment algorithms can then be exploited to re-cast graph matching as point matching. Methods based on bypass entropy estimation must be deployed to render the estimation of mutual information computationally tractable. The novel contribution of this paper is to show how manifold alignment can be combined with copula-based entropy estimators to efficiently estimate the mutual information between graphs. We compare the empirical copula with an Archimedean copula (the independent one) in terms of retrieval/recall after graph comparison. Our experiments show that mutual information built on either choice significantly improves on state-of-the-art divergences.
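To make the copula idea concrete, here is a heavily simplified sketch (our own illustration, not the paper's estimator), assuming graphs have already been embedded and aligned into paired 1-D coordinate sequences: the empirical copula transform replaces each marginal with normalized ranks, after which a histogram estimate of the negative copula entropy gives the mutual information:

```python
import math

def rank_transform(xs):
    """Empirical copula transform: map values to normalized ranks in (0, 1]."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    u = [0.0] * len(xs)
    for r, i in enumerate(order, 1):
        u[i] = r / len(xs)
    return u

def mutual_information(xs, ys, bins=4):
    """Histogram MI estimate on copula-transformed data. Because the
    rank transform makes both marginals uniform, the estimate reduces
    to the negative entropy of the empirical copula density."""
    n = len(xs)
    u, v = rank_transform(xs), rank_transform(ys)
    counts = {}
    for a, b in zip(u, v):
        cell = (min(int(a * bins), bins - 1), min(int(b * bins), bins - 1))
        counts[cell] = counts.get(cell, 0) + 1
    mi = 0.0
    for c in counts.values():
        p = c / n
        # marginals contribute 1/bins each after the rank transform
        mi += p * math.log(p * bins * bins)
    return mi
```

Strongly dependent coordinates concentrate mass on few cells and yield large MI; independent coordinates spread mass uniformly and yield MI near zero. The paper's estimators are more sophisticated, but the bypass principle (estimating MI without density estimation on the raw embeddings) is the same.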
The graph-theoretical approach has proved an effective tool to understand, characterize, and quantify complex brain networks. However, much less attention has been paid to methods that quantitatively compare two graphs, a crucial issue in the context of brain networks. Comparing brain networks is indeed mandatory in several network neuroscience applications. Here, we discuss the current state of the art, the challenges, and a collection of analysis tools developed in recent years to compare brain networks. We first introduce the graph similarity problem in brain network applications. We then describe the methodological background of the available metrics and algorithms for comparing graphs, along with their strengths and limitations. We also report results obtained in concrete applications using normal brain networks. More precisely, we show the potential use of brain network similarity to build a “network of networks” that may give new insights into object categorization in the human brain. Additionally, we discuss future directions in terms of network similarity methods and applications.
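One of the simplest node-aligned baselines in this family, applicable when both networks share the same parcellation (node labels), is the Jaccard similarity of their edge sets. This is a generic illustration, not a method endorsed by the survey:

```python
def edge_jaccard(edges_a, edges_b):
    """Jaccard similarity between two networks given as lists of
    undirected edges over a shared node labeling: size of the edge
    intersection divided by size of the edge union."""
    a = {frozenset(e) for e in edges_a}  # frozenset ignores edge direction
    b = {frozenset(e) for e in edges_b}
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Node-aligned measures like this are fast but require a common atlas; the unaligned case needs the structural comparison algorithms the survey reviews.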
In software development, OMG's MDA (Model Driven Architecture) can be regarded as the practice of designing a model independently of any development environment and language, and then reusing it in the desired environment and language by treating the software model as the reusable unit. Existing research on design-model verification defines a formal representation in the form of an abstract syntax tree and checks the design model's information against it, as has been shown for the validation of UML design models. However, such approaches require additional, more complex definitions and are not well suited to verifying model transformations. In this paper, we define a verification meta-model for the input and target models, and we show how to perform model transformation verification using a property-matching-based transformation similarity measure and a graph comparison algorithm. To support verification of the target model generated from a source model, we define verification meta-models for the UML model, the RDBMS model, and the RT-UML model. Recent model-driven-architecture research has performed only partial tests, focusing on syntactic correctness of the converted software model from the perspective of reuse. Existing model transformation verification methods, such as plain graph comparison or pattern matching that considers only syntactic correctness, share these limitations. To remedy these problems, this study defines a meta-model enriched with structural attributes and property information, together with transformation profiles derived from the transformation rules, and it improves the graph comparison algorithm that checks whether the information obtained from the transformation is correct, thereby also supporting semantic correctness.
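The property-matching idea can be sketched as follows, using a hypothetical element representation (each model element as a dictionary of properties) rather than the paper's actual profile format: transformation similarity is the fraction of source elements that find a sufficiently similar target element:

```python
def property_similarity(elem_a, elem_b):
    """Similarity of two model elements by property matching: the
    fraction of (property, value) pairs shared between them."""
    a, b = set(elem_a.items()), set(elem_b.items())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def transformation_similarity(src_elems, tgt_elems, threshold=0.5):
    """Rough transformation-correctness score: the fraction of source
    elements that have at least one sufficiently similar target element."""
    if not src_elems:
        return 1.0
    hits = sum(
        1 for s in src_elems
        if any(property_similarity(s, t) >= threshold for t in tgt_elems)
    )
    return hits / len(src_elems)
```

In the paper, this scoring is combined with a graph comparison over the model structure so that relationships between elements, not just their properties, are verified.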
Soft sets provide a suitable framework for representing and dealing with vagueness. One scenario for vagueness is that alternatives are composed of specific factors, and these factors have specific attributes. For this scenario, this paper introduces a soft order and its associated order topology on soft sets with a novel approach. We first present the definitions and properties of soft order relations on soft sets via soft elements. Next, we define the soft order topology on any soft set and establish some of its properties. To apply the soft orders we introduce, we describe soft preference and soft utility mappings on soft sets, and we finally demonstrate a decision-making application of soft orders intended for comparing graphs.
The convergence of extremely high levels of hardware concurrency and the effective overlap of computation and communication in asynchronous executions has resulted in increasing nondeterminism in High-Performance Computing (HPC) applications. Nondeterminism can manifest at multiple levels: from low-level communication primitives to libraries to application-level functions. No matter its source, nondeterminism can drastically increase the cost of result reproducibility, debugging workflows, testing parallel programs, or ensuring fault-tolerance. Nondeterministic executions of HPC applications can be modeled as event graphs, and the applications’ nondeterministic behavior can be understood and, in some cases, mitigated using graph comparison algorithms. However, a connection between graph comparison algorithms and approaches to understanding nondeterminism in HPC still needs to be established. This survey article takes the first steps toward establishing that connection with three contributions: it provides a survey of different graph comparison algorithms and a timeline of each category’s significant works; it discusses how existing graph comparison methods do not fully support the properties needed to understand nondeterministic patterns in HPC applications; and it presents the open challenges that should be addressed to leverage the power of graph comparisons for the study of nondeterminism in HPC applications.
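As one example of the surveyed family of algorithms, a Weisfeiler-Leman-style label-refinement signature can compare two labeled event graphs up to node renaming. This is a generic sketch with an assumed dict-of-successors input, not tied to any particular HPC trace format:

```python
from collections import Counter

def wl_signature(graph, labels, rounds=2):
    """Weisfeiler-Leman-style signature of a labeled event graph:
    repeatedly fold each node's label together with the sorted labels
    of its successors, then count the resulting refined labels.
    graph: dict node -> list of successor nodes; labels: dict node -> label."""
    cur = dict(labels)
    for _ in range(rounds):
        cur = {
            v: (cur[v], tuple(sorted(cur[w] for w in graph.get(v, ()))))
            for v in cur
        }
    return Counter(cur.values())

def wl_similar(g1, l1, g2, l2):
    # equal signatures: the graphs are indistinguishable by this test
    return wl_signature(g1, l1) == wl_signature(g2, l2)
```

Equal signatures do not prove isomorphism, and the test says nothing about event ordering or happens-before semantics, which is precisely the kind of gap the survey identifies between generic graph comparison and HPC nondeterminism analysis.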
Alignments of discrete objects can be constructed in a very general setting as super-objects from which the constituent objects are recovered by means of projections. Here, we focus on contact maps, i.e. undirected graphs with an ordered set of vertices. These serve as natural discretizations of RNA and protein structures. In the general case, the alignment problem for vertex-ordered graphs is NP-complete. In the special case of RNA secondary structures, i.e. crossing-free matchings, however, the alignments have a recursive structure. The alignment problem then can be solved by a variant of the Sankoff algorithm in polynomial time. Moreover, the tree or forest alignments of RNA secondary structure can be understood as the alignments of ordered edge sets.
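The crossing-free condition that makes RNA secondary structures tractable can be checked directly: no two base pairs may interleave. This is a simple quadratic sketch of the structural condition only; the alignment itself requires the Sankoff-style dynamic program mentioned above:

```python
def crossing_free(pairs):
    """Check that a set of base pairs (i, j) with i < j over an ordered
    vertex set is crossing-free (nested or disjoint), i.e. a valid RNA
    secondary structure: no two pairs satisfy i < k < j < l."""
    for (i, j) in pairs:
        for (k, l) in pairs:
            if i < k < j < l:
                return False
    return True
```

It is exactly this nesting property that gives alignments of crossing-free matchings their recursive structure, while general vertex-ordered graphs (with crossings allowed) make the problem NP-complete.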
Networks are a fundamental and flexible way of representing various complex systems. Many domains such as communication, citation, procurement, biology, social media, and transportation can be modeled as a set of entities and their relationships. Temporal networks are a specialization of general networks where every relationship occurs at a discrete time. The temporal evolution of such networks is as important to understand as the structure of the entities and relationships. We present the Independent Temporal Motif (ITeM) to characterize temporal graphs from different domains. ITeMs can be used to model the structure and the evolution of the graph. In contrast to existing work, ITeMs are edge-disjoint directed motifs that measure the temporal evolution of ordered edges within the motif. For a given temporal graph, we produce a feature vector of ITeM frequencies and the time it takes to form the ITeM instances. We apply this distribution to measure the similarity of temporal graphs. We show that ITeM has higher accuracy than other motif frequency-based approaches. We define various ITeM-based metrics that reveal salient properties of a temporal network. We also present importance sampling as a method to efficiently estimate the ITeM counts. We present a distributed implementation of the ITeM discovery algorithm using Apache Spark and GraphFrame. We evaluate our approach on both synthetic and real temporal networks.
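A toy version of edge-disjoint temporal motif counting, limited to a two-edge "chain" motif (our own minimal example, not the ITeM implementation): each edge may participate in at most one motif instance, and the second edge must follow the first within a time window:

```python
def count_chain_motifs(edges, delta):
    """Count edge-disjoint temporal chain motifs u->v->w, where the
    second edge starts at the first edge's destination and occurs
    within `delta` time units. edges: list of (src, dst, t) tuples,
    assumed sorted by timestamp t."""
    used = set()   # indices of edges already consumed by a motif
    count = 0
    for i, (u, v, t1) in enumerate(edges):
        if i in used:
            continue
        for j in range(i + 1, len(edges)):
            if j in used:
                continue
            u2, v2, t2 = edges[j]
            if t2 - t1 > delta:
                break  # edges are time-sorted; no later edge can qualify
            if u2 == v and v2 != u:  # continues the chain without backtracking
                used.update((i, j))
                count += 1
                break
    return count
```

The edge-disjointness here is the distinguishing property of ITeMs versus ordinary motif counts, where a single edge can be counted in many overlapping instances.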
Content generation that is both relevant and up to date with the current threats of the target audience is a critical element in the success of any cyber security exercise (CSE). Through this work, we explore the results of applying machine learning techniques to unstructured information sources to generate structured CSE content. The corpus of our work is a large dataset of publicly available cyber security articles that have been used to predict future threats and to form the skeleton for new exercise scenarios. Machine learning techniques, like named entity recognition and topic extraction, have been utilised to structure the information based on a novel ontology we developed, named Cyber Exercise Scenario Ontology (CESO). Moreover, we used clustering with outliers to classify the generated extracted data into objects of our ontology. Graph comparison methodologies were used to match generated scenario fragments to known threat actors’ tactics and help enrich the proposed scenario accordingly with the help of synthetic text generators. CESO has also been chosen as the prominent way to express both fragments and the final proposed scenario content by our AI-assisted Cyber Exercise Framework. Our methodology was assessed by providing a set of generated scenarios for evaluation to a group of experts to be used as part of a real-world awareness tabletop exercise.
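A deliberately simple illustration of the fragment-to-tactic matching step, using a hypothetical edge-list representation (the actual work compares richer graphs of CESO objects): score a scenario fragment by the fraction of a known tactic graph's edges that it contains:

```python
def tactic_match_score(fragment_edges, tactic_edges):
    """Score how well a generated scenario fragment covers a known
    tactic graph: the fraction of the tactic's directed edges that
    also appear in the fragment (1.0 = full coverage)."""
    if not tactic_edges:
        return 0.0
    t = set(tactic_edges)
    return len(t & set(fragment_edges)) / len(t)
```

A fragment scoring highly against a tactic would then be a candidate for enrichment with that tactic's remaining steps, which is the role graph comparison plays in the described pipeline.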