Static analysis tools have showcased their importance and usefulness in automated detection of defects. However, the tools are known to generate a large number of alarms, which are warning messages to the user. The large number of alarms and the cost incurred by their manual inspection have been identified as two major reasons for underuse of the tools in practice. To address these concerns, numerous studies propose postprocessing of alarms: processing the alarms after they are generated. These studies differ greatly in their approaches to postprocessing alarms. A comprehensive overview of the postprocessing approaches is, however, missing. In this article, we review 130 primary studies that propose postprocessing of alarms. The studies are collected by combining keyword-based database search and snowballing. We categorize approaches proposed by the collected studies into six main categories: clustering, ranking, pruning, automated elimination of false positives, combination of static and dynamic analyses, and simplification of manual inspection. We provide an overview of the categories and the sub-categories identified within them, their merits and shortcomings, and the different techniques used to implement the approaches. Furthermore, we provide (1) guidelines for the selection of postprocessing techniques by users and designers of static analysis tools; and (2) directions that researchers can explore.
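Two of the categories named above, clustering and ranking, can be illustrated with a minimal sketch. All names and the scoring heuristic here are hypothetical, not taken from any particular surveyed study: alarms sharing a checker and message are grouped so one representative per cluster is inspected, and clusters are ranked by size.

```python
from collections import defaultdict

# Hypothetical alarm record: (file, line, checker, message)
alarms = [
    ("util.c", 42, "NULL_DEREF", "possible null dereference of p"),
    ("util.c", 57, "NULL_DEREF", "possible null dereference of p"),
    ("io.c", 10, "BUF_OVERFLOW", "index may exceed buffer size"),
]

def cluster_alarms(alarms):
    """Group alarms that share a checker and message text, so a single
    representative can be inspected per cluster."""
    clusters = defaultdict(list)
    for f, line, checker, msg in alarms:
        clusters[(checker, msg)].append((f, line))
    return clusters

def rank_clusters(clusters):
    """Rank clusters by size: a defect reported many times is
    (heuristically) assumed more worth inspecting first."""
    return sorted(clusters.items(), key=lambda kv: -len(kv[1]))

ranked = rank_clusters(cluster_alarms(alarms))
for (checker, msg), members in ranked:
    print(checker, len(members))
```

Real tools use far richer features (program paths, fix locations, history), but the postprocessing shape is the same: group, score, then present.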
Symbolic protocol verification with dice. Cheval, Vincent; Crubillé, Raphaëlle; Kremer, Steve. Journal of Computer Security, 10/2023, Volume 31, Issue 5. Journal Article. Peer reviewed.
Symbolic protocol verification generally abstracts probabilities away, considering computations that succeed only with negligible probability, such as guessing random numbers or breaking an encryption scheme, as impossible. This abstraction, sometimes referred to as the perfect cryptography assumption, has proven very useful as it simplifies automation of the analysis. However, probabilities may also appear in the control flow, where they are generally not negligible. In this paper we consider a framework for symbolic protocol analysis with a probabilistic choice operator: the probabilistic applied π-calculus. We define and explore the relationships between several behavioral equivalences. In particular we show the need for randomized schedulers and exhibit a counter-example to a result in a previous work that relied on non-randomized ones. As in other frameworks that mix both non-deterministic and probabilistic choices, schedulers may sometimes be unrealistically powerful. We therefore consider two subclasses of processes that avoid this problem. In particular, when considering purely non-deterministic protocols, as is done in classical symbolic verification, we show that a probabilistic adversary has, maybe surprisingly, a strictly superior distinguishing power for may testing, which, when the number of sessions is bounded, we show to coincide with purely possibilistic similarity.
Lot-to-lot verification is an important laboratory activity that is performed to monitor the consistency of analytical performance over time. In this opinion paper, the concept, clinical impact, challenges and potential solutions for lot-to-lot verification are examined.
•SM-DTW is a weighted DTW algorithm based on the concept of stability regions and handwriting generation models.
•Dissimilarities inside stability regions and similarities outside them are penalized.
•The performance is evaluated by varying the set of features used for representing signatures.
•SM-DTW improves the performance of a classical DTW system and outperforms some state-of-the-art ASV systems.
•Stability regions exploit shape information better than the whole feature set.
Building upon findings in computational models of handwriting learning and execution, we introduce the concept of stability to explain the difference between the actual movements performed during multiple executions of the subject's signature, and conjecture that the most stable parts of the signature should play a paramount role in evaluating the similarity between a questioned signature and the reference ones during signature verification. We then introduce the Stability Modulated Dynamic Time Warping algorithm for incorporating the stability regions, i.e. the most similar parts between two signatures, into the distance measure between a pair of signatures computed by Dynamic Time Warping for signature verification. Experiments were conducted on two datasets widely adopted for performance evaluation. Experimental results show that the proposed algorithm improves the performance of the baseline system and compares favourably with other top performing signature verification systems.
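The core idea above, modulating the DTW local cost by a per-cell weight, can be sketched with a toy implementation. This is not the paper's SM-DTW: the weight function here is an arbitrary stand-in for the stability-region weighting the authors describe, and the sequences are 1-D scalars rather than multi-feature signature trajectories.

```python
def dtw(x, y, weight=lambda i, j: 1.0):
    """Dynamic Time Warping with a per-cell weight on the local
    distance. In a stability-modulated variant, weight(i, j) would be
    larger when the aligned samples fall inside a stability region,
    penalizing dissimilarities there; here it is a free parameter."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = weight(i - 1, j - 1) * abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2], [0, 1, 2]))  # identical sequences -> 0.0
print(dtw([0, 1], [0, 2]))        # plain DTW distance -> 1.0
print(dtw([0, 1], [0, 2], weight=lambda i, j: 2.0))  # doubled cost -> 2.0
```

Doubling the weight doubles the mismatch penalty, which is exactly the lever a stability-based scheme pulls: genuine signatures disagreeing inside a stable region pay more than they would under uniform DTW.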
Kinship verification is a very important technique in many real-world applications, e.g., personal album organization, missing person investigation and forensic analysis. However, it is extremely difficult to verify a family pair with a generation gap, e.g., father and son, since there exist both an age gap and identity variation. It is essential to overcome such challenges to achieve promising kinship verification performance. To this end, we propose a towards-young cross-generation model for effective kinship verification that mitigates both age and identity divergences. Specifically, we explore a conditional generative model to introduce an intermediate domain that bridges each pair. Thus, we can extract more effective features through deep architectures with a newly-designed Sparse Discriminative Metric Loss (SDM-Loss), which exploits both positive and negative information. Experimental results on kinship benchmarks demonstrate the superiority of our proposed model in comparison with state-of-the-art kinship verification methods.
The data race problem is common in interrupt-driven programs, and such races are difficult to find as a result of complicated interrupt interleavings. Static analysis is a mainstream technology for detecting these problems; however, interrupt synchronization mechanisms are hard to handle with existing methods, which produces many false alarms. Eliminating false alarms in static analysis is the main challenge for precise data race detection. In this paper, we present a framework of static analysis combined with program verification, which performs static analysis to find all potential races and then verifies every race to eliminate false alarms. The experimental results on related race benchmarks show that our implementation finds all race bugs in the static analysis phase and eliminates all false alarms through program verification.
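The two-phase scheme described above can be sketched abstractly. Everything here is hypothetical: the candidate format, the crude "interrupt disabled" synchronization model, and the verifier are stand-ins for the paper's actual static analysis and program verification components.

```python
def two_phase_race_detection(candidates, verify):
    """Phase 1 (done elsewhere) over-approximates the set of potential
    races; phase 2 runs a verifier on each candidate and keeps only
    those confirmed feasible, discarding false alarms."""
    return [race for race in candidates if verify(race)]

# Toy candidate races: each records whether interrupts were disabled
# around the two conflicting accesses (a crude synchronization model).
candidates = [
    {"var": "g", "irq_disabled": (True, True)},   # protected -> false alarm
    {"var": "h", "irq_disabled": (True, False)},  # unprotected -> real race
]

# Toy verifier: a race is infeasible only if both accesses ran with
# interrupts disabled.
feasible = two_phase_race_detection(
    candidates, lambda r: not all(r["irq_disabled"]))
print([r["var"] for r in feasible])  # -> ['h']
```

The value of the design is the division of labor: the cheap static phase guarantees no race is missed, while the expensive verification phase is invoked only on the finite candidate list.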
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: Invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: Probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.
Deep Neural Network (DNN) testing is one of the most widely-used ways to guarantee the quality of DNNs. However, labeling test inputs to check the correctness of DNN predictions is very costly, which can largely affect the efficiency of DNN testing, and even the whole process of DNN development. To relieve the labeling-cost problem, we propose a novel test input prioritization approach (called PRIMA) for DNNs via intelligent mutation analysis, in order to label more bug-revealing test inputs earlier within a limited time, which helps improve the efficiency of DNN testing. PRIMA is based on a key insight: a test input that is able to kill many mutated models and produce different prediction results with many mutated inputs is more likely to reveal DNN bugs, and thus it should be prioritized higher. After obtaining a number of mutation results from a series of specially designed model and input mutation rules for each test input, PRIMA further incorporates learning-to-rank (a kind of supervised machine learning for solving ranking problems) to intelligently combine these mutation results for effective test input prioritization. We conducted an extensive study based on 36 popular subjects by carefully considering their diversity along five dimensions (i.e., different domains of test inputs, different DNN tasks, different network structures, different types of test inputs, and different training scenarios). Our experimental results demonstrate the effectiveness of PRIMA, significantly outperforming the state-of-the-art approaches (with an average improvement of 8.50%~131.01% in terms of prioritization effectiveness). In particular, we have applied PRIMA to practical autonomous-vehicle testing at a large motor company, and the results on 4 real-world scene-recognition models in autonomous vehicles further confirm the practicability of PRIMA.
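The kill-count intuition behind the abstract above can be shown with a toy sketch. This is not PRIMA itself (which combines many mutation features via learning-to-rank); the threshold "models" and the plain kill-count score are illustrative stand-ins.

```python
def prioritize(test_inputs, model, mutated_models):
    """Score each test input by how many mutated models it 'kills',
    i.e. whose prediction differs from the original model's, then sort
    descending so likely bug-revealing inputs are labeled first.
    `model` and each mutant are callables mapping input -> label."""
    def kill_count(x):
        base = model(x)
        return sum(1 for m in mutated_models if m(x) != base)
    return sorted(test_inputs, key=kill_count, reverse=True)

# Toy example: the "model" thresholds at 0.5; mutants perturb the
# threshold. Inputs near the decision boundary kill more mutants.
model = lambda x: x > 0.5
mutants = [lambda x: x > 0.4, lambda x: x > 0.6, lambda x: x > 0.55]
inputs = [0.1, 0.45, 0.58, 0.9]
print(prioritize(inputs, model, mutants))
```

The boundary-adjacent inputs (0.45 and 0.58) surface first, matching the insight that inputs whose predictions are unstable under mutation are the ones most worth labeling early.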