Uppaal SMC tutorial
David, Alexandre; Larsen, Kim G.; Legay, Axel ...
International Journal on Software Tools for Technology Transfer, 08/2015, Volume 17, Issue 4
Journal Article · Peer reviewed · Open access
This tutorial paper surveys the main features of Uppaal SMC, a model checking approach in the Uppaal family that allows us to reason about networks of complex real-time systems with stochastic semantics. We demonstrate the modeling features of the tool, new verification algorithms, and ways of applying them to potentially complex case studies.
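The core loop of statistical model checking can be sketched in a few lines: instead of exploring the state space exhaustively, the tool samples runs of the stochastic model and estimates the probability that a property holds, together with a confidence bound. The three-stage model below is a hypothetical stand-in, not an Uppaal SMC model:

```python
import math
import random

# Minimal sketch of the simulation loop behind statistical model checking
# (SMC); the three-stage timed model is hypothetical, not an Uppaal model.
def simulate_run(deadline, rng, rate=1.0):
    """One stochastic run: three stages with exponential delays; the
    property holds if all stages complete before the deadline."""
    t = 0.0
    for _ in range(3):
        t += rng.expovariate(rate)
    return t <= deadline

def estimate_probability(deadline, runs=10000, seed=42):
    """Monte Carlo estimate of P(property) with a 95% normal-approximation
    confidence half-width -- the core of SMC probability estimation."""
    rng = random.Random(seed)
    hits = sum(simulate_run(deadline, rng) for _ in range(runs))
    p = hits / runs
    half_width = 1.96 * math.sqrt(p * (1 - p) / runs)
    return p, half_width

p, hw = estimate_probability(deadline=5.0)
print(f"P(all stages finish by t=5) ~ {p:.3f} +/- {hw:.3f}")
```

In a real SMC engine the number of runs is chosen adaptively from the requested precision and confidence, rather than fixed in advance as here.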
BPMN collaboration models have acquired increasing relevance in software development, since they shorten the communication gap between domain experts and IT specialists and permit clarifying the characteristics of software systems needed to provide automatic support for the activities of complex organizations. Nonetheless, the lack of effective formal verification capabilities can hinder the full adoption of the BPMN standard by IT specialists, as it prevents precisely checking the satisfaction of behavioral properties, with negative impacts on the quality of the software. To address these issues, this paper proposes BProVe, a novel verification approach for BPMN collaborations. It combines standard model checking techniques, through Maude's LTL model checker, and statistical model checking techniques, through the statistical analyzer MultiVeStA. The latter makes BProVe effective also on those scenarios suffering from the state-space explosion problem, made even more acute by the presence of asynchronous message exchanges. To support the adoption of the BProVe approach, we propose a complete web-based tool-chain that allows for BPMN modeling, verification, and result exploration. The feasibility of BProVe has been validated both on synthetically generated models and on models retrieved from two public repositories. The performed validation highlighted the importance and complementarity of the two supported verification strategies.
• We provide a novel verification approach for BPMN collaborations.
• We combine both standard and statistical model checking techniques.
• We propose a complete web-based tool-chain.
Bounded Model Checking (BMC) is one of the most prominent approaches used as a falsification engine, capable of identifying counterexamples of bounded length in a scalable and sustainable way. Nevertheless, in the context of a portfolio-based verification suite, BMC can benefit from potential interaction with other engines, exploiting their capabilities and partial results as a form of application-dependent learning. In the past, previous works tackled the issue of using over-approximated state sets generated via Binary Decision Diagram (BDD) based traversals. In a sense, BDD engines can be considered as external tools, whereas interpolants are directly related to BMC problems. Since interpolants come from Boolean satisfiability (SAT) refutation proofs, their role as a form of SAT-based learning can be potentially higher. Furthermore, their integration is more tightly linked to the BMC problem at hand. In this paper we aim at improving the efficiency of SAT calls in BMC problems, exploiting interpolation-based invariants, computed over different cut points, as additional constraints. We experimentally evaluate the costs and benefits of our proposed approach on a set of publicly available model checking problems.
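The unrolling at the heart of BMC can be illustrated explicit-state for readability: search for a counterexample of length at most k. Real BMC engines instead encode the same unrolling, I(s0) ∧ T(s0,s1) ∧ … ∧ T(s(k−1),sk) ∧ bad(sk), as a propositional formula handed to a SAT solver. The transition system below is a hypothetical mod-8 counter that may reset at any step:

```python
# Explicit-state sketch of the BMC unrolling; a SAT-based engine would
# encode the k-step unrolling symbolically instead of enumerating paths.
def successors(state):
    """Hypothetical transition relation: increment mod 8, or reset to 0."""
    return {(state + 1) % 8, 0}

def bmc(init, bad, k):
    """Return a path of length <= k from `init` to a bad state, or None."""
    frontier = [[init]]
    for _ in range(k):
        next_frontier = []
        for path in frontier:
            for s in successors(path[-1]):
                extended = path + [s]
                if bad(s):
                    return extended  # bounded-length counterexample
                next_frontier.append(extended)
        frontier = next_frontier
    return None  # no counterexample within bound k

cex = bmc(init=0, bad=lambda s: s == 5, k=6)
print("counterexample:", cex)
```

Interpolant-derived invariants, as in the paper, would be added as extra constraints pruning the unrolled search at each cut point.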
The study of business process analysis and optimization has attracted significant scholarly interest in the recent past, due to its integral role in boosting organizational performance. A specific area of focus within this broader research field is Process Mining (PM). Its purpose is to extract knowledge and insights from event logs maintained by information systems, thereby discovering process models and identifying process-related issues. The goal of the current study is to examine how Quantitative Model Checking (QMC) approaches might be applied in the context of PM. Model checking is a well-known verification approach that provides thorough analysis and validation of a system's properties against a predetermined model. The adoption of QMC is aimed at improving the accuracy, reliability, and comprehensiveness of PM models in stochastic environments. We propose a novel methodology in this research direction, which integrates QMC with PM by formally modelling discovered and replayed process models and applying QMC methods to verify them. The potential of QMC to overcome significant drawbacks of the existing methodologies is the main driver for its use in PM. By including probabilistic model verification, it is possible to take into account the uncertainties and stochastic behaviour frequently present in real-world systems, while statistical model checking methods are used where probabilistic methods fail or are unsuitable, for example to handle complex models or models with large state spaces.
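The flavour of combining PM with quantitative model checking can be conveyed by a hedged sketch (the log, states, and property below are illustrative, not the paper's methodology): estimate a discrete-time Markov chain from an event log, then verify a probabilistic reachability property on it by value iteration:

```python
from collections import defaultdict

# Toy illustration: discover a DTMC from an event log, then model-check
# "probability of eventually reaching 'done' while avoiding 'abort'".
log = [
    ["start", "review", "done"],
    ["start", "review", "rework", "review", "done"],
    ["start", "review", "abort"],
    ["start", "review", "rework", "review", "done"],
]

# Count transitions and normalize into estimated transition probabilities.
counts = defaultdict(lambda: defaultdict(int))
for trace in log:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
P = {s: {t: n / sum(nxt.values()) for t, n in nxt.items()}
     for s, nxt in counts.items()}

def prob_reach(target, avoid, iters=200):
    """Value iteration for P(eventually reach target, never visiting avoid)."""
    v = defaultdict(float)
    v[target] = 1.0
    for _ in range(iters):
        for s, nxt in P.items():
            if s in (target, avoid):
                continue
            v[s] = sum(p * v[t] for t, p in nxt.items())
    return dict(v)

v = prob_reach("done", "abort")
print(f"P(start |= F done) = {v['start']:.3f}")
```

For this log, review leaves to done with probability 1/2, to rework with 1/3 and to abort with 1/6, so the reachability value solves x = 1/2 + (1/3)x, giving 0.75.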
The distributed temporal logic (DTL) is a logic for reasoning about temporal properties of distributed systems from the local point of view of the system’s agents, which are assumed to execute sequentially and to interact by means of synchronous event sharing. Different versions of DTL have been proposed over the years for a number of different applications, reflecting different perspectives on how non-local information can be accessed by each agent. In a recent paper, an automata-theoretic approach to model check DTL was proposed by Subtil et al. (2020, Technical Report). Herein, we follow a different approach and adapt the bounded model-checking (BMC) algorithm for linear temporal logic to the case of DTL (see Biere et al. (2003, Adv. Comput., 58, 117–148) and Biere et al. (1999, TACAS 1999, 193–207)). For that purpose, a new notion of bounded semantics for DTL is proposed. In the BMC approach, the witness problem is translated to the satisfiability of a propositional formula that can be addressed (efficiently) by SAT solvers. An important application for this approach is the verification of security protocols (Basin et al. (2011, Theoret. Comput. Sci., 412, 4007–4043); Caleiro et al. (2005, Electron. Notes Theor. Comput. Sci., 125, 67–89)).
High-performance data streaming technologies are increasingly adopted in IT companies to support the integration of heterogeneous and possibly distributed applications. Compared with traditional message queuing middleware, a streaming platform enables the implementation of event-streaming systems (ESSs), which include not only complex queues but also pipelines that transform and react to the streams of data. By analysing the centralised data streams, one can evaluate the Quality-of-Service of other systems and components that produce or consume those streams. We consider the exploitation of probabilistic model checking as a performance monitoring technique for ESSs. Probabilistic model checking is a mature, powerful verification technique with successful application in performance analysis. However, an ESS may contain quantitative parameters that are determined by event streams observed over a certain period of time. In this paper, we present a novel theoretical framework called QV4M (meaning "quantitative verification for monitoring") for monitoring ESSs, which is based on two recent methods of probabilistic model checking. QV4M treats the parameters in a probabilistic system model as random variables and infers the statistical significance of the probabilistic model checking output. We also present an empirical evaluation of the computational time and data cost of QV4M.
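The idea of treating model parameters as random variables estimated from an observed stream can be sketched as follows (a hedged illustration; the model, names, and posterior below are assumptions, not the QV4M framework):

```python
import random

# Hypothetical retry model: a request succeeds with unknown probability p,
# else retries; P(success within retries+1 attempts) = 1 - (1-p)**(retries+1).
def check(p, retries=2):
    """Closed-form model-checking result for a fixed parameter value p."""
    return 1 - (1 - p) ** (retries + 1)

def monitor(successes, failures, samples=20000, seed=7):
    """p is uncertain: sample it from a Beta posterior fitted to the observed
    event stream, push each sample through the checker, and report a 95%
    credible interval for the verification result."""
    rng = random.Random(seed)
    results = sorted(check(rng.betavariate(successes + 1, failures + 1))
                     for _ in range(samples))
    return results[int(0.025 * samples)], results[int(0.975 * samples)]

lo, hi = monitor(successes=90, failures=10)
print(f"95% interval for P(success within 3 tries): [{lo:.4f}, {hi:.4f}]")
```

The interval, rather than a point probability, is what lets a monitor report how statistically significant a threshold violation is.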
Modern software-intensive systems often interact with an environment whose behavior changes over time, often unpredictably. The occurrence of changes may jeopardize their ability to meet the desired requirements. It is therefore desirable to design software so that it can self-adapt to the occurrence of changes with limited, or even without, human intervention. Self-adaptation can be achieved by bringing software models and model checking to run time, to support perpetual automatic reasoning about changes. Once a change is detected, the system itself can predict whether requirements violations may occur and enable appropriate counter-actions. However, existing mainstream model checking techniques and tools were not conceived for run-time usage; hence they hardly meet the constraints imposed by on-the-fly analysis in terms of execution time and memory usage. This paper addresses this issue and focuses on perpetual satisfaction of non-functional requirements, such as reliability or energy consumption. Its main contribution is the description of a mathematical framework for run-time efficient probabilistic model checking. Our approach statically generates a set of verification conditions that can be efficiently evaluated at run time as soon as changes occur. The proposed approach also supports sensitivity analysis, which enables reasoning about the effects of changes and can drive effective adaptation strategies.
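The design-time/run-time split described above can be sketched with a toy parametric model (an assumption for illustration, not the paper's toolchain): the expensive symbolic work is done once offline, leaving only a cheap formula to evaluate whenever monitored parameters change.

```python
# Toy parametric DTMC: a service succeeds with probability p, fails
# permanently with probability f, and otherwise retries. Solving
#     x = p + (1 - p - f) * x
# offline yields the closed-form verification condition below.
def reliability(p, f):
    """Precomputed closed form for P(eventually succeed)."""
    return p / (p + f)

# Run-time loop: each parameter update costs one division, not a full
# model-checking run.
for p, f in [(0.90, 0.05), (0.80, 0.05), (0.70, 0.10)]:
    print(f"p={p:.2f} f={f:.2f} -> reliability {reliability(p, f):.3f}")
```

Sensitivity analysis then amounts to differentiating the same closed form with respect to p or f, which is equally cheap at run time.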
This paper presents our approach to the quantitative modeling and analysis of highly (re)configurable systems, such as software product lines. Different combinations of the optional features of such a system give rise to combinatorially many individual system variants. We use a formal modeling language that allows us to model systems with probabilistic behavior, possibly subject to quantitative feature constraints, and able to dynamically install, remove or replace features. More precisely, our models are defined in the probabilistic feature-oriented language QFLan, a rich domain specific language (DSL) for systems with variability defined in terms of features. QFLan specifications are automatically encoded in terms of a process algebra whose operational behavior interacts with a store of constraints, which makes it possible to separate system configuration from system behavior. The resulting probabilistic configurations and behavior converge seamlessly in a semantics based on discrete-time Markov chains, thus enabling quantitative analysis. Our analysis is based on statistical model checking techniques, which allow us to scale to larger models than precise probabilistic analysis techniques can handle. The analyses we can conduct range from the likelihood of specific behavior to the expected average cost, in terms of feature attributes, of specific system variants. Our approach is supported by a novel Eclipse-based tool which includes state-of-the-art DSL utilities for QFLan based on the Xtext framework, as well as analysis plug-ins to seamlessly run statistical model checking analyses. We provide a number of case studies that have driven and validated the development of our framework.
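The kind of attribute-based analysis mentioned above, such as the expected average cost of system variants, can be sketched by simulation (an illustrative toy only; QFLan's DSL, constraint store, and semantics are far richer):

```python
import random

# Hypothetical feature costs; in QFLan these would be feature attributes.
FEATURE_COST = {"base": 5.0, "gps": 3.0, "camera": 4.0}

def simulate(rng):
    """One probabilistic run: always install 'base', then install each
    optional feature independently with probability 0.5."""
    cost = FEATURE_COST["base"]
    for feat in ("gps", "camera"):
        if rng.random() < 0.5:
            cost += FEATURE_COST[feat]
    return cost

def expected_cost(runs=50000, seed=1):
    """Statistical estimate of the expected total cost over system variants."""
    rng = random.Random(seed)
    return sum(simulate(rng) for _ in range(runs)) / runs

print(f"estimated expected cost: {expected_cost():.2f}")
```

The exact value here is 5 + 0.5·3 + 0.5·4 = 8.5; simulation trades that exactness for scalability on models too large for precise probabilistic analysis, which is the point made in the abstract.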