•This review aims to standardize the forecast verification approaches used in deterministic solar forecasting.
•The distribution-oriented forecast verification framework is introduced.
•The RMSE skill score is recommended for universal reporting in solar forecasting studies.
•A series of practical issues arising during verification is reviewed.
The field of energy forecasting has attracted many researchers from different fields (e.g., meteorology, data science, mechanical or electrical engineering) over the last decade. Solar forecasting is a fast-growing subdomain of energy forecasting. Despite several previous attempts, the methods and measures used for verification of deterministic (also known as single-valued or point) solar forecasts are still far from being standardized, making forecast analysis and comparison difficult.
To analyze and compare solar forecasts, the well-established Murphy–Winkler framework for distribution-oriented forecast verification is recommended as a standard practice. This framework examines aspects of forecast quality, such as reliability, resolution, association, or discrimination, and analyzes the joint distribution of forecasts and observations, which contains all time-independent information relevant to verification. To verify forecasts, one can use any graphical display or mathematical/statistical measure to provide insights and summarize the aspects of forecast quality. The majority of graphical methods and accuracy measures known to solar forecasters are specific methods under this general framework.
Additionally, measuring the overall skillfulness of forecasters is also of general interest. The use of the root mean square error (RMSE) skill score based on the optimal convex combination of climatology and persistence methods is highly recommended. By standardizing the accuracy measure and reference forecasting method, the RMSE skill score allows—with appropriate caveats—comparison of forecasts made using different models, across different locations and time periods.
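To make the recommendation concrete, the following is a minimal sketch of an RMSE skill score against a convex combination of climatology and persistence. The grid search over the convex weight and all function names are illustrative choices of ours, not prescriptions from the review:

```python
# Illustrative sketch: RMSE skill score of a forecast against the best
# convex combination of climatology (mean of observations) and
# persistence (previous observation). Names and weight search are ours.

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def skill_score(forecast, obs):
    clim = sum(obs) / len(obs)          # climatology: long-term mean
    climatology = [clim] * (len(obs) - 1)
    persistence = obs[:-1]              # persistence: last observed value
    target = obs[1:]
    fcst = forecast[1:]
    # choose the convex weight w in [0, 1] minimizing the reference RMSE
    ref_rmse, _ = min(
        (rmse([w * p + (1 - w) * c for p, c in zip(persistence, climatology)],
              target), w)
        for w in [i / 100 for i in range(101)]
    )
    return 1 - rmse(fcst, target) / ref_rmse
```

A score of 1 indicates a perfect forecast; a score at or below 0 indicates no skill over the combined reference.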
Autonomous robotic systems are complex, hybrid, and often safety critical; this makes their formal specification and verification uniquely challenging. Though commonly used, testing and simulation alone are insufficient to ensure the correctness of, or provide sufficient evidence for the certification of, autonomous robotics. Formal methods for autonomous robotics have received some attention in the literature, but no resource provides a current overview. This article systematically surveys the state of the art in formal specification and verification for autonomous robotics. Specifically, it identifies and categorizes the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotics.
Pthread-style multithreaded programs feature rich thread communication mechanisms, such as shared variables, signals, and broadcasts. In this article, we consider the automated verification of such programs, where an unknown number of threads execute a given finite-data procedure in parallel. Such procedures are typically obtained as predicate abstractions of recursion-free source code written in C or Java. Many safety problems over finite-data replicated multithreaded programs are decidable via a reduction to the coverability problem in certain types of well-ordered infinite-state transition systems. On the other hand, in full generality, this problem is Ackermann-hard, which seems to rule out efficient algorithmic treatment. We present a novel, sound, and complete yet empirically efficient solution. Our approach is to judiciously widen the original set of coverability targets by configurations that involve fewer threads and are thus easier to decide, and whose exploration may well be sufficient: if they turn out uncoverable, so are the original targets. To soften the impact of "bad guesses" (configurations that turn out coverable), the exploration is accompanied by a parallel engine that generates coverable configurations; none of these is ever selected for widening. Since its job is merely to prevent bad widening choices, such an engine need not be complete for coverability analysis, which enables a range of existing partial (e.g., nonterminating) techniques. We present extensive experiments on multithreaded C programs, including device driver code from FreeBSD, Solaris, and Linux distributions. Our approach outperforms existing coverability methods by orders of magnitude.
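For readers unfamiliar with the coverability problem the reduction targets, here is a sketch of the classic backward coverability algorithm for a vector addition system (configurations as counter vectors, as in counter abstraction). This is the textbook procedure the reduction relies on, not the authors' widening refinement:

```python
# Sketch: backward coverability for a vector addition system. A transition
# (pre, post) fires from v when v >= pre, moving to v - pre + post. We ask
# whether some state covering `target` is reachable from `init`.

def pre_min(target, pre, post):
    # minimal configuration that can fire (pre -> post) and then cover target
    return tuple(max(p, t - (q - p)) for t, p, q in zip(target, pre, post))

def covers(v, u):
    return all(a >= b for a, b in zip(v, u))

def coverable(init, transitions, target):
    # backward fixpoint over minimal elements of the upward-closed target set;
    # termination follows from Dickson's lemma (vectors over naturals are a wqo)
    frontier = {tuple(target)}
    seen = set(frontier)
    while frontier:
        nxt = set()
        for u in frontier:
            if covers(init, u):
                return True
            for pre, post in transitions:
                m = pre_min(u, pre, post)
                if not any(covers(m, s) for s in seen):
                    nxt.add(m)
        seen |= nxt
        frontier = nxt
    return False
```

For example, with one counter per control location, a transition moving a thread from location 0 to location 1 is ((1, 0), (0, 1)); from a single initial thread, (0, 1) is coverable but (0, 2) is not.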
Survey on software defect prediction techniques
Mahesh Kumar Thota; Francis H Shajin; P. Rajesh
International Journal of Applied Science and Engineering, 01/2020, Volume 17, Issue 4
Journal Article, Peer-reviewed
Recent advancements in technology have raised the requirements of hardware and software applications. Along with this technical growth, software industries have faced drastic growth in the demand for software for several applications. For any software industry, developing good-quality software and maintaining its quality for end users is considered the most important task for industrial growth. To achieve this, software engineering plays an important role. Software applications are developed through computer programming, where code is written for a desired task. This code often contains faulty instances that can lead to buggy software caused by software defects. In the field of software engineering, software defect prediction is considered a key task for maintaining software quality. Defect prediction results provide a list of defect-prone source code artefacts, so that quality assurance teams can effectively allocate limited resources to validating software products by putting more effort into the defect-prone source code. As software projects grow larger, defect prediction techniques will play an important role in supporting developers and speeding up time to market with more reliable software products. One of the most exhausting and expensive parts of embedded software development is the process of finding and fixing defects. Due to complex infrastructure, magnitude, cost, and time limitations, monitoring and fulfilling quality requirements is a big challenge, especially in automotive embedded systems. However, meeting superior product quality and reliability is mandatory. Hence, high importance is given to verification and validation (V&V). Software testing is an integral part of software V&V, focused on ensuring accurate functionality and long-term reliability of software systems.
At the same time, software testing requires as much effort, cost, infrastructure, and expertise as development, and these costs and efforts rise further in safety-critical software systems. Therefore, a good testing strategy is essential for any industry with high software development costs. In this work, we develop an efficient approach for software defect prediction using soft-computing-based machine learning techniques, which helps to select optimal features and learn them efficiently.
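To make the defect prediction task concrete, here is a minimal, self-contained sketch of a classifier trained on static-code metrics. The metrics (lines of code, cyclomatic complexity), the toy data, and the choice of logistic regression are illustrative assumptions of ours, not the soft-computing method proposed in the paper:

```python
# Illustrative sketch: predicting defect-prone modules from static metrics
# with a tiny logistic regression trained by stochastic gradient descent.
# Features are assumed pre-scaled to roughly [0, 1].
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.1):
    # samples: list of ((loc, cyclomatic_complexity), defective) pairs
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y                        # gradient of the log-loss
            w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5
```

In practice, the predicted defect-prone modules would be ranked and handed to the quality assurance team, which validates them first.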
jPredictor is a tool for detecting concurrency errors in Java programs. The Java program is instrumented to emit property-relevant events at runtime and then executed. The resulting execution trace is collected and analyzed by jPredictor, which extracts a causality relation sliced using static analysis and refined with lock-atomicity information. The resulting abstract model, a hybrid of a partial order and atomic blocks, is then exhaustively analyzed against the property, and errors with counter-examples are reported to the user. Thus, jPredictor can "predict" errors that did not happen in the observed execution but could have happened under a different thread scheduling. The analysis technique employed in jPredictor is fully automatic, generic (works for any trace property), and sound (produces no false alarms), but it is incomplete (it may miss errors). Two common types of errors are investigated in this paper: data races and atomicity violations. Experiments show that jPredictor is precise (in its predictions), effective, and efficient. After only one execution of the code producing them, jPredictor found all the errors reported by other tools. It also found errors missed by other tools, including static race detectors, as well as previously unknown errors in popular systems such as Tomcat and the Apache FTP server.
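The core idea of predicting races from one trace can be sketched with vector clocks over a happens-before relation. This is a much-simplified illustration in the spirit of jPredictor, not its actual sliced-causality algorithm; the event format is an assumption of ours:

```python
# Sketch: predict data races from a single observed trace. Two accesses to
# the same variable race if at least one is a write and neither happens
# before the other (here, ordering comes only from lock release/acquire).
from collections import defaultdict

def find_races(trace, nthreads):
    # trace: list of (tid, op, target), op in {"acq", "rel", "rd", "wr"}
    clocks = [[0] * nthreads for _ in range(nthreads)]  # vector clock per thread
    lock_release = {}                                   # lock -> clock at last release
    access = defaultdict(list)                          # var -> (tid, op, clock)
    races = []
    for tid, op, tgt in trace:
        vc = clocks[tid]
        vc[tid] += 1
        if op == "acq" and tgt in lock_release:
            clocks[tid] = vc = [max(a, b) for a, b in zip(vc, lock_release[tgt])]
        elif op == "rel":
            lock_release[tgt] = list(vc)
        elif op in ("rd", "wr"):
            for otid, oop, ovc in access[tgt]:
                # earlier event happens before this one iff ovc[otid] <= vc[otid]
                concurrent = otid != tid and ovc[otid] > vc[otid]
                if concurrent and "wr" in (op, oop):
                    races.append((tgt, otid, tid))
            access[tgt].append((tid, op, list(vc)))
    return races
```

Unprotected writes from two threads are flagged even if the observed schedule happened to serialize them, which is the "prediction" aspect; properly lock-protected accesses are ordered by release/acquire and not reported.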
Formal methods are mathematically based techniques for the rigorous development of software-intensive systems. The railway signaling domain is a field in which formal methods have traditionally been applied, with several success stories. This article reports on a mapping study that surveys the landscape of research on applications of formal methods to the development of railway systems. Following the guidelines of systematic reviews, we identify 328 relevant primary studies and extract information about their demographics, the characteristics of the formal methods used, and railway-specific aspects. Our main results are as follows: (i) the 328 primary studies relevant to our scope were published between 1989 and 2020, with 44% published during the last 5 years and 24% involving industry; (ii) the majority of studies are evaluated through Examples (41%) and Experience Reports (38%), while full-fledged Case Studies are limited (1.5%); (iii) model checking is the most commonly adopted technique (47%), followed by simulation (27%) and theorem proving (19.5%); (iv) the dominant languages are UML (18%) and B (15%), and frequently used tools are ProB (9%), NuSMV (8%), and UPPAAL (7%), though a diverse landscape of languages and tools is employed; (v) the majority of systems are interlocking products (40%), followed by models of high-level control logic (27%); and (vi) most of the studies focus on the Architecture (66%) and Detailed Design (45%) development phases. Based on these findings, we highlight current research gaps and expected actions, in particular the need to focus on more empirically sound research methods, such as Case Studies and Controlled Experiments, and to lower the degree of abstraction by applying formal methods and tools to development phases that are closer to software development. Our study contributes an empirically based perspective on the future of research and practice in formal methods applications for railways.
It can be used by formal methods researchers to better focus their scientific inquiries, and by railway practitioners for an improved understanding of the interplay between formal methods and their specific application domain.
Self-adaptation enables software to execute successfully in dynamic, unpredictable, and uncertain environments.
Control theory provides a broad set of mathematically grounded techniques for adapting the behavior of dynamic systems. While it has been applied to specific software control problems, it has proved difficult to define methodologies allowing non-experts to systematically apply control techniques to create adaptive software. These difficulties arise because computer systems are usually non-linear, with varying workloads and heterogeneous components, making it difficult to model software as a dynamic system; i.e., by means of differential or difference equations.
This paper proposes a broad scope methodology for automatically constructing both an approximate dynamic model of a software system and a suitable controller for managing its non-functional requirements. Despite its generality, this methodology provides formal guarantees concerning the system's dynamic behavior by keeping its model continuously updated to compensate for changes in the execution environment and effects of the initial approximation.
We apply the methodology to three case studies, demonstrating its generality by tackling different domains (and different non-functional requirements) with the same approach. Being broadly applicable and fully automated, this methodology may allow the adoption of control theoretical solutions (and their formal properties) for a wide range of software adaptation problems.
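The loop of identifying an approximate model online and using it to steer a non-functional goal can be illustrated with a toy sketch. The first-order model, the synthetic system, and the update gain below are assumptions of ours, not the paper's methodology:

```python
# Illustrative sketch: adaptive control of a software knob. We estimate the
# system's unknown gain online (keeping the model continuously updated) and
# invert the model to drive the measured output toward a setpoint.

def adapt(setpoint, system, steps=30):
    gain_hat = 1.0      # online estimate of the unknown system gain
    u = 1.0             # actuator value (e.g., allotted server capacity)
    y = 0.0
    for _ in range(steps):
        y = system(u)                          # measure (e.g., throughput)
        gain_hat += 0.5 * (y / u - gain_hat)   # exponential-forgetting update
        u = setpoint / gain_hat                # invert the model
    return y
```

Because the model is re-estimated at every step, the controller compensates for changes in the execution environment and for the error of the initial approximation, which is the property the methodology's formal guarantees rest on.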
This article presents a full formal machine-checked verification of a C program: the OpenSSL implementation of SHA-256. This is an interactive proof of functional correctness in the Coq proof assistant, using the Verifiable C program logic. Verifiable C is a separation logic for the C language, proved sound with respect to the operational semantics for C, connected to the CompCert verified optimizing C compiler.
This article presents a wp-style calculus for obtaining bounds on the expected runtime of randomized algorithms. Its applications include determining the (possibly infinite) expected termination time of a randomized algorithm and proving positive almost-sure termination: does a program terminate with probability one in finite expected time? We provide several proof rules for bounding the runtime of loops and prove the soundness of the approach with respect to a simple operational model. We show that our approach is a conservative extension of Nielson's approach for reasoning about the runtime of deterministic programs. We analyze the expected runtime of some example programs, including the coupon collector's problem, a one-dimensional random walk, and a randomized binary search.
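The coupon collector example mentioned above has a well-known closed-form expected runtime, which such a calculus would bound: collecting all n coupons takes n * H_n draws in expectation, where H_n is the n-th harmonic number. A small exact computation:

```python
# The coupon collector's expected runtime: waiting for the k-th new coupon
# is geometric with success probability (n - k + 1) / n, so
# E[T] = sum_k n / (n - k + 1) = n * (1 + 1/2 + ... + 1/n).
from fractions import Fraction

def expected_draws(n):
    return sum(Fraction(n, n - k) for k in range(n))
```

For instance, with n = 2 coupon types, the expected number of draws is exactly 3, and with n = 3 it is 11/2.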
Protocol security in a composition protocol environment has always been an open problem in the formal analysis and verification of security protocols. As a well-known tool for analyzing and verifying the logical consistency of concurrent systems, SPIN (Simple Promela Interpreter) has been widely used to analyze and verify the security of single protocols, but there has been no dedicated research on verifying protocol security in a composition protocol environment. To solve this problem, we first propose a formal analysis method for composition protocols based on SPIN and give a formal description of protocol operational semantics. We then formalize the attacker model and present a message specification method based on field detection and component recognition to alleviate the state-explosion problem. Finally, the NSB and NSL protocols are used as examples for compositional analysis. It is demonstrated that the proposed method can effectively verify protocol security in a composition protocol environment and enhance the efficiency of composition protocol verification.