Virtual prototypes (VPs) are crucial in today's design flow. VPs are predominantly created in SystemC transaction-level modeling (TLM) and are leveraged for early software development and other system-level use cases. Recently, virtual prototyping has been introduced for the emerging RISC-V instruction set architecture (ISA) and has become an important piece of the growing RISC-V ecosystem. In this paper, we present enhanced virtual prototyping solutions tailored for RISC-V. The foundation is an advanced open-source RISC-V VP implemented in SystemC TLM and designed as a configurable and extensible platform. It scales from small bare-metal systems to large multi-core systems that run applications on top of the Linux operating system. Based on the RISC-V VP, this paper also discusses advanced VP-based verification approaches and open challenges. In combination, we provide for the first time an integrated and unified overview and perspective on advanced virtual prototyping for RISC-V.
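To make the instruction-set-level modeling idea concrete, the following minimal C++ sketch shows the fetch-decode-execute loop at the heart of such a VP core. It is an invented illustration supporting only two RV32I instructions (ADDI and ADD), not code from the actual open-source RISC-V VP.

```cpp
// Hypothetical sketch of a virtual-prototype CPU core loop, not the
// actual riscv-vp implementation: it decodes and executes two RV32I
// instructions (ADDI, ADD) against a register file and word memory.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Rv32Core {
    uint32_t pc = 0;
    uint32_t regs[32] = {0};          // x0..x31; x0 is hard-wired to zero
    std::vector<uint32_t> mem;        // word-addressed instruction memory

    explicit Rv32Core(std::vector<uint32_t> program) : mem(std::move(program)) {}

    void step() {
        uint32_t insn = mem.at(pc / 4);                  // fetch
        uint32_t opcode = insn & 0x7f;                   // decode fields
        uint32_t rd  = (insn >> 7)  & 0x1f;
        uint32_t rs1 = (insn >> 15) & 0x1f;
        uint32_t rs2 = (insn >> 20) & 0x1f;
        int32_t  imm = static_cast<int32_t>(insn) >> 20; // I-type, sign-extended
        switch (opcode) {                                // execute
            case 0x13: regs[rd] = regs[rs1] + imm;       break; // ADDI
            case 0x33: regs[rd] = regs[rs1] + regs[rs2]; break; // ADD
            default:   /* unimplemented in this sketch */ break;
        }
        regs[0] = 0;                                     // keep x0 zero
        pc += 4;
    }
};

int main() {
    // addi x1, x0, 5 ; addi x2, x0, 7 ; add x3, x1, x2
    Rv32Core core({0x00500093, 0x00700113, 0x002081b3});
    for (int i = 0; i < 3; ++i) core.step();
    std::printf("x3 = %u\n", core.regs[3]);              // prints 12
    return 0;
}
```

A full VP would wrap such a core in an sc_module and issue memory accesses as TLM transactions; this sketch keeps only the ISS loop.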
PREASC. Goli, Mehran; Drechsler, Rolf
ACM Transactions on Design Automation of Electronic Systems, 10/2020, Volume 25, Issue 5
Journal Article, Peer-reviewed
The increasing functionality of electronic systems due to the constant evolution of market requirements makes the non-functional aspects of such systems (e.g., energy consumption, area overhead, or performance) a major concern in the design process. Approximate computing is a promising way to optimize these criteria by trading accuracy within acceptable limits. Since the cost of applying significant structural changes to a given design increases with the stage of development, the optimization solution needs to be incorporated into the design as early as possible. For the early design entry, modeling hardware at the Electronic System Level (ESL) using the SystemC language is nowadays widely used in industry. To apply approximation techniques to optimize a given SystemC design, designers need to know which parts of the design can be approximated. However, identifying these parts is a crucial and non-trivial starting point of approximate computing, as the incorrect detection of even one critical part as resilient may result in an unacceptable output. This usually requires a significant programming effort by designers, especially when exploring the design space manually.
In this article, we present PREASC, a fully automated framework to identify the resilient portions of a given SystemC design. PREASC is based on a combination of static and dynamic analysis methods along with regression analysis techniques (a fast machine learning method providing an accurate function estimation). Once the resilient portions are identified, an approximation degree analysis is performed to determine the maximum error rate that each resilient portion can tolerate. Subsequently, the maximum number of resilient portions that can be approximated at the same time is reported to designers at different granularity levels. The effectiveness of our approach is evaluated using several standard SystemC benchmarks from various domains.
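As a rough illustration of the resilience-identification idea (not PREASC's actual algorithm), the following C++ sketch injects an error into one candidate portion of a toy kernel and classifies the portion against an assumed error tolerance; the kernel, noise value, and threshold are all invented.

```cpp
// Hypothetical illustration of resilience analysis by error injection:
// a candidate statement's result is perturbed, and the portion is
// classified as resilient if the output error stays within tolerance.
#include <cmath>
#include <cstdio>

// Toy kernel standing in for a SystemC process: y = (a * b) + (a - b).
// 'noise' perturbs the multiply, the candidate portion under analysis.
double kernel(double a, double b, double noise) {
    double t1 = a * b + noise;   // candidate portion (perturbed)
    double t2 = a - b;           // other portion (left exact)
    return t1 + t2;
}

int main() {
    const double tolerance = 0.1;    // assumed acceptable relative error
    double worst = 0.0;
    for (double a = 1.0; a <= 8.0; a += 1.0) {
        for (double b = 1.0; b <= 8.0; b += 1.0) {
            double golden = kernel(a, b, 0.0);
            double faulty = kernel(a, b, 0.05);   // injected error
            worst = std::fmax(worst,
                              std::fabs(faulty - golden) / std::fabs(golden));
        }
    }
    std::printf("worst relative error: %.4f -> %s\n", worst,
                worst <= tolerance ? "resilient" : "critical");
    return 0;
}
```

PREASC replaces such exhaustive sweeps with static/dynamic analysis and regression models; the sketch only shows what "resilient within an error bound" means operationally.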
Nowadays, prefix adders are widely used in different designs and applications due to their flexible carry propagation hardware. The variety of these adders makes it possible to find the best choice based on the design parameters, e.g., area, delay, or the number of wiring tracks. Proving the correctness of prefix adders is an important task after their design, as they usually have a complex and error-prone structure. It has been experimentally shown that Binary Decision Diagrams (BDDs) are very efficient in the formal verification of adders, including prefix adders. However, this has never been proven theoretically. In this paper, we calculate the computational complexity of proving the correctness of prefix adders using BDDs. Based on these calculations, we show that the formal verification of prefix adders can be done in time polynomial in n, where n is the size of the adder (i.e., the number of bits per input). We also compare the theoretical calculations with the experimental results to clarify the differences between the complexities in theory and practice.
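For readers unfamiliar with prefix adders, the following self-contained C++ sketch shows the generate/propagate prefix computation in a Kogge-Stone-style 8-bit adder; it is an illustrative model, not one of the designs verified in the paper.

```cpp
// Minimal sketch of the parallel-prefix adder idea (Kogge-Stone-style
// scan), assuming 8-bit operands; illustrative only.
#include <cstdint>
#include <cstdio>

uint32_t prefix_add(uint8_t a, uint8_t b) {
    const int n = 8;
    // Level 0: per-bit generate and propagate signals.
    uint32_t g = a & b;   // bit i generates a carry
    uint32_t p = a ^ b;   // bit i propagates an incoming carry
    // Prefix combination: (g, p) o (g', p') = (g | (p & g'), p & p').
    for (int d = 1; d < n; d <<= 1) {
        g = g | (p & (g << d));
        p = p & (p << d);
    }
    // Carry into bit i is the prefix generate of bits i-1..0 (carry-in 0).
    uint32_t carries = (g << 1) & 0xff;
    uint32_t sum = (a ^ b) ^ carries;
    return sum | (((g >> (n - 1)) & 1u) << n);   // attach carry-out
}

int main() {
    for (unsigned a = 0; a < 256; a += 37)
        for (unsigned b = 0; b < 256; b += 53)
            if (prefix_add(a, b) != a + b)
                std::printf("mismatch at %u + %u\n", a, b);
    std::printf("spot-check done\n");
    return 0;
}
```

The log2(n) levels of the prefix scan are exactly the "flexible carry propagation hardware" the abstract refers to; different prefix graphs (Sklansky, Brent-Kung, etc.) trade area, delay, and wiring differently.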
Approximate circuits have become ubiquitous in error-resilient applications. Given their widespread use, formal verification of these approximate designs is essential. Recently, there have been attempts that use formal verification techniques based on Boolean Satisfiability (SAT) methods and Binary Decision Diagrams (BDDs) to perform formal error analysis. These methods guarantee that the designed approximate circuits satisfy the error metrics, e.g., Mean Square Error (MSE). However, in certain approximate circuits tailored for a particular input data distribution, performing only error analysis is not enough. In addition to error metrics, the functional specifications of these approximate circuits should also be satisfied. Otherwise, it can lead to a larger-than-expected deterioration in output quality. In this work, for the first time, we alleviate this issue by proposing an automated formal verification methodology, cecApprox, that guarantees the approximate circuit matches the functional specification. This is crucial for circuits that are tailored for a particular input distribution. We show that in designs where the structure is preserved, a corrected circuit can be generated from the approximate circuit. This corrected circuit can then be formally verified by performing combinational equivalence checking against the exact circuit as the golden reference. If the corrected circuit is equivalent to the exact circuit, it guarantees that the approximate circuit matches the given functional specification in addition to the error metrics. Hence, the cecApprox methodology complements formal error analysis methods. We show the efficacy of cecApprox by generating and formally verifying millions of different 8-bit and 16-bit structurally preserved approximate adders and multipliers using combinational equivalence checking.
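The following C++ toy illustrates the equivalence-checking step: an invented approximate 4-bit adder, a hand-written "corrected" version, and an exhaustive miter loop standing in for the SAT-based combinational equivalence check that a real flow would use.

```cpp
// Hedged sketch of combinational equivalence checking by exhaustive
// miter evaluation, feasible here because the inputs are only 8 bits.
// The approximate adder drops the carry chain of the two low bits,
// and the "corrected" circuit restores it (illustrative stand-in for
// cecApprox's structural correction).
#include <cstdint>
#include <cstdio>

uint32_t exact_add(uint8_t a, uint8_t b) { return (a + b) & 0x1f; } // sum + carry-out

// Approximate adder: lower 2 bits computed carry-free (OR-based).
uint32_t approx_add(uint8_t a, uint8_t b) {
    uint32_t low  = (a | b) & 0x3;                       // carry-free low part
    uint32_t high = (((a >> 2) + (b >> 2)) & 0x7) << 2;  // high part, carry-in 0
    return low | high;
}

// Corrected circuit: the dropped low-part carry is re-added.
uint32_t corrected_add(uint8_t a, uint8_t b) {
    uint32_t lowsum = (a & 0x3) + (b & 0x3);             // exact low part + carry
    uint32_t high   = ((a >> 2) + (b >> 2) + (lowsum >> 2)) << 2;
    return ((lowsum & 0x3) | high) & 0x1f;
}

int main() {
    std::printf("approx 3+1 = %u (exact %u)\n",
                (unsigned)approx_add(3, 1), (unsigned)exact_add(3, 1));
    bool equivalent = true;
    for (unsigned a = 0; a < 16 && equivalent; ++a)
        for (unsigned b = 0; b < 16; ++b)
            if (exact_add(a, b) != corrected_add(a, b)) { // miter output
                std::printf("counterexample: a=%u b=%u\n", a, b);
                equivalent = false;
                break;
            }
    std::printf(equivalent ? "corrected == exact\n" : "not equivalent\n");
    return 0;
}
```

If the corrected circuit is equivalent to the golden adder, the approximate circuit is known to deviate from the specification only in the deliberately approximated part, which is the guarantee the abstract describes.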
A novel method for evolutionary material development by using high-throughput processing is established. For this high-throughput approach, spherical micro samples are used, which have to be characterized, up-scaled to the macro level, and evaluated. For the evaluation of the microstructural state of the micro samples and the associated micro-properties, fast characterization methods based on physical testing methods such as calorimetry and universal microhardness measurements are developed. These measurements result in so-called descriptors. The increase in throughput during calorimetric characterization using differential scanning calorimetry is achieved by accelerating the heating rate. Consequently, descriptors are essentially measured in a non-equilibrium state. The maximum heating rate is limited by the possibility to infer the microstructural state from the calorimetric results. The quality of the measured descriptors for micro samples has to be quantified and analyzed depending on the heating rate. In this work, the first results of measurements of calorimetric descriptors at increased heating rates for 100Cr6 will be presented and discussed. The results of low and high heating rates will be compared and analyzed using additional microhardness measurements. Furthermore, the validation of the method regarding its suitability for evolutionary material development includes up-scaling to the macro level; therefore, different sample masses will be investigated using micro and macro samples during calorimetry.
Hardware verification requires a lot of effort. A recent study showed that, on average, there are more verification engineers working on a project than design engineers. Hence, one of the biggest challenges in design and verification today is to find new ways to increase productivity. For software development, the agile methodology as an incremental approach has been proposed and is heavily used. Behavior Driven Development (BDD) as an agile technique additionally enables a direct link to natural-language-based testing. In this article, we show how BDD can be extended to make it viable for hardware design. In addition, we present a two-fold strategy that allows specifying textual acceptance tests and textual formal properties. Finally, this strategy is complemented by methods to generalize tests to properties, and to enhance design understanding by presenting debug and witness scenarios in natural language.
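As a sketch of the BDD-for-hardware idea, the following invented example pairs a Gherkin-style scenario (in the comment) with executable C++ steps against a toy design model; the scenario text and the Counter4 model are assumptions for illustration only, not the paper's tooling.

```cpp
// Illustrative sketch of a natural-language acceptance test driving a
// hardware model, in the spirit of BDD for hardware.
//
//   Scenario: counter wraps around
//     Given a 4-bit counter initialized to 15
//     When the counter is incremented once
//     Then the counter value is 0
#include <cassert>
#include <cstdint>
#include <cstdio>

struct Counter4 {                       // toy 4-bit counter "design"
    uint8_t value = 0;
    void increment() { value = (value + 1) & 0xf; }
};

int main() {
    Counter4 dut;
    dut.value = 15;                     // Given
    dut.increment();                    // When
    assert(dut.value == 0);             // Then
    std::printf("scenario passed\n");
    return 0;
}
```

Generalizing such a scenario over all initial values yields the corresponding formal property, which is the test-to-property step the abstract mentions.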
Polynomial Formal Verification. Drechsler, Rolf; Mahzoon, Alireza
2022 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 10/2022
Conference Proceeding
Recently, a lot of effort has been put into developing formal verification approaches by both academic and industrial research. In practice, these techniques often give satisfying results for some types of circuits, while they fail for others. A major challenge in this domain is that the verification techniques suffer from unpredictability in their performance. The only way to overcome this challenge is the calculation of bounds for the space and time complexities. If a verification method has polynomial space and time complexities, scalability can be guaranteed.
In this tutorial paper, we review recent developments in formal verification techniques and give a comprehensive overview of Polynomial Formal Verification (PFV). In PFV, polynomial upper bounds hold for the run-time and memory needed during the entire verification task. Thus, correctness under resource constraints can be ensured. We discuss the importance and advantages of PFV in the design flow. Formal methods on the bit-level and the word-level, and their complexities when used to verify different types of circuits, such as adders, multipliers, or ALUs, are presented. The current status of this new research field and directions for future work are discussed.
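To give a flavor of the kind of argument PFV makes, here is a coarse worked bound for BDD-based verification of an n-bit ripple-carry adder; the assumptions (interleaved variable ordering, O(n)-node intermediate BDDs) are the standard ones, and the literature reports tighter bounds.

```latex
% Coarse PFV-style bound (assumptions: ripple-carry adder, interleaved
% variable ordering; not tight):
% - every intermediate BDD has $O(n)$ nodes,
% - one apply operation on BDDs $f$, $g$ costs $O(|f| \cdot |g|)$,
% - the netlist contains $O(n)$ gates, so
\[
  T_{\text{verify}}
    \;\in\; O\bigl(\underbrace{n}_{\text{gates}} \cdot
                   \underbrace{n \cdot n}_{\text{apply cost}}\bigr)
    \;=\; O(n^{3}),
\]
% which is polynomial in the adder width: exactly the PFV guarantee.
```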
The identification of power-risky test patterns is a crucial task in the design phase of digital circuits. Excessive test power can lead to test failures due to IR-drop, noise, etc. This has to be avoided to prevent yield loss and chip damage. However, the accurate power simulation of all test patterns, both to identify power-risky patterns and to find critical areas within each pattern, is not possible due to run-time and resource constraints. An important task is therefore the selection of a subset of potentially power-risky patterns, which is then simulated in an accurate manner. In this paper, we propose an independent test pattern analysis methodology for integration into an existing industrial design flow. The proposed test pattern analysis technique is a lightweight method based on the cell's Transient Power Activity (TPA) to identify potentially power-risky patterns. The method uses layout and power information to identify critical power activity areas using machine learning techniques. Experiments were performed on open-source benchmarks as well as on an industrial design. The results were correlated with commercial power and IR-drop simulation tools. The proposed methodology was found to be effective in terms of speed and localization of the critical areas for unsafe patterns.
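A minimal C++ sketch of such a lightweight screening step is shown below; the grid binning, power weights, and threshold are invented stand-ins for the layout- and power-based analysis described above, not the paper's TPA formula.

```cpp
// Hypothetical screening step: per-cell toggle counts are weighted by
// a cell power factor, accumulated into layout grid bins, and a test
// pattern is flagged as potentially power-risky if any bin exceeds a
// threshold. Flagged patterns would then go to accurate simulation.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Cell { int x, y; double power; int toggles; }; // layout pos + activity

bool is_power_risky(const std::vector<Cell>& cells, int grid, double threshold) {
    std::vector<double> bins(grid * grid, 0.0);
    for (const Cell& c : cells)                        // accumulate per bin
        bins[c.y * grid + c.x] += c.power * c.toggles;
    double peak = *std::max_element(bins.begin(), bins.end());
    return peak > threshold;                           // localized hot spot?
}

int main() {
    // One pattern's activity over a 2x2 grid; values are made up.
    std::vector<Cell> pattern = {
        {0, 0, 1.0, 3}, {1, 0, 2.0, 8}, {0, 1, 0.5, 2}, {1, 1, 1.5, 1}
    };
    std::printf("pattern is %s\n",
                is_power_risky(pattern, 2, 10.0) ? "power-risky" : "safe");
    return 0;
}
```

Binning by layout position is what allows the method to localize critical areas within a pattern rather than only ranking whole patterns.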