Due to the ever-increasing complexity of embedded and cyber-physical systems, design solutions relying on modelling languages such as the Unified Modelling Language (UML) and the Object Constraint Language (OCL) are receiving increasing attention. Thanks to the recent success of formal verification techniques, UML/OCL models also allow designers to verify and/or check certain properties of a given model in early stages of the design phase. To this end, different approaches for verification and validation have been proposed. In this work, the authors motivate, define, and describe different verification tasks for structural as well as behavioural UML/OCL models that can be solved using solvers for Boolean satisfiability. They describe how these verification tasks can be translated into a symbolic formulation which is afterwards passed to off-the-shelf solvers. The obtained results enable designers to draw conclusions about the correctness of the considered model.
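As a minimal sketch of the underlying idea, the following hypothetical example encodes a toy OCL-style invariant as clauses in conjunctive normal form and checks its consistency by brute force; in practice, an off-the-shelf SAT solver would take the place of the exhaustive search, and the variables and invariant shown are illustrative assumptions, not taken from the work itself.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check. Clauses are lists of DIMACS-style integers:
    a positive literal k means variable k is true, negative means false."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# Hypothetical invariant "every A is linked to exactly one B", encoded over
# two Boolean link variables x1, x2:
#   "at least one link": (x1 or x2), "at most one link": (not x1 or not x2)
invariant = [[1, 2], [-1, -2]]
print(satisfiable(invariant, 2))                  # a consistent model exists
# A contradictory extra requirement "x1 and x2" makes the model inconsistent:
print(satisfiable(invariant + [[1], [2]], 2))
```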
The development of computing machines has found great success in the last decades. But the ongoing miniaturization of integrated circuits will reach its limits in the near future. Shrinking transistor sizes and power dissipation are the major barriers in the development of smaller and more powerful circuits. Reversible logic provides an alternative that may overcome many of these problems in the future. For low-power design, reversible logic offers significant advantages, since zero power dissipation will only be possible if computation is reversible. Furthermore, quantum computation profits from enhancements in this area, because every quantum circuit is inherently reversible and thus requires reversible descriptions. However, since reversible logic is subject to certain restrictions (e.g. fan-out and feedback are not directly allowed), the design of reversible circuits differs significantly from the design of traditional circuits. Nearly all steps in the design flow (like synthesis, verification, or debugging) must be redeveloped so that they become applicable to reversible circuits as well. But research in reversible logic is still in its beginnings; no continuous design flow exists so far. In Towards a Design Flow for Reversible Logic, contributions to a design flow for reversible logic are presented. This includes advanced methods for synthesis, optimization, verification, and debugging. Formal methods like Boolean satisfiability and decision diagrams are thereby exploited. By combining the techniques proposed in the book, it is possible to synthesize reversible circuits representing large functions. Optimization approaches ensure that the resulting circuits are of small cost. Finally, a method for equivalence checking and automatic debugging allows designers to verify the obtained results and helps to accelerate the search for bugs in case of errors in the design. Combining the respective approaches yields a first design flow for reversible circuits of significant size.
TOC: 1. Introduction.- 2. Preliminaries.- 3. Synthesis of Reversible Logic.- 4. Exact Synthesis of Reversible Logic.- 5. Embedding of Irreversible Functions.- 6. Optimization.- 7. Formal Verification and Debugging.- 8. Summary and Conclusions.- References.- Index.
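The defining property of reversible logic, that every gate computes a bijection on its inputs, can be illustrated with the Toffoli gate, a standard reversible gate. The following self-contained sketch is only an illustration of the concept, not code from the book.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target c iff both controls a, b are 1.
    Like every reversible gate, it is a bijection on its input space."""
    return a, b, c ^ (a & b)

# Applying the gate twice restores every input: the gate is its own inverse.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a, b, c in product([0, 1], repeat=3))

# The eight outputs are a permutation of the eight inputs (no information
# loss), which is the property that makes zero power dissipation possible
# in principle.
outputs = {toffoli(a, b, c) for a, b, c in product([0, 1], repeat=3)}
print(len(outputs))  # → 8
```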
In order to overcome the von Neumann bottleneck, the paradigm of in-memory computing has recently emerged. Here, instead of transferring data from the memory to the CPU for computation, the computation is performed directly within the memory. ReRAM, a resistance-based storage device, is a promising technology for this paradigm. Based on ReRAM, the PLiM computer architecture and LiM-HDL, an HDL for specifying PLiM programs, have emerged. In this paper, we first present a novel levelization algorithm for LiM-HDL. Based on this novel algorithm, large circuits can be compiled to PLiM programs. Then, we present a verification scheme for these programs. This scheme is separated into two steps: (1) a proof of purity and (2) a proof of equivalence. Finally, in the experiments, we first apply our levelization algorithm to a well-known benchmark set, where we show that we can generate PLiM programs for large benchmarks for which existing levelization algorithms fail. Then, we apply our proposed verification scheme to these PLiM programs.
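A common levelization scheme assigns each gate a level one greater than the maximum level of its fan-ins, so that gates can be processed level by level. The following sketch illustrates this general idea on a hypothetical netlist; the paper's algorithm for LiM-HDL is more elaborate, and all signal names here are invented for illustration.

```python
def levelize(netlist, primary_inputs):
    """Assign each gate a level = 1 + max level of its fan-ins.
    netlist maps a gate name to the list of signals it reads."""
    level = {pi: 0 for pi in primary_inputs}
    remaining = dict(netlist)
    while remaining:
        # A gate is ready once all of its fan-ins have been levelized.
        ready = [g for g, fanins in remaining.items()
                 if all(f in level for f in fanins)]
        if not ready:
            raise ValueError("combinational cycle or undriven signal")
        for g in ready:
            level[g] = 1 + max(level[f] for f in remaining[g])
            del remaining[g]
    return level

# Hypothetical netlist: g1 = f(a, b), g2 = f(g1, c), g3 = f(a, g2)
levels = levelize({"g1": ["a", "b"], "g2": ["g1", "c"], "g3": ["a", "g2"]},
                  ["a", "b", "c"])
print(levels["g3"])  # → 3
```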
The integration of Design for Testability measures is strictly required when designing complex Integrated Circuits (ICs) to ensure that good testability prevails in the resulting design. By this, a high-quality manufacturing test can be performed, giving a certain level of confidence that no defects have occurred during the manufacturing process which could tamper with the correctness of the functional behavior. However, a high-quality test implies a large test data volume and a high test application time, yielding high test costs. This effect is amplified even more when testing ICs for safety-critical applications like automotive systems or avionics, which enforce a zero-defect policy. Analogously, specific structures for Design for Debug and Diagnosis are introduced, since similar problems exist when debugging complex systems. Finally, Design for Reliability is becoming increasingly important in applications like avionics, since the introduced system typically has to deal with harsh environmental conditions and, hence, the IC has to exhibit a certain level of robustness to withstand them. This paper proposes novel contributions to pave the way for the next generation of ICs, which can be successfully and reliably integrated even in safety-critical applications. In particular, this paper combines formal techniques, such as the Boolean satisfiability problem and bounded model checking, to propose (I) a novel test access mechanism with embedded compression, including an optimization-based retargeting framework, (II) a new hybrid compression architecture to address compression aborts, and (III) an effective fault detection mechanism for single transient faults. The proposed measures are evaluated on industrially relevant benchmark candidates, demonstrating their effectiveness and showing that state-of-the-art techniques are outperformed.
In this paper we propose μRV32 (MicroRV32), an open-source RISC-V platform for education and research. μRV32 integrates several peripherals alongside a configurable 32-bit RISC-V core, interconnected with a generic bus system. It supports bare-metal applications as well as the FreeRTOS operating system. Besides an RTL implementation in the modern SpinalHDL language (μRV32 RTL), we also provide a corresponding binary-compatible Virtual Prototype (VP) implemented in standard-compliant SystemC TLM (μRV32 VP). In combination, the VP and RTL descriptions pave the way for advanced cross-level methodologies in the RISC-V context. Moreover, based on a readily available open-source tool flow, μRV32 RTL can be exported into a Verilog description and simulated with the Verilator tool or synthesized onto an FPGA. The tool flow is very accessible and fully supported under Linux. As part of our experiments, we provide a set of ready-to-use application benchmarks and report execution performance results of μRV32 at the RTL, VP and FPGA level, together with proof-of-concept FPGA synthesis statistics for different processor configurations. We believe that our μRV32 platform is a suitable foundation for further research and education purposes due to its open-source nature, its accessible toolchain working under Linux, and its support for small low-priced FPGAs, in combination with a solid feature set.
HDLs are widely used in EDA for abstract specification and synthesis of logic circuits. Despite the popularity and the many benefits of HDL-based synthesis, it has not yet been performed for in-memory computing. Hence, there is a need to design a particular HDL which supplies efficient and compatible descriptions. In this paper, we enable HDL-based synthesis for the Programmable Logic-in-Memory (PLiM) computer architecture. We present LiM-HDL - a Verilog-based HDL - which allows for the detailed description of programs for in-memory computation. Having the description given in LiM-HDL, we propose a synthesis scheme which translates the description into PLiM programs, i.e. a sequence of resistive majority operations. This includes lexical and syntax analysis as well as preprocessing, custom levelization and a compiler. In our experiments, we show the benefits of LiM-HDL compared to classical Verilog-based synthesis. We show in a case-study that LiM-HDL can be used to implement programs with respect to constraints of specific applications such as edge computing in IoT, for which the PLiM computer is of particular interest and where low area is a key requirement. In our case-study, we show that we can reduce the number of ReRAM devices needed for the computation of an encryption module by 69%.
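That classical gates can be expressed through majority operations is what makes translation into sequences of majority-based operations possible at all. The sketch below illustrates this with plain three-input majority; the resistive majority operation used by PLiM differs in its exact operand handling, so this is a conceptual illustration only.

```python
def maj(a, b, c):
    """Three-input majority function, the conceptual core of
    majority-based in-memory logic."""
    return (a & b) | (a & c) | (b & c)

# Classical gates fall out as special cases by fixing one operand, which is
# why a gate-level netlist can be rewritten into majority operations:
assert all(maj(a, b, 0) == (a & b) for a in (0, 1) for b in (0, 1))  # AND
assert all(maj(a, b, 1) == (a | b) for a in (0, 1) for b in (0, 1))  # OR
print(maj(1, 0, 1))  # → 1
```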
In Test Pattern Generation using Boolean Proof Engines, we give an introduction to ATPG. The basic concept and classical ATPG algorithms are reviewed. Then, the formulation as a SAT problem is considered. As the underlying engine, modern SAT solvers and their use on circuit-related problems are comprehensively discussed. Advanced techniques for SAT-based ATPG are introduced and evaluated in the context of an industrial environment. The chapters of the book cover efficient instance generation, encoding of multiple-valued logic, usage of various fault models, and detailed experiments on multi-million gate designs. The book describes the state of the art in the field, highlights research aspects, and shows directions for future work.
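The essence of SAT-based ATPG is to build a miter of the fault-free and faulty circuit copies, sharing the inputs, and let a solver search for an assignment on which the outputs differ. The following toy sketch uses a hypothetical circuit with a stuck-at-0 fault and exhaustive search in place of the SAT solver.

```python
from itertools import product

def circuit(a, b, c):
    # Hypothetical example circuit: y = (a AND b) OR c
    return (a & b) | c

def faulty(a, b, c):
    # The same circuit with the AND-gate output stuck-at-0
    return 0 | c

# Miter search: find shared inputs on which the two copies disagree.
# Any such assignment is a test pattern detecting the fault.
pattern = next((a, b, c) for a, b, c in product([0, 1], repeat=3)
               if circuit(a, b, c) != faulty(a, b, c))
print(pattern)  # → (1, 1, 0)
```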
The IEEE 1687 Std. (IJTAG) introduces an efficient access methodology based on reconfigurable scan networks to address the ever-increasing complexity of the latest system-on-chips. By invoking this new methodology, the overall test time can be considerably reduced by shortening the length of the scan chains without sacrificing the test quality. IJTAG allows for designing highly complex test networks comprising the latest test structures that enable the high test quality required by today's customer applications. Besides the time overhead induced by the required network (re-)configuration, the access sequence of the instruments itself greatly affects the overall test time. Furthermore, the power characteristics of the complex test facilities have to be taken into account to avoid effects like IR drop during the later test application, which is even more critical in networks with multiple power domains. Consequently, the IJTAG methodology strictly requires an effective test scheduler that considers the individual instruments' constraints and is compatible with recent multi-power-domain chip designs. This work proposes two novel power-aware test scheduling approaches based on pseudo-Boolean optimization and integer linear programming techniques, both seamlessly integrated into a fully automated test scheduling framework for IJTAG test networks, even with multiple power domains. The first proposed optimization scheme determines a locally optimal test schedule and, since the required run-time is manageable, is applicable to networks with more than one thousand instruments. The second optimization scheme determines a globally optimal test schedule, which is most suitable for mid-sized networks, where it clearly outperforms any other existing technique.
•Power-aware test scheduling framework for IJTAG multi-power domain networks is proposed.
•IJTAG network modeled as conjunctive normal form with pseudo-Boolean co-factor power constraints.
•Test scheduling algorithms invoke pseudo-Boolean optimization and integer linear programming techniques.
•A fully-automated framework requires only ICL, netlist and constraint files.
•Framework yields highly optimized yet power-safe test sequences.
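To illustrate the power constraint under which the scheduling is optimized, the following sketch packs hypothetical instrument tests into parallel sessions with a simple first-fit-decreasing heuristic. The proposed framework instead uses pseudo-Boolean optimization and ILP to obtain locally or globally optimal schedules; all instrument names and power figures here are invented.

```python
def schedule(powers, power_budget):
    """Pack instrument tests into parallel sessions so that the summed
    power per session never exceeds the budget (first-fit decreasing)."""
    sessions = []  # each entry: [used_power, [instrument names]]
    for name, p in sorted(powers.items(), key=lambda kv: -kv[1]):
        for s in sessions:
            if s[0] + p <= power_budget:
                s[0] += p
                s[1].append(name)
                break
        else:
            # No existing session has headroom: open a new one.
            sessions.append([p, [name]])
    return [members for _, members in sessions]

# Hypothetical instrument power figures (mW) against a 10 mW budget
plan = schedule({"bist1": 7, "bist2": 5, "sensor": 3, "pll": 2}, 10)
print(len(plan))  # → 2
```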
Modern System-on-Chips (SoCs) are increasingly deployed in critical aspects of our lives. As a consequence, they have access to a large number of secret assets that must be protected against unauthorized access. In order to provide sound security guarantees, an SoC typically has a security architecture with authentication mechanisms to control the access of different Intellectual Properties (IPs) to secret assets. Since the SoC's security architecture cannot be changed after production, it is of utmost importance to detect any security flaws in the design phase. Moreover, to prevent costly fixes in later stages, security validation should start as early as possible. In this paper, we propose a novel approach to validate the security architecture of a given SoC against timing flows using a SystemC-based Virtual Prototype (VP) and a static information flow tracking technique at the system level. Experimental results on two real-world VP-based SoCs demonstrate the scalability and applicability of the proposed approach in identifying timing flows.
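In the spirit of static information flow tracking, the toy check below flags a potential timing flow whenever a branch condition depends on a tainted secret signal, since the two branch paths may take different numbers of cycles. All names and the analysis granularity are hypothetical simplifications of the proposed system-level technique.

```python
def check_timing_flow(branches, tainted):
    """Flag branches whose condition reads a tainted (secret) signal.
    branches maps a branch label to the signals its condition depends on."""
    return {label for label, signals in branches.items()
            if any(s in tainted for s in signals)}

# Hypothetical firmware: an authentication compare branches on a secret key,
# while an LED blink loop only depends on a public counter.
flows = check_timing_flow(
    {"auth_cmp": ["key", "user_input"], "led_blink": ["counter"]},
    tainted={"key"})
print(sorted(flows))  # → ['auth_cmp']
```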
Data science enables the extraction of insights from complex, high-dimensional data and is therefore regarded as a key discipline of our time and a central element of data-intensive research. Its foundation is well-curated, FAIR data, ensured by sustainable research data management and data stewardship. Accordingly, close coordination and systematic interlocking of data science and data stewardship in the research process promise a significant increase in efficiency and maximum value creation from data. The Data Science Center (DSC@UB) of the University of Bremen, established with the support of the State of Bremen, offers the best possible framework for this. As a central, interdisciplinary infrastructure for data-intensive research, it serves as a venue for cross-disciplinary exchange and provides essential services to support researchers. In addition, the DSC enables data scientists and data stewards to be located close to research, supporting scientists throughout the entire data life cycle and in close collaboration. This is intended to sustainably ensure (1) the FAIR provision of data, (2) the greatest possible gain of insights from data, and (3) the optimization of processes and workflows in handling data, thereby creating new opportunities for innovation.