Design Verification

Introduction

Charles E. Stroud, ... Yao-Wen Chang, in Electronic Design Automation, 2009

1.2.2 Design verification

Design verification is the most important aspect of the product development process illustrated in Figures 1.3 and 1.5, consuming as much as 80% of the total product development time. The intent is to verify that the design meets the system requirements and specifications. Approaches to design verification consist of (1) logic simulation/emulation and circuit simulation, in which the detailed functionality and timing of the design are checked by means of simulation or emulation; (2) functional verification, in which functional models describing the functionality of the design are developed and checked against the behavioral specification of the design, without detailed timing simulation; and (3) formal verification, in which the functionality of the design is checked against a "golden" model or specified properties by means of rigorous logical reasoning rather than input stimuli. Formal verification includes property checking (or model checking), in which the design is checked against presumed "properties" specified in the functional or behavioral model (e.g., that a finite-state machine should not enter a certain state), and equivalence checking, in which the functionality is checked against a "golden" model [Wile 2005]. Although equivalence checking can be used to verify the synthesis results at the lower levels of the EDA flow (denoted "regression test" in Figure 1.5), the original design capture requires property checking.

Simulation-based techniques are the most popular approach to verification, even though they are time-consuming and may be incomplete in finding design errors. Logic simulation is used throughout every stage of logic design automation, whereas circuit simulation is used after physical design. The most commonly used logic simulation techniques are compiled-code simulation and event-driven simulation [Wang 2006]. The former is most effective for cycle-based two-valued simulation; the latter is capable of handling various gate and wire delay models. Although versatile and low in cost, logic simulation is too slow for complex SOC designs or hardware/software co-simulation applications. For more accurate timing information and dynamic behavior analysis, device-level circuit simulation is used. However, limited by its computational complexity, circuit simulation is in general applied only to critical paths, cell library components, and memory analysis.
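
A minimal sketch of the event-driven idea is given below, in Python rather than a production simulator; the gates, net names, and delays are illustrative assumptions. The point is the scheduling discipline: only gates whose inputs change are re-evaluated, and each output change is scheduled after the gate delay.

```python
import heapq

# Minimal sketch of an event-driven gate-level simulator (illustrative only;
# a real simulator adds multi-valued logic, wire delays, inertial-delay
# handling, and scheduling regions). A gate is re-evaluated only when one of
# its inputs changes, and its output change is scheduled after the gate delay.

class EventDrivenSim:
    def __init__(self):
        self.values = {}    # net name -> current logic value
        self.fanout = {}    # net name -> list of gates it drives
        self.events = []    # priority queue of (time, seq, net, value)
        self._seq = 0       # tie-breaker preserving scheduling order

    def add_gate(self, func, inputs, output, delay):
        gate = (func, inputs, output, delay)
        for net in inputs:
            self.fanout.setdefault(net, []).append(gate)

    def schedule(self, time, net, value):
        heapq.heappush(self.events, (time, self._seq, net, value))
        self._seq += 1

    def run(self):
        while self.events:
            time, _, net, value = heapq.heappop(self.events)
            if self.values.get(net) == value:
                continue                        # no change -> no new events
            self.values[net] = value
            for func, inputs, output, delay in self.fanout.get(net, []):
                new_value = func(*(self.values.get(i, 0) for i in inputs))
                self.schedule(time + delay, output, new_value)
        return self.values

# y = NOT(a AND b), with a 2 ns AND gate feeding a 1 ns inverter.
sim = EventDrivenSim()
sim.add_gate(lambda a, b: a & b, ["a", "b"], "n1", delay=2)
sim.add_gate(lambda a: 1 - a, ["n1"], "y", delay=1)
sim.schedule(0, "a", 1)
sim.schedule(0, "b", 1)
print(sim.run())   # final values: {'a': 1, 'b': 1, 'n1': 1, 'y': 0}
```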

In practice, a number of different simulation techniques are used, including high-level simulation through a combination of behavioral modeling and testbenches. Testbenches are behavioral models that emulate the surrounding system environment to provide input stimuli to the design under test and to process the output responses during simulation. RTL models of the detailed design are then developed and verified with the same testbenches that were used for verification of the architectural design, in addition to testbenches that target design errors in the RTL description of the design. With sufficient design verification at this point in the design process, functional vectors can be captured in the RTL simulation and then used for subsequent simulations (regression testing) of the more detailed levels of design, including the synthesized gate-level design, the transistor-level design, and the physical design. These latter levels of design abstraction (gate, transistor, and physical design) provide the ability to perform additional design verification through logic, switch-level, and timing simulations. These three levels of design abstraction also provide the basis for fault models that can be used to evaluate the effectiveness of manufacturing tests.
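
The regression idea can be made concrete with a toy sketch: capture input/response pairs from a higher-level simulation, then replay them against a more detailed model. The two models below are hypothetical stand-ins for RTL-level and gate-level simulations of the same design.

```python
# Toy sketch of capturing functional vectors and reusing them for regression
# testing (both models below are hypothetical stand-ins for cycle-accurate
# simulations of the same design at two levels of abstraction).

def rtl_model(a, b, carry_in):
    """Reference (RTL-level) behavior of a 1-bit full adder."""
    total = a + b + carry_in
    return total & 1, total >> 1              # (sum, carry_out)

def gate_model(a, b, carry_in):
    """More detailed (gate-level) implementation of the same function."""
    p = a ^ b
    return p ^ carry_in, (a & b) | (p & carry_in)

# Capture golden input/response pairs from the higher-level simulation ...
stimuli = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
golden = [(vec, rtl_model(*vec)) for vec in stimuli]

# ... then replay them against the lower-level model (regression test).
for vec, expected in golden:
    assert gate_model(*vec) == expected, f"mismatch for stimulus {vec}"
print("all captured vectors pass on the gate-level model")
```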

The design verification step establishes the quality of the design and ensures the success of the project by uncovering potential errors in both the design and the architecture of the system. The objective of design verification is to simulate all functions as exhaustively as possible while carefully investigating any possibly erroneous behavior. From a designer's standpoint, this step deserves the most time and attention. One of the benefits of high-level HDLs and logic synthesis is to allow the designer to devote more time and concentration to design verification. Because much less effort is required to obtain models that can be simulated but not synthesized, design verification can begin earlier in the design process, which also allows more time for considering optimal solutions to problems found in the design or system. Furthermore, debugging a high-level model is much easier and faster than debugging a lower level description, such as a gate-level netlist.

An attractive attribute of using functional models for design verification (often called functional verification) is that HDL simulation of a collection of models is much faster than simulation of the corresponding gate-level descriptions. Although functional verification verifies only cycle accuracy (rather than timing accuracy), the faster simulation reduces the time required for the design verification process. In addition, a more thorough verification of the design can be performed, which in turn improves the quality of the design and the probability of success of the project as a whole. Furthermore, because these models are smaller and more abstract than netlists describing the gate-level design, the detection, location, and correction of design errors are easier and faster. The reduced memory requirements and increased simulation speed of functional models enable simulation of much larger circuits, making it practical to simulate and verify the complete hardware system to be constructed. As a result, the reduced probability of design changes resulting from errors found during system integration can be factored into the overall design schedule to meet shorter market windows. Design verification is therefore economically significant, because it has a definite impact on time-to-market. Many tools are available to assist in the design verification process, including simulation tools, hardware emulation, and formal verification methods. It is interesting to note that many design verification techniques are borrowed from test technology, because verifying a design is similar to testing a physical product. Furthermore, the test stimuli developed for design verification at the RTL, logical, and physical levels of abstraction are often used, in conjunction with the associated output responses obtained from simulation, as functional tests during the manufacturing process.

Changes in system requirements or specifications late in the design cycle jeopardize the schedule and the quality of the design. Late changes to a design represent one of the two most significant risks to the overall project, the other being insufficient design verification. The quality of the design verification process depends on the ability of the testbenches, the functional vectors, and the designers who analyze the simulated responses to detect design errors. Therefore, any inconsistency observed during simulations at the various levels of design abstraction should be carefully studied to determine whether there are design errors that must be corrected before design verification continues.

Emulation-based verification with FPGAs provides an attractive alternative to simulation-based verification as the gap between logic simulation capacity and design complexity continues to grow. Before the introduction of FPGAs in the 1980s, ASICs were often verified by constructing a breadboard from small-scale integration (SSI) and medium-scale integration (MSI) devices on a wire-wrap board. This became impractical as the complexity and scale of ASICs moved into the VLSI realm. As a result, FPGAs became the primary hardware for emulation-based verification. Although these approaches are costly and may not be easy to use, they improve verification speed by two to three orders of magnitude compared with software simulation. Alternatively, a reconfigurable emulation system (or reconfigurable emulator) that automatically partitions and maps a design onto multiple FPGAs can be used to avoid building a prototype board and can be reused for various designs [Scheffer 2006a, 2006b].

Formal verification techniques are a relatively new paradigm for equivalence checking. Instead of relying on input stimuli, these techniques perform exhaustive proof through rigorous logical reasoning. The primary approaches used for formal verification are binary decision diagrams (BDDs) and Boolean satisfiability (SAT) [Velev 2001]. These approaches, along with other algorithms specific to EDA applications, are extensively discussed in Chapter 4. The BDD approach successively applies Shannon expansion to all variables of a combinational logic function until either the constant function "0" or "1" is reached. The expansion is applied to both the captured design and the synthesized implementation, and the resulting representations are compared to determine their equivalence. Although BDDs provide a compact representation of Boolean functions and support many Boolean operations in polynomial time, the size of a BDD can grow exponentially with the number of inputs, which in practice limits the approach to roughly 100 to 200 inputs. SAT techniques, on the other hand, have been very successful in the verification area in recent years, with the ability to handle million-gate designs, both combinational and sequential.
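
The sketch below illustrates the idea behind Shannon-expansion-based equivalence checking using plain recursion over Python functions; a real BDD package would add node sharing, a fixed variable order, and reduction rules to keep the representation compact.

```python
# Sketch of equivalence checking by recursive Shannon expansion. A real BDD
# package adds hashing, node sharing, a fixed variable order, and reduction
# rules so common subfunctions are stored only once; this plain recursion
# simply expands both functions down to constants and compares them.

def cofactor(f, var, value):
    """Return the cofactor of f with variable `var` fixed to `value`."""
    return lambda assignment: f({**assignment, var: value})

def equivalent(f, g, variables):
    """f and g are equal iff all their terminal cofactors agree."""
    if not variables:
        return f({}) == g({})
    v, rest = variables[0], variables[1:]
    return all(
        equivalent(cofactor(f, v, val), cofactor(g, v, val), rest)
        for val in (False, True)
    )

# Example: a captured "golden" function and a synthesized implementation.
golden = lambda a: (a["x"] and a["y"]) or a["z"]
implementation = lambda a: not ((not a["x"] or not a["y"]) and not a["z"])

print(equivalent(golden, implementation, ["x", "y", "z"]))   # True
```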

URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500084

Introducing Bluetooth Applications

In Bluetooth Application Developer's Guide, 2002

Design Verification

Design verification can be a problem: despite the most precise synthesis, the prototype may not always exhibit the expected RF characteristics in reality. This can involve lengthy diagnosis, component changes, or a board respin if layout issues are suspected to be the cause of the problem.

This can be overcome by developing several prototypes concurrently and by adhering to the selected silicon vendors' design guidelines. If the guidelines state that the device is sensitive to noise, you will know not to run digital lines from the flash next to the Bluetooth device or under the system crystal, and to take decoupling very seriously! Figure 1.17 illustrates the problems caused if a design routes the address and data bus (or any other digital line that changes rapidly) near the crystal or its traces. Any digital signal has fast edges that can easily couple several millivolts into the small-signal output of a crystal; this is not helped by the limited drive available from a crystal. As the crystal output passes through the phase-locked loop (PLL) comparator, a slice level is used to determine whether the crystal output has changed from a zero to a one, or vice versa. If digital coupling produces glitches on the crystal output that are greater than the hysteresis of the comparator, the square-wave output can have glitches or excessive jitter. Glitches can confuse the divider and phase comparator and result in excessive frequency deviation at the output, which will cause variations in the RF output.

Figure 1.17. The Effect of Routing Digital Signals Near a System Crystal

Figure 1.18 illustrates the noise incurred on the output spectrum (top trace) due to insufficient filtering of the power supply to the Bluetooth device. This will have a detrimental effect on system performance and will negatively impact some of the qualification tests for frequency drift and drift rate.

Figure 1.18. The Effect of Poor Filtering on the Bluetooth Output Spectrum

URL: https://www.sciencedirect.com/science/article/pii/B9781928994428500045

Computer-aided Fixture Design Verification

Dr. Yiming (Kevin) Rong, ... Dr. Zhikun Hou, in Advanced Computer-Aided Fixture Design, 2005

4.2.6 Summary

In CAFDV, when locator tolerances are given, machining surface accuracy can be predicted. Conversely, given machining surface tolerance specifications, locator tolerances can be determined to satisfy the requirements. To achieve generality, the Jacobian matrix is adopted to formulate the fixture-workpiece relationship: the locators are represented by equivalent locating points and, for computer implementation, the machining surfaces are represented by their sample points. Six fixture-related tolerances are then defined in terms of these sample points. In locator tolerance assignment, the sensitivity of the machining surfaces to each locating point is used to best distribute tolerances among the locating points.
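
A minimal sketch of the linearized fixture-workpiece relationship is given below; the locator layout, normals, and error values are illustrative assumptions rather than CAFDV data. Each equivalent locating point constrains workpiece motion along its normal, the stacked constraints form the Jacobian relating locator errors to the workpiece pose error, and that pose error is then mapped to normal deviations at the machining-surface sample points.

```python
import numpy as np

# Minimal sketch of the linearized fixture-workpiece relationship (the locator
# layout, normals, and error values are illustrative assumptions, not CAFDV
# data). Each equivalent locating point constrains workpiece motion along its
# normal, n_i . (d + w x r_i) = e_i, so stacking the locators gives
# J @ x = e with x = [d; w] the small rigid-body pose error of the workpiece.

def locator_jacobian(positions, normals):
    """One row [n_i, r_i x n_i] per equivalent locating point."""
    return np.hstack([normals, np.cross(positions, normals)])

def surface_deviation(sample_points, sample_normals, pose_error):
    """Normal deviation of each machining-surface sample point."""
    d, w = pose_error[:3], pose_error[3:]
    motion = d + np.cross(w, sample_points)
    return np.einsum("ij,ij->i", sample_normals, motion)

# A 3-2-1 locating scheme (hypothetical coordinates in mm).
loc_pos = np.array([[10, 10, 0], [90, 10, 0], [50, 80, 0],   # primary plane
                    [0, 20, 10], [0, 70, 10],                 # secondary
                    [30, 0, 10]], dtype=float)                # tertiary
loc_nrm = np.array([[0, 0, 1]] * 3 + [[1, 0, 0]] * 2 + [[0, 1, 0]], dtype=float)

J = locator_jacobian(loc_pos, loc_nrm)
locator_errors = np.array([0.02, -0.01, 0.015, 0.01, -0.02, 0.005])   # mm

# Forward problem: given locator errors, predict machining-surface deviation.
pose = np.linalg.solve(J, locator_errors)
pts = np.array([[20.0, 20.0, 50.0], [80.0, 60.0, 50.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(surface_deviation(pts, nrm, pose))   # predicted normal errors (mm)
```

The inverse problem, assigning locator tolerances from machining surface tolerance specifications, works through the same Jacobian in the opposite direction.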

URL: https://www.sciencedirect.com/science/article/pii/B9780125947510500053

Functional and Nonfunctional Design Verification for Embedded Software Systems

Arnab Ray, ... Chris Martin, in Advances in Computers, 2011

1 Introduction

Model-based engineering [1] with a particular focus on executable visual specifications [2] has emerged as a best practice in embedded applications (aerospace, automotive, medical) where a high degree of confidence [3] needs to be placed in the software. Since modeling notations like Simulink [4] and ASCET [5] are provided with a precise notion of what it means to execute, designs rendered in them may be simulated just like C or Java code. This executability enables the construction of methods and tools with which engineers can check whether their models meet the expectations of proper behavior captured by the requirements.

The ability to perform verification in the design phase itself allows engineers to catch bugs early in the development lifecycle, where they are cheaper and easier to fix [6]. Because of this benefit, there is currently enormous interest in model-based design verification techniques, particularly in those that support formal reasoning [7].

In general, design verification consists of two kinds of activities. On one hand, there is functional verification [8], where the design is checked against requirements that specify how the software must and must not behave (functional requirements). On the other hand, there is nonfunctional verification [9], where the design is checked against requirements that describe the expected impact of the software on the system it is part of and on the organization responsible for it.

Model-based design verification has several challenges:

1.

The functional verification domain has been an active area of investigation for the past 30 years. While inspections [10] and code review [11] remain popular manual verification strategies, the focus of existing research is on automated and semiautomated math-based analysis techniques like model checking [12,13] and theorem proving [14–16].

However, these methods are not without their shortcomings. Model checking suffers from severe scalability issues [17,18], which often make it infeasible to apply to complicated concurrent software systems [19]. Theorem proving requires substantial expert user guidance, which adds considerable overhead to product development time [20].

Another problem is that most of the tools and techniques [21–24] developed for formal verification were created in a research environment where, in order to make the analysis feasible, the input notations were kept rudimentary [25]. In the industrial space, however, popular tools such as Simulink and ASCET support a wide variety of modeling constructs in order to simplify the activity of modeling. Because of this incompatibility in their fundamental approaches, techniques developed in the research community are poorly integrated with commercial modeling tools. As a result, engineers can often do little more than invoke the native simulation features of their design tools and make ad hoc assessments about the correctness of their designs.

2.

Nonfunctional requirements, because they involve properties like resource usage (does the software make appropriate use of scarce bandwidth?) and modifiability (does the structure of the software appropriately support future modifications?), cannot be seen as simply defining a set of valid and invalid traces.

Consequently, nonfunctional requirements typically cannot be formally checked against the system in the way that functional requirements are but instead are reasoned about in a more ad hoc informal way through structured discussions and brainstorming [26–28]. This kind of analysis is often criticized as lacking in rigor.

3.

Functional verification is done on models (specifically, sets of traces), while nonfunctional verification is performed by analyzing the architecture (i.e., what the components of the system are and how they are connected). These two representations of the system, the functional and the nonfunctional models, are not the same; they represent alternate views of the system specification. Sometimes these views are so different from each other that it becomes difficult to argue that the two representations are consistent with each other, that is, that they represent the same system.

The motivation of this chapter is to show how each of these challenges can be addressed in a practical, industrial context. More specifically, our chapter has the following three themes:

1.

Instrumentation-based verification (IBV) [29], an automated, formal testing-based approach to the design verification of functional requirements. In IBV, functional requirements are formalized as monitors, or small models in the same modeling notation in which designs are given. Intuitively, the purpose of these monitors is to observe the behavior of a controller design model and raise a flag whenever the controller's behavior deviates from the associated requirement. Engineers may then check the correctness of a design model by connecting the monitors to the design and running automatically generated tests on the resulting instrumented model to see if any monitors ever raise flags (a toy sketch of such a monitor appears after this list).

2.

Quality attribute-based reasoning (QAR) [30], a semiautomated process for verifying nonfunctional requirements. In QAR, the software architecture of a system is described in terms of responsibilities (units of functionality) and their dependencies. This software architecture, together with a nonfunctional requirement (expressed as a quality attribute scenario), is provided as input to a reasoning framework (RF), which then decides whether the architecture satisfies that requirement. If it does not, the framework provides suggestions, or tactics, to the engineer so that the architecture can be modified to satisfy the particular nonfunctional requirement.

3.

An integrated approach for verifying functional and nonfunctional requirements. In this approach, functional models in notations such as Simulink are used to drive both functional and nonfunctional verification by automatically extracting nonfunctional design models from functional ones. This automated extraction of artifacts used for nonfunctional analysis will ensure that the different system views remain consistent throughout.
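
A toy rendition of the IBV monitor idea from theme 1 is sketched below, in Python rather than a design notation such as Simulink; the controller, the requirement, and the test generation are all hypothetical. The monitor observes the controller's inputs and outputs at each step and raises a flag whenever the associated requirement is violated.

```python
import random

# Toy rendition of IBV (in Python rather than a design notation such as
# Simulink; the controller, the requirement, and the test generation below are
# hypothetical). The monitor observes the controller's inputs and outputs at
# each step and raises a flag when the associated requirement is violated.

class Controller:
    """Hypothetical design model: heater on below 18 C, off above 22 C."""
    def __init__(self):
        self.heater_on = False
    def step(self, temperature):
        self.heater_on = temperature < 18.0 or (self.heater_on and temperature <= 22.0)
        return self.heater_on

def monitor(temperature, heater_on):
    """Requirement: the heater shall never be on when the temperature exceeds 22 C."""
    return not (heater_on and temperature > 22.0)    # False means "flag raised"

# Instrumented model: run generated tests through the design plus its monitor.
random.seed(0)
ctrl = Controller()
for step in range(1000):
    t = random.uniform(10.0, 30.0)
    out = ctrl.step(t)
    assert monitor(t, out), f"requirement violated at step {step}: T = {t:.1f}"
print("no monitor flags raised on this test set")
```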

The rest of the chapter is organized as follows. Section 2 provides a background in model-based development (MBD), and functional and nonfunctional verification. Section 3 discusses IBV, while Section 4 discusses QAR. Section 5 talks about our integrated approach for functional and nonfunctional verification, while Section 6 concludes with future work.

URL: https://www.sciencedirect.com/science/article/pii/B9780123855107000060

Functional Verification

In Top-Down Digital VLSI Design, 2015

5.1 Goals of design verification

The ultimate goal of design verification is to avoid the manufacturing and deployment of flawed designs. Large sums of money are wasted and precious time to market is lost when a microchip does not perform as expected. Any design is, therefore, subject to detailed verification long before manufacturing begins and to thorough testing following fabrication. One can distinguish three motivations (after the late A. Richard Newton):

1.

During specification: "Is what I am asking for what is really needed?"

2.

During design: "Have I indeed designed what I have asked for?"

3.

During testing: "Can I tell intact circuits from malfunctioning ones?"

Any of these questions can refer to two distinct circuit qualities:

Functionality describes what responses a system produces at the output when presented with given stimuli at the input. In the context of digital ICs, we tend to think of logic networks and of package pins but the concept of input-to-output mapping applies to information processing systems in general. Functionality gets expressed in terms of mathematical concepts such as algorithms, equations, impulse responses, tolerance bands for numerical inaccuracies, finite state machines (FSM), and the like, but often also informally.

Parametric characteristics, in contrast, relate to physical quantities measured in units such as Mbit/s, ns, V, μA, mW, pF, etc. that serve to express electrical and timing-related characteristics of an electronic circuit.

Observation 5.1

Experience has shown that a design's functionality and its parametric characteristics are best checked separately, as the goals, methods, and tools involved are quite different.

5.1.1 Agenda

Our presentation is organized accordingly, with section 5.2 presenting the options for specifying a design's functionality. The bulk of the material is then about developing a simulation strategy that maximizes the likelihood of uncovering design flaws. After exposing the puzzling limitations of functional verification in the first part of section 5.3, we will discuss how to prepare test data sets and how to make use of assertions to render circuit models "self-checking". How to organize simulation data and simulation runs is the subject of section 5.4, while section 5.5 gives practical advice on how to code testbenches using HDLs. Neither parametric issues nor the testing of physical parts will be addressed here.

URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000058

Foreword for "Formal Verification: An Essential Toolkit for Modern VLSI Design"

Robert Bentley, in Formal Verification, 2015

The traditional "dynamic testing" approach to design verification has been to:

Create a verification wrapper or "testbench" around all or part of the design

Write focused tests, or generate directed random tests, in order to stimulate the design

Run these tests on an executable software model of the design (typically written at the Register Transfer Level, or RTL for short)

Debug any resulting failures, fix the offending component (either the RTL, the testbench, or the test itself), and re-run tests until the design is "clean" (a toy software rendition of this loop is sketched below)
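
A toy software rendition of this loop, with hypothetical stand-ins for the executable RTL model and the testbench's reference model, might look as follows:

```python
import random

# Toy software rendition of the testbench-and-debug loop described above (the
# design and reference model below are hypothetical stand-ins; a real flow
# would drive an executable RTL model from a directed-random test generator).

def design_under_test(a, b):     # stands in for the executable RTL model
    return (a + b) & 0xFF        # 8-bit adder that wraps on overflow

def reference_model(a, b):       # the testbench's notion of correct behavior
    return (a + b) % 256

def testbench(num_tests=10000, seed=1):
    random.seed(seed)
    failures = []
    for _ in range(num_tests):
        a, b = random.randrange(256), random.randrange(256)
        if design_under_test(a, b) != reference_model(a, b):
            failures.append((a, b))      # collect mismatches for debug
    return failures

print("failures:", testbench())          # [] once the design is "clean"
```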

URL: https://www.sciencedirect.com/science/article/pii/B9780128007273000162

Editor's Preface

E. Griffor, in Handbook of System Safety and Security, 2017

Chapter 7: Reasoning About Safety and Security: The Logic of Assurance—Andrea Piovesean and Edward Griffor

An approach to system safety that emphasizes the work products of the design, verification, and validation activities forces us, in the system's evaluation, to reconstruct the argument, and even then there is no standard against which to assess the types of reasoning used. Some constraints on the argumentation are captured in standards that describe how these activities should be performed, but only implicitly, in the dictates of the standards, and not through explicit constraints on the argument itself.

In this chapter we introduce a framework for developing a safety case that clearly distinguishes between the part of this reasoning that is common to the analysis of any system and the patterns of acceptable reasoning identified in standards for specific classes of cyber-physical systems. Examples of these prescribed patterns of reasoning can be found in ISO 26262, a standard for automotive software safety, and in its predecessors, similar standards in other domains. This framework provides guidance both for constructing the argumentation in a case for system safety and for assessing the soundness of that safety case.

URL: https://www.sciencedirect.com/science/article/pii/B9780128037737000012

Logic and circuit simulation

Jiun-Lang Huang, ... Stephen F. Cauley, in Electronic Design Automation, 2009

8.1.2 Hardware-accelerated logic simulation

As circuit complexity continues to grow, logic simulation becomes the bottleneck of design verification: available logic simulators are too slow for practical system-on-chip (SOC) designs or hardware/software (HW/SW) co-simulation applications. Several types of hardware-accelerated logic simulation techniques have been proposed, including simulation acceleration, (in-circuit) emulation, and hardware prototyping, each of which has its advantages and shortcomings. A modern emulator may be a hybrid of the preceding types or be able to execute several types to meet the requirements of different design stages.

Most emulator systems consist of arrays of reconfigurable logic computing units that are directly or indirectly interconnected. Although the field-programmable gate array (FPGA) is a natural choice for the computing unit, emulation system performance is severely limited by the available input/output (I/O) pins. Indirect interconnect architectures such as full and partial crossbars and time-multiplexed I/O are possible solutions for improving the inter-chip data bandwidth. Other approaches include exploring different FPGA use models and using programmable processors as the reconfigurable computing units.
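
As a back-of-envelope sketch of why time-multiplexed I/O matters (all figures below are illustrative assumptions, not vendor data): when the partition cut produces more logical inter-FPGA signals than there are physical pins, each pin must carry several signals per emulated cycle, and the achievable emulation clock drops roughly in proportion to that multiplexing ratio.

```python
from math import ceil

# Back-of-envelope sketch of time-multiplexed inter-FPGA I/O (all figures are
# illustrative assumptions, not vendor data). With N logical signals crossing
# a partition boundary and P physical pins, each pin carries ceil(N / P)
# signals per emulated cycle, so the emulation clock is roughly the pin clock
# divided by that multiplexing ratio.

def emulation_clock_mhz(logical_signals, physical_pins, pin_clock_mhz):
    mux_ratio = ceil(logical_signals / physical_pins)
    return pin_clock_mhz / mux_ratio, mux_ratio

for cut_signals in (500, 5000, 50000):
    clock, ratio = emulation_clock_mhz(cut_signals, physical_pins=800, pin_clock_mhz=400)
    print(f"{cut_signals:6d} cut signals -> mux ratio {ratio:3d}, ~{clock:6.1f} MHz emulation clock")
```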

URL: https://www.sciencedirect.com/science/article/pii/B9780123743640500151

System on Chip (SoC) Design and Test

Swarup Bhunia, Mark Tehranipoor, in Hardware Security, 2019

Abstract

With the advent of system on chip (SoC), the issues related to design, verification, debug, and testing of SoCs have become more complex and challenging compared with those of a single block or an intellectual property (IP) core. This chapter first introduces the background on very large scale integration (VLSI) testing and the IP-based SoC lifecycle, then briefly discusses the issues associated with design, verification, debug, and test at the SoC level. Thereafter, methodologies for design-for-debug and design-for-testability are presented in this chapter.

URL: https://www.sciencedirect.com/science/article/pii/B9780128124772000083

Software for Medical Systems

Jeff Geisler, in Mission-Critical and Safety-Critical Systems Handbook, 2010

3.6 Design Verification and Validation

Designs are subject to design V&V. Design review is one type of design verification. Others include performing calculations using alternative methods, comparing the new design to a proven design, demonstrating the new design, and reviewing the design deliverables before release. The manufacturer should conduct laboratory, animal, and in vitro tests, and carefully analyze the results before starting clinical testing or commercial distribution. "The manufacturer should be assured that the design is safe and effective to the extent that can be determined by various scientific tests and analysis before clinical testing on humans or use by humans. For example, the electrical, thermal, mechanical, chemical, radiation, etc., safety of devices usually can be determined by laboratory tests" [7]. These are usually tests specified by IEC standards.

Software system testing (sometimes known as software validation) is also conducted in the design verification phase. I have more to say about this in Section 5.2, Software System Testing. Two things are important to remember at the regulatory level. First, the software system tests need to be conducted on validated hardware, that is, hardware from pilot production that has been tested to show that it meets its specifications. Second, any tools or instruments used for the software system testing must be calibrated under the policies in Subpart G, Production and Process Controls.

And in keeping with the FDA emphasis on labeling, during verification all labeling and output must be generated and reviewed. Instructions or other displayed prompts must be checked against the manufacturer's and the FDA's standards and vis-à-vis the operator's manual. Testers should follow the instructions exactly to show that they result in correct operation of the device. Warning messages and instructions should be aimed at the user and not written in engineer's language. Any printouts should be reviewed and assessed as to how well they convey information. Patient data transmitted to a remote location should be checked for accuracy, completeness, and identification.

The FDA makes no specific mention of it in the regulations, although its guidance document mentions that "[d]esign verification should ideally involve personnel other than those responsible for the design work under review" [23]. This is a requirement under ISO. Certainly it is a good practice to have an independent tester or reviewer. It would help to have someone not intimately familiar with the device, and hence less likely to infer information that isn't there when reviewing the labeling, for example.

Design validation is also necessary for the device to show that it "conform[s] to defined user needs and intended uses" [24]. Design validation follows design verification and must be conducted on actual devices from manufacturing production using approved manufacturing procedures and equipment. This is because part of what you are validating is that the complete design transfer took place and manufacturing can build the devices repeatably.

Not all devices require clinical trials, but all must undergo some sort of clinical evaluation in which they are tested, in a simulated or (preferably) actual medical use environment, by the real customers, users, and patients the device is intended to help.

URL: https://www.sciencedirect.com/science/article/pii/B9780750685672000044