
INCISIVE VERIFICATION ARTICLE

MAY 2006

VERIFICATION PLANNING TO FUNCTIONAL CLOSURE OF PROCESSOR-BASED SoCs


ANDREW PIZIALI, CADENCE DESIGN SYSTEMS

Functional verification consumes more than 70% of the labor invested in today's SoC designs. Yet, even with such a large investment in verification, there is more risk of functional failure at tapeout than ever before. The primary reason is that the design team does not know where they are, in terms of functional correctness, relative to the tapeout goal. They lack a functional verification map, one that employs coverage as its primary element, for reference. Coverage, in the broadest sense, is responsible for measuring verification progress across a plethora of metrics and for helping engineers assess their location relative to design completion.[1] The map must be created by the design team up front, so they know not only where they are starting from (specification but no implementation) but also where they are going: fully functional first silicon. The metrics of the map must be chosen for their utility: RTL written, software written, features, properties, assertion count, simulation count, failure rate, and coverage closure rate. The map is the verification plan, an executable natural language document [2],[3] that defines the scope of the verification problem and its solution. The scope of the problem is defined by implicit and explicit coverage models.[1] The solution to the verification problem is described by the methodology employed to achieve full coverage: dynamic and static verification. Simulation (dynamic) contributes to coverage closure through RTL execution; formal analysis (static) contributes through proven properties. By annotating the verification plan with these (and other) progress metrics, it becomes a live, executable document that directs the design team to their goal.

Most verification planning today lacks the rigor required to recognize the full scope of the verification problem faced by the design team. The reason is that substantial effort is required to write a thorough verification plan, and if that plan is obsolete as soon as it is written, the effort is not justified. However, by transforming the verification plan into an active specification that controls the verification process, the planning effort is more than justified. This article illustrates the application of an executable verification plan to a processor-based SoC.

THE SoC
A modern SoC is composed of one or more processors, embedded software, instruction and data caches, large register sets (configuration, scratchpad, and architectural registers), multiple buses, dedicated hardware accelerators, and a dozen or more industry-standard external interfaces. Typically, a subset of these components is reused from previous designs or acquired from commercial IP vendors; the remainder is built from scratch for the new design. The verification plan must address the reuse requirements of design and verification IP as well as describe the verification processes for the new components. Figure 1 below is a block diagram of the Eagle SoC used for illustration in this article. Under control of embedded code running on the RISC processor, an encrypted image is read from the MII interfaces, decrypted by the DES engine, rendered by the DSP, and written to the VGA interface for display. The RISC processor, DSP, MACs, USB, and hard disk drive (HDD) blocks are all off-the-shelf, pre-verified components. The DES engine and LCD/VGA blocks are new, as is the embedded RISC code.


Figure 1: Eagle block diagram (RISC, DSP, DES, DMA, UART, LCD/VGA, HDD I/F, MAC 1, MAC 2, and USB blocks attached to an AMBA AHB through AHB/Wishbone wrappers, with 64KB and 8KB FIFO memories and a 4KB dual-port memory)

VERIFICATION PLANNING
Verification planning is the process of analyzing the design specification with the goal of quantifying the scope of the verification problem and specifying its solution. The verification problem is quantified with coverage models while the solution is described as a functional specification for the verification environment. The problem and the solution are addressed in the subsequent sections. But how should the verification plan be organized? Functional Requirements contains the interface and core features derived from the design functional specification. Design Requirements contains the interface and core features derived from the design micro-architecture specification, sometimes referred to as a Design Specification. Verification Views groups references to other parts of the verification plan by function or milestone, as discussed later. Finally, Verification Environment Design is the functional specification for the verification environment.

SPECIFICATION ANALYSIS
The functional specification of a design is supposed to capture its requirements and intended behavior. Since functional verification demonstrates that the intent of a design is preserved in its implementation, we need a source of design intent. Ideally the specification is a formal requirements document, but in practice it may be augmented by additional material such as a marketing requirements document or informal engineering notes. Furthermore, the specification is often partitioned into an architectural specification and a design specification. The architectural specification restricts its description to black-box functional requirements, whereas the design specification addresses implementation details such as pipeline depths and internal bus widths. Our objective in specification analysis is to extract the required features of the design. Two approaches are available to us, depending on the size and kind of specification: top-down analysis and bottom-up analysis.

TOP-DOWN ANALYSIS
Top-down specification analysis addresses the problem of distilling the requirements captured in a large specification (20 pages or more) into a manageable verification goal. The term top-down refers to analysis that proceeds from a higher abstraction level to a lower one. This abstraction gap is bridged through interactive discussion in a brainstorming session. The architect, designer, verification engineer, and manager assemble to identify and structure the features of the design. Typically, the architect draws a block diagram of the design on the whiteboard to serve as a discussion vehicle. Figure 2 below uses a block diagram of the DES engine as an example.

Figure 2: DES engine block diagram (AHB interface, 1K x 32 input and output FIFOs, data control block, and encrypt/decrypt block on the AMBA AHB)

The DES engine receives data, consisting of encryption keys and cipher text, through the input FIFO. Sixteen cycles after the last datum is written into the FIFO, the clear text is written to the output FIFO. The input and output FIFOs are each configured as 1K x 32. The data control block provides an interface to the AMBA bus for data and control flow for the encrypt/decrypt block. One of the identified features of the DES engine is flow control management between the AHB interface and the data control block by the input FIFO. We record this feature in the emerging verification plan as "AHB-to-data control block flow control" and write a semantic description of the feature. The semantic description concisely captures the purpose of the functional coverage model that will be designed to record the extent to which the feature is exercised. The semantic description of this feature might read: "The input FIFO manages flow control between the AHB interface and the data control block to accommodate data transfer rate differences between the two blocks. There are 1,024 32-bit entries in the FIFO."



BOTTOM-UP ANALYSIS
In contrast to top-down analysis, bottom-up specification analysis is suitable for moderate-sized specifications of 20 pages or less. We walk through the specification section by section, paragraph by paragraph, and sentence by sentence to identify features and their associated attributes and behaviors. Consider again the DES block diagram description from the previous section as part of its specification: "The DES engine receives data, consisting of encryption keys and cipher text, through the input FIFO. Sixteen cycles after the last datum is written into the FIFO, the clear text is written to the output FIFO. The input and output FIFOs are configured as 1K x 32. The data control block provides an interface to the AMBA bus for data and control flow for the encrypt/decrypt block." We note that the first sentence describes a feature: input data buffering. That sentence mentions two attributes (encryption key and cipher text) and one behavior (data reception). However, since the data buffering is implemented using a FIFO, the FIFO depth and the sequential ordering of read and write operations are also implied attributes. This feature and its attributes are used as the starting point for designing its associated coverage model, as with top-down analysis; a sketch of such a model appears below.
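For illustration, the skeleton of such a coverage model might look as follows in e, the language used for the article's other examples. This is a minimal sketch under stated assumptions: the monitor unit, its port, and the word-kind type are invented here for illustration, not taken from the Eagle design.

   // Hypothetical sketch: coverage skeleton for the "input data buffering"
   // feature found by bottom-up analysis. All names are illustrative.
   type des_word_kind : [KEY, CIPHER_TEXT];

   unit des_input_buffering_monitor {
      fifo_current_ptr : in simple_port of uint is instance;  // FIFO depth
      word_kind : des_word_kind;   // set by the monitor before each write
      event fifo_write;            // emitted by the monitor on each FIFO write

      cover fifo_write is {
         item word_kind;           // attribute: encryption key or cipher text
         item FIFO_depth : uint = fifo_current_ptr$ using ranges = {
            range([0]); range([1..1023]); range([1024])  // empty, partial, full
         };
         cross word_kind, FIFO_depth;  // implied attribute: depth at each write
      };
   };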

COVERAGE MODEL DESIGN
Once the design features have been extracted from the specification through top-down or bottom-up analysis, the next step in quantifying the scope of the verification problem is designing an associated coverage model for each feature. That process is summarized in the following subsections as part of the planning-to-closure flow.

TOP-LEVEL DESIGN
Coverage model top-level design consists of writing the semantic description of the model, selecting its attributes, and choosing a model structure. The semantic description of the model falls out of the specification analysis procedure described earlier. If bottom-up analysis was employed, the coverage model attributes may also have been selected. For the DES engine feature example, flow control management, the associated attributes are FIFO depth, AHB interface write, and data control block read. The coverage model structure reflects the relationship among the attributes and may be a matrix, a hierarchy, or a hybrid of the two. A matrix model, wherein each attribute defines a dimension of a coverage space, is suitable for this simple model. The top-level design of the model is added to the verification plan in the Verification Environment Design, Coverage section as a table. The left column contains the feature name. The second column contains the attribute names or sampling times. The values column lists the observed attribute values to be recorded. The last column lists the name of the verification environment monitor responsible for observing the attribute. For example, the DES input FIFO monitor is responsible for counting the number of times the values TRUE and FALSE have been observed on the AHB write signal.

Feature                 | Attributes or sampling times                   | Values                    | Monitor
Flow control management | @fifo_clock                                    |                           |
                        | FIFO depth                                     | 0, 1, 2..1022, 1023, 1024 | DES input FIFO
                        | AHB write                                      | FALSE, TRUE               | DES input FIFO
                        | Data control block read                        | FALSE, TRUE               | DES input FIFO
                        | FIFO depth, AHB write, data control block read | C{}                       |

Each row of the table is either associated with an attribute (or set of attributes) or a sampling time. For example, the first row of this table indicates that all attribute values are to be sampled on the FIFO clock. The second, third, and fourth rows are associated with each attribute, while the last row defines the complete coverage model. This model is composed of the attributes FIFO depth, AHB write, and data control block read, organized as a matrix model and indicated by the notation C{}. Crossing the five FIFO depth ranges with the two boolean attributes yields a coverage space of 5 x 2 x 2 = 20 points.


DETAILED DESIGN
Once the top-level design of a coverage model is complete, it must be mapped either into the verification environment for simulation or into one or more properties for formal analysis. If we choose to achieve the goal of this model using simulation, the detailed design of the model answers three questions: (1) What must be sampled for the attribute values? (2) Where in the verification environment should we sample? (3) When should the data be sampled and correlated? The "what" in the first question refers to the element of the design under verification (DUV), whether RTL or software, that is to be sampled for each attribute. For example, we might sample the signal fifo_current_ptr for the attribute FIFO depth. The "where" in the second question means the location in the object or module hierarchy at which to instantiate the coverage model. For example, in an eRM-compliant [5] e language environment [7], the coverage group that implements the coverage model would be located in an agent. The "when" in the third question refers to the time that data associated with an attribute should be sampled or correlated. The attributes of our example are to be sampled on each FIFO clock edge, which may be carried by the signal fifo_clock.
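These three decisions can be made concrete in e. The sketch below is illustrative only; the unit name, its agent, and the port bindings are assumptions rather than details of the Eagle verification environment:

   // Hypothetical sketch: the what/where/when decisions expressed in e.
   // Unit, agent, and port names are illustrative assumptions.
   unit des_input_fifo_monitor {
      // "What": the design elements sampled for each attribute
      fifo_clock       : in simple_port of bit  is instance;
      fifo_current_ptr : in simple_port of uint is instance;

      // "When": attributes are sampled on each FIFO clock edge
      event flow_ctrl_mgmt is rise(fifo_clock$);
   };

   // "Where": in an eRM-compliant environment, the coverage group is
   // hosted by a monitor instantiated within an agent
   unit des_input_fifo_agent {
      monitor : des_input_fifo_monitor is instance;
   };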



The correlation time is the time at which the most recently sampled values of a model's attributes are recorded as a set. For example, if each attribute is sampled on its own clock, the current attribute values may be captured as a set at yet another event. For the flow control management coverage model, the sampling and correlation times of the attributes are one and the same. The detailed design decisions are also recorded in the Verification Environment Design section of the verification plan.
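When the sampling and correlation times differ, each attribute can be captured on its own clock and the captured set recorded at a separate correlation event. Below is a minimal sketch extending the monitor sketched above; all of the signal, field, and event names are again assumptions made for illustration:

   // Hypothetical sketch: per-attribute sampling with a separate
   // correlation event. All names are illustrative.
   extend des_input_fifo_monitor {
      ahb_clock : in simple_port of bit is instance;
      dcb_clock : in simple_port of bit is instance;
      ahb_write : in simple_port of bit is instance;
      dcb_read  : in simple_port of bit is instance;

      captured_ahb_write : bit;
      captured_dcb_read  : bit;

      event ahb_clk_rise is rise(ahb_clock$);
      event dcb_clk_rise is rise(dcb_clock$);
      event correlate    is rise(fifo_clock$);   // correlation time

      // capture each attribute on its own clock...
      on ahb_clk_rise { captured_ahb_write = ahb_write$; };
      on dcb_clk_rise { captured_dcb_read  = dcb_read$; };

      // ...and record the captured set at the correlation event
      cover correlate is {
         item captured_ahb_write;
         item captured_dcb_read;
         cross captured_ahb_write, captured_dcb_read;
      };
   };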


IMPLEMENTATION
The third step in building a coverage model is implementation. The model may be implemented in a high-level verification language (HVL) such as e or SystemVerilog; in a property specification language such as SystemVerilog Assertions (SVA) or PSL; in an RTL language like VHDL or SystemVerilog; or in a conventional programming language such as C or C++. Of course, the implementation is much easier in an HVL than in other languages. In e, the implementation of the flow control management model would look like this:


   cover flow_ctrl_mgmt is {
      item FIFO_depth : uint = fifo_current_ptr$ using ranges = {
         range([0]); range([1]); range([2..1022]); range([1023]); range([1024])
      };
      item AHB_write : bool;
      item DCB_read : bool;
      cross FIFO_depth, AHB_write, DCB_read
   };
   // the coverage group is sampled on each FIFO clock edge
   event flow_ctrl_mgmt is rise(fifo_clock$);

DYNAMIC VERIFICATION
The answer to the second part of the question answered by a verification plan (what the solution to the verification problem is) falls into one of two categories: dynamic verification or static verification. Dynamic verification, also known as simulation, requires, in addition to coverage measurement, applying stimulus to the DUV and checking its response to the applied stimulus. The most efficient means of rapidly achieving the coverage goals defined in the previous section is coverage-driven verification (CDV).[1] CDV employs constrained random stimulus generation to produce stimulus that is functionally valid, yet likely to activate corner cases of the design. An optimal CDV environment is autonomous, meaning the environment requires no external direction, such as a test, to steer it toward generating high-value stimulus. The constraints that might otherwise be distributed among a set of tests are instead built into the verification environment itself, allowing symmetrical simulations to be distributed across a simulation farm until functional coverage closure is reached.

A complementary stimulus source, useful for bringing up a design in the beginning, is directed tests. Directed tests that employ a constrained random environment are simply a set of tight generation constraints layered on top of the base CDV environment; hence, they are usually quite short and easy to write, as the sketch below illustrates. The stimulus aspect of the verification environment is designed in the Verification Environment Design, Stimulus section of the verification plan.
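As an illustration of how small such a directed test can be, the following sketch constrains a hypothetical generated transfer struct; the struct, its fields, and the bring-up scenario are assumptions made for this example, not part of the Eagle environment:

   // Hypothetical sketch: a directed test expressed as tight generation
   // constraints layered on the CDV environment. All names are illustrative.
   type des_transfer_kind : [READ, WRITE];

   struct des_transfer {
      kind   : des_transfer_kind;
      length : uint;
   };

   // The "test" is just a file of constraints loaded on top of the
   // base environment:
   extend des_transfer {
      keep kind == WRITE;        // bring-up: exercise only the input path
      keep length in [1..4];     // short transfers first
   };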

The checking aspect of a dynamic verification environment must ensure that the response of the DUV to applied stimuli is correct in both the data and temporal domains: the values driven out of the DUV, and their sequential behavior, must adhere to the specification. Checking is often implemented using a reference model, a scoreboard, or a distributed approach such as assertions.[6] The checking approach for each feature is specified in the Verification Environment Design, Checkers section of the verification plan.

STATIC VERIFICATION
Static verification, also known as formal analysis, is the second solution to the verification problem. It encompasses model checking and theorem proving, among a variety of other techniques. Unlike its dynamic counterpart, static verification requires no stimuli. Instead, it demands formally written design requirements: properties or theorems. The properties are specified and designed in the Verification Environment Design, Checkers section of the verification plan. They may be implemented in one of the industry-standard property specification languages such as SVA, PSL, or OVL.

VERIFICATION PLAN AUTOMATION
Once the verification plan has been written, it may be referenced as a specification for the verification process and updated as verification proceeds. However, unless the plan actually controls the verification process (not unlike a C source file controlling application behavior), it will tend to become yet another element of project documentation, obsolete almost as soon as it is written. The next section describes how to turn a verification plan into an executable specification that remains a natural language document while also becoming machine readable. This is followed by linking the verification plan to the verification environment and using it to control and monitor verification progress. Cadence Incisive Manager [8] is used to illustrate the executable verification plan.


VERIFICATION PLAN REQUIREMENTS


In order for a verification plan to serve the needs of project team members and also control the verification process, it must be a natural language document that is also machine readable. A natural language document, one spoken or written by humans, is required because people conceive ideas and exchange them with one another in their native tongues.[2],[3] This allows the document writer to modulate both the abstraction level and the ambiguity in the verification plan, trading off precision for implementation freedom. The verification plan must also be machine readable so that it may be linked to the verification environment and progress metrics recorded during verification runs (simulations or proofs).

VERIFICATION PLAN TO VERIFICATION ENVIRONMENT LINKAGE

The verification plan serves as an annotated map of verification progress, showing users at all times their location on the road to functional closure. The plan also serves as the functional specification for the verification environment, in particular that of the coverage aspect. Since the coverage aspect is implemented with functional coverage models, as well as code coverage and assertion coverage, each coverage section of the verification plan must be associated with its implementation. This association allows current coverage data to be displayed in the context of the verification plan. We support two types of associations: forward annotation and backward annotation.

Forward annotation is used to associate each coverage section of the verification plan with its implementation. Since the verification plan serves as the functional specification for the coverage aspect of the verification environment, references to implemented coverage models are inserted in the plan as the verification environment is designed and implemented. The primary use of forward annotation is for a new verification environment implemented from a verification plan. An example of a forward annotation in the verification plan is "cover group: flow_ctrl_mgmt," where "cover group" is text associated with a style and flow_ctrl_mgmt is the name of an e coverage group.

Backward annotation is used to associate a coverage model implementation with a section of a verification plan. The linkage is the same as that of forward annotation, but the direction is reversed: code is added to the coverage model of a legacy verification environment to associate the model with a coverage section of a new verification plan. This is used when legacy verification IP is used to implement part of a new verification plan. The association is easily accomplished by adding a verification plan aspect to a legacy e environment, leaving the original environment unchanged. However, annotation code could also be added to a verification environment implemented in another language to reference a verification plan section. Below is an example of backward annotation implemented in e:

   cover flow_ctrl_mgmt using vplan_ref = "Design Cores/Data Control Block" is {
      item FIFO_depth : uint = fifo_current_ptr$ using ranges = {
         range([0]); range([1]); range([2..1022]); range([1023]); range([1024])
      };
      item AHB_write : bool;
      item DCB_read : bool;
      cross FIFO_depth, AHB_write, DCB_read
   };

This is the same coverage group introduced earlier, with the addition of the vplan_ref option. vplan_ref associates this coverage group with the Design Cores, Data Control Block section of the verification plan.

VERIFICATION PLAN VIEW


In addition to the feature sections of the verification plan that define the functional requirements of the design, the Verification Views section of the plan provides a set of tailored views into verification progress. The views may be oriented toward cross-module functional areas, such as error detection and recovery, or toward time-based project checkpoints, with specified target goals associated with each. The latter are milestones that are naturally defined for any design project. Each view may be considered a concern that references one or more relevant sections of the verification plan, not unlike an aspect in an aspect-oriented programming language. It is also customized with view-specific coverage goals and deadlines.

SESSION SPECIFICATION
A second input into Incisive Manager, in addition to an executable verification plan, is a file that describes a session. A session is a set of dynamic or static verification runs. The session input format (vsif) file defines the tests, properties, and other attributes of a set of simulations or formal proofs that aim to achieve a subset of the coverage goals, such as an overnight or weekend regression run. When a session completes, a session output format (vsof) file captures the results of the session: runs passed and failed, log files, trace files, and other information required for debugging.
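For concreteness, a vsif file might look like the sketch below. The session, group, and test names are invented, and the attribute set shown is an assumption based on common usage; consult the Incisive Manager documentation for the definitive format:

   // Hypothetical vsif sketch for an overnight DES regression.
   // Session, group, test, and attribute names are illustrative assumptions.
   session des_nightly {
      top_dir : /regressions/eagle;      // where session results are collected
   };

   group des_engine {
      test flow_ctrl_random {
         run_script : run_des_sim.sh;    // script that launches one simulation
         count      : 500;               // number of random-seeded runs
      };
   };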

FUNCTIONAL CLOSURE
With the verification plan written and annotated, the verification environment developed, and properties written for those features to be verified using model checking, it's time to put the plan to use. What do we mean by functional closure?


Simply stated, functional closure means achieving the functional verification goals specified in the verification plan. The goals are defined using the chosen metrics: coverage, simulation failures, property proofs, and DUV and verification environment code. As described earlier, the goals are typically grouped by functionality or timeframe in the Verification Views section of the verification plan. We use the verification plan view, also referred to as the vPlan view, in Incisive Manager to analyze verification progress. The vPlan view displays an outline of the verification plan with a percentage next to each section name. This percentage represents the fraction of each section's coverage goal achieved by the loaded vsof files. Two types of analysis are required to understand the next steps toward functional closure: failure analysis and coverage analysis.

Figure 3: Incisive Manager first failures view

FAILURE ANALYSIS
Failure analysis is the process of reviewing failed simulation runs in an attempt to correlate failures with other run parameters. A failure means that one or more verification environment checkers detected and reported functional violations. For example, a vsif file may have specified 500 simulation runs for a particular session, and 75 of those runs failed. How many distinct errors were reported? Which run provoked a particular error in the fewest simulation cycles? Are specific errors always detected together with other errors? These questions must be answered in order to quickly diagnose and repair each unique failure. Incisive Manager facilitates failure analysis through the first failures view in its runs window (see Figure 3 above). Filtering, selecting, and sorting operations are used to rapidly zero in on common failure modes.

COVERAGE ANALYSIS
As functional, code, and assertion coverage populate the coverage models defined by the verification plan, coverage holes (regions of missing coverage) become apparent. Coverage analysis [1],[9],[10],[11],[12] is required to determine why these holes exist and, for those that are valid, how to fill them. For example, a functional coverage hole may be due to a faulty coverage model that demands impossible or illegal behavior be observed. However, it may also be due to missing stimulus required to activate the behavior, or to missing hardware or software logic required to implement it. Techniques such as coverage hole aggregation, projection, and selection are useful for identifying the commonalities shared by regions of missing coverage. For example, aggregation might reveal that every missing point of the flow control model shares the attribute value FIFO depth = 1024, pointing to stimulus that never fills the input FIFO.

As the reason for each coverage hole is discovered, the appropriate response (coverage model correction, enhanced stimulus generation, DUV fixes, or more simulation) is taken. With each iteration through the closure cycle (plan, implement, simulate/prove, measure, analyze), we refine the coverage goals and adjust the verification environment until we achieve our defined goals.

CONCLUSIONS
The verification planning-to-coverage-closure process discussed in this article has been broadly adopted by Cadence (and former Verisity) customers since 2003, with spectacular results. The number of first-pass, fully functional silicon designs has increased even though design complexity has continued to rise. Our customers have embraced rigorous upfront verification planning, with its attendant costs, because the resulting verification plan is transformed from documentation (an after-the-fact, obsolete record of the past) into a control specification that drives their verification process and reflects the state of the process throughout the design cycle. The plan-to-closure steps of planning, plan automation, and functional closure comprise a process that is repeatable, predictable, and reusable for both ground-up and derivative designs. This article illustrated each of the steps in detail with examples drawn from a representative SoC design and one of its functional blocks. Where do we go from here? The opportunities for verification process automation at the front end of the design process are vast. Beyond imbuing the verification plan with reuse semantics that mirror those found in verification and design IP today, the plan itself may be both mechanically linked to its parent specifications and derived from them using an automated process.



Further, accurate extraction and understanding of design intent from specifications may be enhanced with the application of knowledge engineering and expert systems.

REFERENCES
[1] Andrew Piziali, Functional Verification Coverage Measurement and Analysis, Springer, 2004.

[2] Vincent E. Guiliano, Arthur D. Little, "In Defense of Natural Language," proceedings of the ACM annual conference, 1972.

[3] Peggy Aycinena, "In Defense of Natural Language: A Conversation with Andrew Piziali," http://www.aycinena.com/index2/index3/archive/in%20defense%20of%20natural%20language.html (as of 11/23/05), March 30, 2005.

[4] Oded Lachish, Eitan Marcus, Shmuel Ur, Avi Ziv, "Hole Analysis for Functional Coverage Data," proceedings of the 2002 Design Automation Conference.

[5] Cadence Design Systems, e Reuse Methodology Manual, 2002.

[6] Janick Bergeron, Writing Testbenches: Functional Verification of HDL Models, Kluwer Academic Publishers, 2003.

[7] The e Functional Verification Language Working Group, The e Language Reference Manual, http://www.ieee1647.org/

[8] Cadence Design Systems, Incisive Manager, http://www.cadence.com/products/functional_ver/vmanager/index.aspx (as of 11/23/05).

[9] Michael Kantrowitz and Lisa M. Noack, "I'm Done Simulating; Now What? Verification Coverage Analysis and Correctness Checking of the DECchip 21164 Alpha Microprocessor," proceedings of the 1996 Design Automation Conference.

[10] Sigal Asaf, Eitan Marcus, and Avi Ziv, "Defining Coverage Views to Improve Functional Coverage Analysis," proceedings of the 2004 Design Automation Conference.

[11] Scott Taylor, et al., "Functional Verification of a Multiple-Issue, Out-of-Order, Superscalar Alpha Processor: The DEC Alpha 21264 Microprocessor," proceedings of the 1998 Design Automation Conference.

[12] Alon Gluska, "Coverage-Oriented Verification of Banias," proceedings of the 2003 Design Automation Conference.

This paper was originally presented at DesignCon 2006 in Santa Clara, CA, February 2006.

