Strategies for Mainstream Usage of Formal Verification


Raj S. Mitra
Texas Instruments, Bangalore
rsm@ti.com

Abstract

Formal verification technology has advanced significantly in recent years, yet it has seen no noticeable acceptance as a mainstream verification methodology within the industry. This paper discusses the issues involved in deploying formal verification in a production mode, and the strategies that may need to be adopted to make this deployment successful. It analyses the real benefits and risks of using formal verification in the overall verification process, and how to integrate this new technology with traditional technologies like simulation. The lessons described in this paper have been learnt from several years of experience with using commercial formal verification tools in industrial projects.

Categories and Subject Descriptors: B.6.3 [Hardware]: Logic Design - Design Aids – Verification.

General Terms: Verification, Experimentation.

Keywords: Formal verification, Emerging technologies.

1. Introduction

At the last Design Automation Conference, Jan Rabaey made a comment on formal verification [1], mentioning the possibility that someday IEEE will publish a comic strip on adventures in formal verification. His reasoning was that the word "adventure" implies an image of "pretty much ad-hoc and in the exploratory space" and that there is not yet a production process; hence the analogy with the state of affairs of formal verification (FV) today. This comment is not isolated; most users of FV tools would testify to the "adventurous" nature of FV usage in industrial projects (not on toy designs), and the frustrations it causes. This is in spite of the recent advances seen in this technology and the numerous research papers published in conferences every year. There have been pockets of acceptance – some components of some products (processors and controllers) have utilized FV – but by and large, a general acceptance has been lacking.

What is first needed today is an unbiased root-cause analysis of this systemic failure – why are engineers reluctant to use this promising technology, and is the technology usable at all? Secondly, having become aware of the limitations, both in technology and in management, we need to define a feasible strategy for the mainstream adoption of formal verification.

This paper addresses these two topics. It does not propose any new algorithms or abstraction techniques for FV. Instead, the main question asked (and answered to some extent) here is: What does it take to move FV from a fringe technology to mainstream usage (i.e. non-experimental, planned production usage, with predictable outcomes)? Throughout this paper, we are referring to the usage of commercial production quality tools (not university prototypes) – today these are typically in the areas of Model Checking and Sequential Equivalence Checking. We are also not referring to Combinational Equivalence Checking, which is widely used in the industry now. In this paper, we have used the acronym FV to generically refer to both "Formal Assertion Verification" and "Sequential Equivalence Checking", FAV to specifically refer to the former, SEC to refer to the latter, and ABS to refer to "Assertion Based Simulation". Although, strictly speaking, ABS is not an FV technique, it is the precursor to FV and hence needs to be treated in the same context.

We start this paper with a brief summary of the current verification process, and then enumerate the main challenges in adopting FV in this context. Then we analyze the advantages (returns on investment) we can expect from FV, and subsequently also discuss that technical advantages are not the sole criteria that determine the success of adoption. Finally, based on the previous discussions, we suggest strategies for making the usage of FV more widespread in the industry than it is today.

The strategies suggested in this article are based on lessons learnt during the application of FV in our organization, over the last five years and over multiple projects, with several best-in-class commercial FV tools. We have cited several lessons taken from research in the diffusion of innovations, but these have been actually put into practice in the context of FV deployments in our organization.

2. Current Verification Process

To set the context for this paper, we begin with a short introduction to the verification process, as it is practiced today. Most of this discussion will not be new to the readers, but it will help in appreciating the problems with, and the value of, adopting FV in a mainstream mode. We will deal only with functional verification, since only that is relevant for formal verification.

A SOC is created by hooking up several IPs. The functionality of each IP is verified separately, and then the SOC / subsystem hookup is verified too (its reset conditions, performance characteristics, etc). The individual IPs are verified partly by the designers, who run a few directed smoke tests and filter out the most obvious bugs, and mostly by the verification engineers, who do an elaborate job of developing testbenches and eliminating all kinds of bugs from the DUT (design under test) through directed, random, and directed-random tests. Although a fine balance is usually worked out between the two teams for different parts of the verification, finally the responsibility of finding the bugs in the implemented system is shouldered by the verification engineer alone.

[Figure 1 omitted. Panel (a) plots Coverage or Detected Bugs against Simulation Time, annotated "Stop simulation now?". Panel (b) plots Items found faulty at final audit against Time, annotated "Quality improves by removal of special causes (detecting and fixing individual causes of defects)", "Stabilized (i.e., within Control Limits)", and "Continued improvement is expected but will not happen".]

Fig 1: (a) Quality-Time Continuum of Verification by Simulation; (b) A Typical Path of Frustration
The simulation verification process is a traversal of the Quality-Time Continuum (Fig 1-a), whose principal value is a monotonic increase in quality, and not the achievement of an absolute level of quality. The final result is usually a trade-off between the two parameters – quality (measured by coverage data, bug trends, etc) and the project schedule. Verification is suspended when the curve reaches the desired levels of coverage metrics, or when the clock runs out and it is time to tape out. We say suspended, and not stopped, because there is no such thing as a target of achieving "complete" verification through random and directed-random simulation, due to the very nature of simulations, and hence there is really no end to the simulation process.

This simulation process, essentially random and incomplete, has a strong theoretical foundation in the statistical testing principles expounded by Deming [2]. A testcase is nothing but a sample of the different paths in the module, which is inspected for conformance with the specification. Hence, valuable lessons from past work on product quality measurement and improvement may be applied to the field of verification also. We discuss some of them below.

The plateau / saturating nature of the curve in Figure 1-a can be seen as an extension of Figure 1-b (which is a variation of Figure 33 of [2]) from the context of product manufacturing – the first set of results comes from the removal of "special" (i.e. specific) causes, after which the system reaches "statistical stability" and the curve plateaus to a level. Subsequent improvements can come only from addressing the "common" (i.e. process) causes and reducing the system's variation – i.e. (in this context) by improving the verification process through root-cause analysis of the gaps in verification. Without getting into the details of Deming's theory, the important analogies to be drawn from his work are the following:

1. The definition of quality depends on the agent (user or worker), and may change with time. However, quality can be defined operationally in terms of some indices that can be measured (here, several coverage metrics, functional as well as structural). An operational definition puts communicable meaning into a concept, and has to be agreed upon in advance. The indices should measure the quality of the product (the DUT) as well as of the procedure of measurement (its verification).

2. Inspection, automatic (e.g. simulation) or manual (code reviews), simply does not find all the defectives, especially when they are rare. Hence, failure in execution must be defined in advance, in terms of measurable metrics.

3. Distribution and trends in the measurement indices indicate the health of the system. Control limits on the indices indicate whether the system is in stable statistical control or not (a worked example follows this list), and once it is observed to be stable, no further improvement is possible by quick fixes; improvements to the processes are now necessary to reduce the variation of the system.

4. The indices are intended to show whether the system is stable; they should never be used to indicate specific defects. In verification, code coverage data should never be used to make specific code fixes; it should only be used to indicate that some broad areas of the code are yet unverified.

5. Statistical stability of the process of measurement is also vital – e.g. will we get the same results if we change the simulator or the engineers? It is also important to give the measurement instrument a good context (environment) to do its work, e.g. measuring an ageing fluid is better done at its source. An example of this in the simulation context is that assertions inserted by designers help improve the quality of verification.
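To make item 3 concrete, here is one standard way of placing control limits on count data such as "new bugs found per week". This is our illustrative sketch using a Shewhart c-chart, not a formula taken from [2]. With $c_i$ the bug count in week $i$, over $N$ observed weeks:

  \bar{c} = \frac{1}{N}\sum_{i=1}^{N} c_i, \qquad UCL = \bar{c} + 3\sqrt{\bar{c}}, \qquad LCL = \max\left(0,\ \bar{c} - 3\sqrt{\bar{c}}\right)

For example, if $\bar{c} = 16$ bugs/week, then $UCL = 16 + 3\sqrt{16} = 28$ and $LCL = 4$. Weekly counts staying within these limits, with no systematic trend, suggest the process is in stable statistical control, so further gains must come from process changes rather than quick fixes.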
In summary, the simulation process, random and inherently incomplete, is very similar to statistical inspection techniques. The latter recommends inspection not to remove all the defective products, but so that its metrics can be used as indicators for identifying areas of process improvement – in recruitment, training, ensuring input quality, design, and supervision. But this is not the purpose for which simulation is used today; its main purpose is to detect bugs, and it is no surprise that it falls far short of expectation, especially in the context of today's large and complex SOCs.

This recognition of the gap in verification for large SOCs, further triggered by the attention attracted by the Pentium bug in both academia and industry, has spurred the demand for FV in recent years. But it quickly became apparent that FV tools are a discontinuity from current simulation practices – they cannot be used in the same context and by the same users as simulation, and hence their acceptance did not really take off in the verification industry.

3. Problems with FV Usage

FV introduces a shift (discontinuities) from the current verification practices in the following areas:

1. While assertions can also be used with simulations, the styles of coding differ significantly between assertions written for FAV and assertions written for ABS. It is necessary to write FAV-friendly assertions in a monitor style [3] that is optimized to minimize the introduction of extra state elements (a sketch follows this list). In contrast, assertions and code written for ABS typically pay no heed to the extra state elements being added, and this is the coding practice that verification engineers are currently accustomed to.

2. FV is applicable to small modules only, due to the capacity limitations of current technology. However, breaking up the DUT into smaller parts is increasingly being done in the simulation context also, because it is getting increasingly difficult to feed in relevant inputs, to activate selected parts of the code, from a far away module boundary. This requires that we break up the DUT into smaller logical parts (structurally or temporally), add constraints to manage this separation, and verify the parts separately – and the same process applies for both simulation and FV. But the problem gets dramatically acute for FV, where the sizes of the individual modules are much smaller than what can be handled by simulation.

3. The capacity problem, compounded with the many innovative (and tool-dependent) abstraction techniques available to solve it, manifests in a more serious problem – the lack of predictability of the verification process. Given a DUT, there is no metric (based on the number of gates, the number of state elements, the structure of the design, etc) which can predict, before starting verification, whether we will finally get some answer or end up with a capacity bottleneck. As a result, unplanned manual abstractions are usually added during the project cycle, thereby breaking the projected verification schedule. This is quite unlike the simulation context, where we know that we will get "some" results on "any" DUT, and can make a reasonable experience-based schedule prediction.

4. The partitioning of modules (to handle capacity limitations) usually introduces artificial boundaries, and leads to the writing of constraints on these boundaries. Until these constraints have been fully refined, FV catches any and every hole in them and reports these holes as failures. These are called "false failures" because they are not design bugs but environment modeling errors, and they have to be fixed before the real design bugs can be caught. In the initial stages of applying FV, the number of reported false failures can be significant, and a lot of time is spent in refining the constraints to eliminate them. (This item is not critical by itself, but manifests in the next two items.)

5. FV requires a very close interaction with the design team, to (a) write assertions for elements inside the module (to overcome the capacity problem), and (b) help in writing and refining constraints (to eliminate false failures). The latter part is very critical, because the very close interaction it demands with the design team makes it a show stopper occasionally.

6. The writing of efficient properties and constraints, and intensively micro-managing their iterations in a very structured way [4], is the only way to "squeeze" results from FV tools on medium sized modules. But this practice is quite alien to current simulation engineers, who are more used to writing large testbench software. Some organizations have solved this problem by maintaining a central team of FV experts, whose services are available for any project that requires the application of FV. These FV experts are intensively trained in writing properties efficiently and in the usage of FV tools, and they are able to share FV expertise and insights (and also code – reusable assertions and glue logic) across multiple projects. Most of the reported success stories in FV, i.e. the application of FV to solve complex verification problems which have defied a simulation solution, have been brought about through such central teams of FV experts. But this model of application is not scalable, and also brings management problems with it – the central team is often considered alien by the project team, culturally and by organizational hierarchy. Forming teamwork between these two teams is a management challenge, and requires extra initiatives (e.g. appreciations, incentives, and interventions).

7. There is no metric for FV to show its progress – it is either completely done (pass or fail) or not done (run out of capacity, or bounded proofs). (In some restricted situations bounded proofs are useful, e.g. a 5 cycle bounded proof for a processor whose pipeline latency is 4 can be construed as a full proof.) Hence, other metrics are used for FV, to indicate under-specification (not enough properties written) or over-constraints, but these do not correlate with simulation metrics.
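To illustrate the coding-style gap in item 1, the following SystemVerilog sketch contrasts a typical ABS-style checker, which builds auxiliary state procedurally, with a monitor-style property that leaves the bookkeeping to the proof engine. The signal names (clk, rst_n, req, gnt) and the 8-cycle bound are hypothetical, not taken from the paper:

  // ABS-style checker: explicit auxiliary state (a counter). Convenient in
  // simulation, but every added flop enlarges the state space FV must explore.
  logic [3:0] wait_cnt;
  always @(posedge clk or negedge rst_n) begin
    if (!rst_n)            wait_cnt <= '0;
    else if (req && !gnt)  wait_cnt <= wait_cnt + 1'b1;
    else                   wait_cnt <= '0;
  end
  always @(posedge clk)
    if (wait_cnt > 4'd8) $error("req waited more than 8 cycles for gnt");

  // Monitor-style, FAV-friendly: a declarative property over the interface,
  // with no hand-written counters.
  property p_req_gnt;
    @(posedge clk) disable iff (!rst_n)
      req && !gnt |-> ##[1:8] gnt;
  endproperty
  a_req_gnt: assert property (p_req_gnt);

Both forms express the same intent (a grant arrives within a bounded wait), but the second adds no state elements of its own.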
Some of the above discontinuities in verification practice are related to the usage of the new technology (e.g. writing efficient assertions and constraints, using abstractions to overcome capacity limitations, etc), and these can be overcome, to a large extent, by training – in the theory of FV (to increase general awareness of FV technology), in the methodology to be followed, and in the usage of commercial tools and their specific engines.

The other category of discontinuities, which disrupt the flow of work, are considered more severe by the design and verification engineers, and are the real hurdles on the path to mainstream usage of FV. These are the items: (3) the low predictability of the FV process, (5) the requirement for a very close integration with the design team, (6) the difficulty of managing a central team, and (7) the lack of metrics to show the progress of verification. These process discontinuities, resulting from technology discontinuities, are the real barriers to the mainstream usage of FV.

4. Expected Benefits from FV

After this analysis of the problems with adopting FV, let us also analyze the benefits that we can expect from applying FV. Then we can compare the gain against the pain, and decide whether there is a ROI (return on investment) in adopting FV on a larger scale. Much of the frustration with FV occurs because most people are not quite clear on what to expect from it.

To the enthusiast and the inexperienced, FV promises the goal of "complete" verification. But, because of the capacity limitations of current tools and the consequent heavy usage of constraints to narrow down the scope of FV runs, it is effectively rendered as incomplete as simulation. Some would even argue that FV is more incomplete than simulation, because in simulation we at least have a set way to make progress when coverage goals are not achieved. Hence, completeness of verification is certainly not the goal of FV for the verification of medium to large sized circuits. (However, for very small circuits, i.e. with less than 100 state elements, this can still be considered a realizable goal.)

But before answering the question of FV's ROI, we need to first define the verification targets we are seeking to achieve.

[Figure 2 omitted. It plots Detected Bugs against Verification Time, with curves for ABS, FAV, and normal simulations.]

Fig 2: Bug Detection Rates

Understanding that both simulation and FV are incomplete (one inherently, the other effectively so), the main question applicable to both technologies is: when should we stop the verification? Today's verification signoff criteria are set by the simulation coverage metrics that we are well accustomed to, and our verification goal is to meet (or converge towards) them as fast as possible.

This leads to what we think is the key to solving the ROI question of FV – what we need is the quicker detection of bugs, not the detection of all bugs. And this is where FV does help, because in the initial stages of the design cycle it (and also ABS) catches bugs much faster than traditional simulations do. Fig 2 shows this relative advantage. It is a variation of the curve shown in [5], but with a curve for FAV also. Simulations start slightly later than FAV, because there is a time associated with creating testbenches for simulations, whereas the writing and validation of assertions can start along with the RTL development. The initial bug detection rate is also higher for FAV, because it is not dependent on creating testcases to hit the relevant situations. However, the FV curve terminates sooner than the simulations due to capacity limitations. Further progress, to catch deep corner case bugs with FV, comes with extreme effort (as discussed earlier). A similar benefit is also observed while using SEC [6] – designers can quickly apply SEC to validate recent changes against their previous golden model, and thus reduce costly iterations between the design and verification teams.
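The SEC use model above can be pictured as a miter: the golden and revised designs receive identical inputs, and the tool proves that their outputs can never diverge. The following SystemVerilog fragment is only a conceptual sketch, with hypothetical module and port names; commercial SEC tools construct and discharge this obligation internally:

  // Conceptual miter for sequential equivalence checking (SEC).
  module sec_miter (input logic clk, rst_n, input logic [7:0] din);
    logic [15:0] out_gold, out_new;

    // Both copies of the design see exactly the same stimulus.
    fifo_ctrl_golden  u_gold (.clk, .rst_n, .din, .dout(out_gold));
    fifo_ctrl_revised u_new  (.clk, .rst_n, .din, .dout(out_new));

    // Prove that the two implementations can never disagree.
    a_equiv: assert property (@(posedge clk) disable iff (!rst_n)
                              out_gold == out_new);
  endmodule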
This raises a related question – who should be the FV user: the designer or the verification engineer? So far, we have been talking only about the verification engineer, but if we take a holistic view of system process improvement (as in [2]), then it is evident that the system's variation can be reduced only by creating high quality designs in the first place, and that indicates that the designers should be the primary users of FV. Hence, we have to consider three very different use scenarios for FV (in sequence):

Designer: The designer writes white-box assertions at the IP level, and proves them (or applies SEC to prove iterative correctness) before handing over the IP to the verification team. The automated usage (described below) may be applied by the designers too.

Automated: Here, the design or verification team mostly relies on the usage of (a) plug-in assertion IPs (plug-and-play pre-built assertion packages for standard protocols), and (b) pre-defined checks within the FV tool (e.g. state machine reachability checks, dead code checks, etc – a sketch follows below), to catch as many bugs as early as possible. It is quite likely that some of these checks will force the FV tool into its capacity limitation, but such checks will just need to be run later through simulations again.
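The pre-defined checks mentioned above are essentially cover properties generated over the design's structure; if the tool proves a cover target unreachable, the corresponding state or branch is dead code. A minimal SystemVerilog sketch, with a hypothetical state encoding:

  // Auto-generated reachability checks: the FV tool either exhibits a trace
  // reaching each state, or proves it unreachable (i.e. dead code).
  typedef enum logic [1:0] {IDLE, BUSY, RETRY, DONE} state_e;
  state_e state;

  c_busy:  cover property (@(posedge clk) state == BUSY);
  c_retry: cover property (@(posedge clk) state == RETRY);
  c_done:  cover property (@(posedge clk) state == DONE);

No testcase has to be written to hit these targets, which is what makes this the simplest use model of FAV.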
This automated usage is also applicable at the SOC level, for the verification of chip-level connectivities [7], e.g. IP-connectivity, pin-muxing, boundary pads, DFT connectivity, etc. Today, the common practice is to use simulations for this purpose, but activating the different paths requires a huge number of testcases and their setup, when all that is required is to validate that the connections of the IP blocks have been made properly. At first glance, FV is not applicable at the chip level. But if all the IPs of the chip are black-boxed, then all that remains are the connections to be verified (which have a minimal number of state elements), and FV can be easily applied there. The generation of the assertions from the specification of the chip connectivity can be automated, thereby reducing human effort further, and the whole process can be mostly reduced to a push-button exercise.
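For instance, if the chip connectivity specification is maintained as a table of (source, destination, condition) entries, each row can be expanded mechanically into an assertion. The following SystemVerilog sketch uses hypothetical hierarchical paths and a hypothetical mux-select condition:

  // Auto-generated connectivity assertions, one per specification row.
  // With all IPs black-boxed, each proof involves almost no state elements.
  a_conn_uart_tx: assert property (@(posedge clk)
      top.pad_ring.uart_tx_pad == top.uart0.tx);

  // A conditional connection through the pin-mux:
  a_conn_gpio3: assert property (@(posedge clk)
      (top.iomux.sel_gpio3 == 2'b01) |->
      (top.pad_ring.gpio3_pad == top.spi0.mosi));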
Another automated application is a bug-hunting mode, using semi-formal technology [8]. After completing ABS or FAV to the extent feasible, the DUT can be thrown open to the semi-formal tool, and the tool run for days or weeks to find more bugs. Of course, there has to be a measure of the progress of this verification, giving a sense of confidence that the tool is increasing its coverage and is not looping in a local region.

Deep: The verification team writes end-to-end assertions, and attempts to catch deep corner case bugs (e.g. the detection of hard-to-locate silicon bugs, or performance bugs in a subsystem). The skills required for this effort are exceptional and have to be nurtured with care within the organization, through the development of small central teams.

Of these different application scenarios, the automated techniques have the highest chance of becoming mainstream usage, followed by the designer usage (because that requires a change in mindset), and finally the deep and difficult usage. It is likely that most verification engineers are not going to become FV experts who detect corner case bugs.

5. It is not only ROI

On the hard and long path to converting simulation users to FV, a typical response that one faces, after FV has caught a bug that simulations have not caught, is "So what? Simulations would have caught it in a few more days". Are there really any bugs that FV can catch but simulations cannot? Technically, no; it is just a matter of time before the right set of vectors is simulated. The trick is to find the right set of vectors as quickly as possible, and that is what the ROI of FV is all about. But in practice, it is extremely hard to convince a simulation team with this argument.

Most discussions on why FV adoption is not picking up in the industry focus on the ROI of FV. However, a comparison with the adoption rates of other new technologies [9] shows that most innovations diffuse at a disappointingly slow rate, at least in the eyes of the inventors and technologists who have created them. Reasons for slow adoption vary: the availability of other competing remedies, the change agent not being a prominent figure, compatibility with the existing solution, and vested interests.

In general, the problem of FV adoption is very similar to the problem with the adoption of preventive innovations – a class of innovations that have demonstrated a particularly slow rate of adoption. A preventive innovation is a new idea that an individual adopts now in order to lower the probability of some unwanted future event; examples in this category are the adoption of seat belts in cars, the usage of contraceptives, etc. Not only are the rewards of adoption delayed in time, but it cannot be proven (beyond some statistical data, for which there are as many proponents as opponents) whether it is actually essential. FV seems to fall into this category – it excels in finding corner case bugs, but those are rare anyway, and chances are that they will not be detected in the silicon even if they are not caught by simulation.
The knowledge-attitude-practice gap (KAP-gap, [9] page 176) in the diffusion of preventive innovations can sometimes be closed by a cue-to-action – an event, occurring at an opportune time, that crystallizes knowledge into action. Some cues-to-action occur naturally – the same reference reports that many people begin to use contraceptives after they experience a pregnancy scare. In other cases, a cue-to-action may be created by a change agency – some national family planning programs pay incentives to create potential adopters. Similarly, we have witnessed that in projects where a late corner case bug caused a delay in the chip's schedule or caused a respin, FV's advantages are acknowledged and the team is more willing to adopt FV in future projects. But the same "shock" may have to be witnessed by other teams before they themselves start using FV as aggressively.

6. Strategies for FV Deployment

The above discussions on the ROI of FV and the non-ROI problems of adoption lead to the following strategies that can be considered for effective mainstream usage of FV. They are presented in approximately the sequence in which they should be introduced to the project teams, so that the resistance faced is kept to a minimum.

1. Encourage the integration of assertions into simulation, through ABS. This will enable engineers (both verification engineers and designers) to start the habit of writing assertions in their code, and using them without changing the current work flows. These assertions may not be coded efficiently enough for the application of FAV, but at the least this will remove the first level of resistance to the paradigm shift.

2. Use automatically generated assertions to integrate FAV into current simulation flows, to get quick coverage reports on dead code, state machine reachability, etc. This will introduce the teams to the usage of FAV tools in their simplest use models, and will encourage more usage and experimentation.

3. Besides writing assertions for simulation, also make available pre-packaged plug-and-play assertion packages for standard interfaces and protocols (a sketch follows this list). This will not involve any significant departure from the simulation flows, and will also introduce the teams to running the FAV tools to verify assertions.

4. Automatically generate assertions on chip-level connectivities, through flow automation. This will also not require the verification teams to write assertions, but only to use them to reduce verification cycle time.

5. Along with the adoption of the easy steps, it would be wise to create and sustain a central team of FV experts in the organization, to help with critical verification problems. This is not mainstream usage in the sense that the scope of application is limited to a small set of people, but the impact on the organization is significant. But, as mentioned earlier, this requires a different management practice.

6. Recognizing that simulation engineers will probably never adopt the rigors of writing efficient assertions and applying FV in full detail, at this point consider shifting the target base from verification to design. Ask designers to write white box assertions, and run the FAV and SEC tools themselves. Besides creating a set of white box assertions which can be used in ABS later on, this will also create a paradigm shift in the work flow – shifting the onus of module level verification to the designers. In turn, this paradigm shift will make designers aware of the verification problem, thus enabling the development of design-for-verification techniques and other new verification methodologies. This process improvement may have the highest impact on silicon quality (a la Deming [2]) – which is the real goal of verification (not coverage metrics).

7. The last step is a catch-all condition – after trying the other approaches, also try the semi-formal techniques. There are no guarantees (at least with today's tools) of finding all the remaining bugs or of proving the absence of bugs, but it is at least worth a try in critical situations.
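As an illustration of strategy 3, an assertion IP for a simple valid/ready style handshake might package properties such as the following. The interface and its rules here are a generic example of our own, not any specific standard's published package:

  // Fragment of a plug-and-play assertion IP for a valid/ready handshake,
  // attached to an interface instance (e.g. via SystemVerilog 'bind').
  module vr_checker (input logic clk, rst_n,
                     input logic valid, ready,
                     input logic [31:0] data);
    // Once asserted, valid must remain high until the transfer completes.
    a_valid_stable: assert property (@(posedge clk) disable iff (!rst_n)
        valid && !ready |=> valid);
    // The data must be held steady while the transfer is pending.
    a_data_stable: assert property (@(posedge clk) disable iff (!rst_n)
        valid && !ready |=> $stable(data));
  endmodule

The same package would typically also carry cover properties (e.g. back-to-back transfers), so that one deliverable serves both ABS and FAV.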
The above strategies have been summarized in Table 1, where effort and impact (on the project schedule) are marked qualitatively as Low, Medium and High. The steps with Low effort can easily be made mainstream activities; the step with High effort is less likely to see mainstream usage. Note that these strategies do not require the advancement of technology beyond what is currently available (e.g. the integration of FV and simulation through unified coverage metrics may create a far more mainstream adoption of FV, but that technology is not available today).

To accelerate the above steps of adoption, additional management practices may be used, taken from the lessons learnt from the diffusion of other innovations [9]. We have witnessed the validity and usefulness of these lessons in our experience of FV adoption. These include (but are not limited to) the following:

8. Opinion leaders of a social system (and a semiconductor organization is such a system) are influential people who have earned social accessibility based on their technical competence (not necessarily in the same domain as that of the new innovation) and who are recognized to conform to the system's norms. They can lead in the spread of new ideas, or they can head an active opposition. Hence, it is vital to identify such organizational opinion leaders, and garner their support for the spread of FV usage.

9. Diffusion is essentially a process of communication, and channels are necessary for communication. Sharing knowledge (e.g. what works well and what does not work well for FV) is very critical for educating people on the latest developments and advances in this area, and this has to be customized for the specific organization. Sharing best practices on a common website (as in [10], while ensuring that it is not biased towards any specific tool vendor) will help to spread the knowledge across multiple teams.

10. Generally, the fastest rate of adoption of innovations stems from authority decisions (depending, of course, on how innovative the authorities are). If management is convinced of the need for attempting FV and making progress through experimentation, it can plan to reduce the number of simulation engineers in a project and replace them with FV engineers in the same project. Thus the overall verification resource bandwidth remains the same, but a part of it gets allocated to FV. This can help in getting early results in the context of specific projects and thereby make a case for more proactive usage of FV in subsequent projects. But this is to be handled delicately – early victories through management fiat sometimes turn into subsequent failure.
11. An innovation diffuses more rapidly if it can be customized to the specific needs of the social system. Integration into the flows used in the organization, such that some segments of usage become mostly push-button (as in the checks for SOC connectivity), or the identification of clear rules for applying certain classes of checks (as for protocol checking for the dominant protocols used in the organization), will help in getting more users to start using the FV tools quickly.

12. Technology clusters, consisting of one or more distinguishable elements of technology that are perceived as being closely interrelated, are usually adopted more rapidly together. FV is a collection of technologies (FAV, SEC, semi-formal technologies, formal coverage analysis, etc), and together they have a higher chance of adoption than by treating each as a separate entity. We have witnessed that those who are interested in FAV are also the ones who are willing to experiment with SEC – which is a relatively new technology. Hence, a collective approach should be taken to encourage their adoption.

7. Conclusion

In this paper, we have presented lessons learnt, based on our experience, as a dozen strategies for FV deployment. We have deliberately avoided citing any specific project example, because data from one example can be quite misleading without an understanding of its full context (completeness of the specification, IP delivery procedure, verification expertise of the team, FV expertise and training undertaken, etc). But we have attempted to keep this paper enumerative and objective. (See [4] for detailed methodologies and examples of using FV.)

Taking a long view of the matter, the core problems with the adoption of FV are: (1) the naive understanding that it yields complete verification, and (2) the word "verification" in its name. FV seems like an adventure sport (difficult and unpredictable), and many people give up on it for this reason, only because their expectations of its ROI are wrong. Understanding the real ROI of FV allows proper usage of FV, and also opens up opportunities for the development of advanced tools in this segment. It is also necessary to understand the scope of simulation and to know what to expect from it. Hence we have spent considerable effort in this paper on drawing analogies with statistical testing principles, which have a strong theoretical foundation and a successful history.

Secondly, we must realize that FV is not really an ideal task for verification engineers, given the expertise and scope of today's verification engineers. Also, applying FV in isolation (i.e. post-design) creates many of the capacity barriers artificially. What is needed is more usage of FV by the designers, and at a higher level of abstraction than RTL. This paradigm shift has been happening (albeit slowly) in several industries in recent years.

But that is just the beginning of process improvements. The growing complexity of SOCs (including hardware-software interactions) may soon overwhelm the technical advances in hardware verification. What we really need is a dramatic improvement of our processes: to be able to create correct-by-construction designs in the first place, to verify the specification against the intent (not just the implementation against the written specification), and to reduce inspection to its proper role of indicating that we are on the right track. Formal methods of specification, modeling, analysis and design are necessary for this – not just "formal verification". What this paper really establishes is that FV is a set of technologies for quality improvement and that, understanding the true meaning of Quality, process improvement is the way towards that goal – not inspection alone. These processes include tools, flows, and people.

Table 1: Strategies for FV Usage

     Description of Step                                 Effort   Impact
  1  Assertions in simulations (ABS)                     M        M
  2  Auto-generated assertions to augment
     simulations (dead code, reachability)               L        M
  3  Pre-packaged assertion IPs                          L        H
  4  Auto-generated assertions for SOC connectivity      L        H
  5  Deep FV, with central team                          H        H
  6  Designers using FV (FAV, SEC)                       M        H
  7  Semi-formal bug-hunting*                            L        L

  L = Low, M = Medium, H = High
  * Low effort is after applying ABS and / or FV. Low impact is because there is no metric of progress.

References

[1] J. Rabaey, "Design without borders: A tribute to the legacy of A. Richard Newton", keynote at DAC, June 2007.
[2] W. E. Deming, "Out of the Crisis", MIT Press, 1982.
[3] K. Shimizu, et al., "A specification methodology by a collection of compact properties as applied to the Intel Itanium processor bus protocol", CHARME, Scotland, 2001.
[4] A. Jain, et al., "Formal Assertion based Verification in Industrial Setting", tutorial at DAC 2007; foils available at http://www.facweb.iitkgp.ernet.in/~pallab/formalpub.html
[5] H. Foster, "Unifying traditional and formal verification through property specification", Designing Correct Circuits (DCC), Grenoble, April 2002.
[6] A. Mathur, V. Krishnaswamy, "Design for verification in system level models and RTL", DAC, June 2007.
[7] S. Roy, "Top Level SOC Interconnectivity Verification using Formal Techniques", Microprocessor Test and Verification Workshop, Austin, December 2007.
[8] P.-H. Ho, et al., "Smart simulation using collaborative formal and simulation engines", ICCAD, November 2000.
[9] E. M. Rogers, "Diffusion of Innovations", 5th edition, Free Press, New York, 2003.
[10] "Formal verification patterns", http://www.oskitech.com/wiki/index.php?title=Main_Page
