PVT Corners
Introduction
In today's highly competitive semiconductor industry, profitability hinges on competitive design performance, high yield, and rapid time to market. This is becoming even more pronounced as leading-edge designs push into smaller process nodes. Because of the common use of foundries such as TSMC and GlobalFoundries, silicon technology is no longer a differentiating factor; advantages in performance, power, and area will be key to market success. Custom integrated circuit design is key to gaining such differentiation. Beyond analog, custom IC design includes RF, high-speed I/O, standard cell digital library, and memory design. Global and local random process variation, environmental variation, and proximity variation all affect custom circuit performance, and designers must manage this under tight performance, power, yield, and time constraints. Depending on the type of variation-design problem, designers face numerous dilemmas, each costing design iterations and time to market. Finally, the designer could simply add a guardband to each device, with a heavy area penalty. There is a pattern here: regardless of the type of variation problem, designers face similar dilemmas. They could use an accurate model of variation, but suffer slow design iterations. Alternatively, they could loosen the model for fast design iterations, but risk design failures (under-design) or compromised performance, power, and area (over-design). Figure 1 illustrates.
Figure 1: If variation is handled poorly, the design may compromise performance, power, and area due to excessive safety margin (over-design), or yield due to variation (under-design).

Figure 2: Corner-based design flow. This flow applies to PVT, Monte Carlo, or High-Sigma variation problems. The user can combine the flows for different variation problems: for example, do a first-cut design using just a single nominal corner, then add PVT corners and design against them, and finally add Monte Carlo statistical corners. At the end of the flow, the designer can use a proximity-aware tool to compute appropriate guardbands.
This paper asks: can this dilemma be resolved, so that designers can have rapid design iterations, yet use an accurate model of variation, avoiding both over- and under-design? The answer lies in using appropriately chosen design-specific corners: a small representative set that simulates quickly, yet captures the bounds of the performance distribution.
Design-Specific Corners
Figure 2 shows a methodology for variation-aware design using design-specific corners. The general idea is to do rapid design iterations against a small set of design-specific corners, then use verification to confirm with confidence that the design is good. In the first step of the flow, the circuit is verified using a verification tool. If the design is acceptable, the flow is complete. If not, representative corners are extracted, and the loop is repeated. For each type of variation problem, there is a specific tool for verification and for corner extraction: PVT, Monte Carlo statistical, and high-sigma statistical.
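The verify/extract-corners loop of this flow can be sketched in a few lines. This is a toy illustration only: the `simulate`, `verify`, and `extract_corners` functions are hypothetical stand-ins (not the API of any real tool), and "corners" are abstracted to single severity numbers.

```python
# Toy sketch of the corner-based flow of Figure 2. All names and the
# one-number "corner" abstraction are hypothetical, for illustration only.

def simulate(design, corner):
    # Stand-in for a SPICE run: performance = design strength - corner severity.
    return design - corner

def verify(design, corners, spec):
    # The design passes if its worst-case performance across corners meets spec.
    return min(simulate(design, c) for c in corners) >= spec

def extract_corners(all_corners, design, k=3):
    # Keep the k corners where the design is closest to failing.
    return sorted(all_corners, key=lambda c: simulate(design, c))[:k]

all_corners = [0.1, 0.5, 1.2, 2.0, 3.1]  # abstract corner severities
spec, design = 0.0, 1.0

# Outer loop: full verification. Inner loop: rapid design iterations
# against a small set of representative (design-specific) corners.
while not verify(design, all_corners, spec):
    corners = extract_corners(all_corners, design)
    while not verify(design, corners, spec):
        design += 0.5  # stand-in for one design-improvement iteration

print(f"design passes all corners with margin {design - max(all_corners):.1f}")
```

The key property of the flow is visible even in this sketch: the inner loop simulates only a handful of corners per iteration, while the expensive full verification runs only once per outer pass.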
This keeps guardbands in check (50%+ reduction in area) and avoids over-design; by giving visibility to proximity variation earlier in the flow, under-design is avoided (no more catastrophic failures due to proximity).
Approaches in High-Sigma Verification

For the overall chip to have good yield, the repeated high-sigma block must have extremely high yield (low probability of failure, pf). Consider a chip with a target yield of 99.0% (pf = 0.01) containing 1 million bitcells, as in Figure 3. To achieve the target chip yield, each bitcell needs a yield of 99.999999% (pf = 1.0e-8)¹. Let us consider how one might compute the yield of such a circuit.

1. Plain Monte Carlo: One approach would be to use Monte Carlo sampling. However, this would require far too many simulations: a circuit with 99.9999% yield would need, on average, 1 million samples from the true distribution just to observe a single failure against circuit specifications. This is clearly not feasible.

2. Monte Carlo + Density Model: Another approach is to do a moderate number of simulations (say 100 or 1000), fit a density model to the data, and compute yield as the area under the density model where specifications are met. In common practice, a Gaussian density model is built by computing the mean and standard deviation from the data. The issue here is that distributions of circuit performances are often not Gaussian in practice. A Gaussian distribution only arises when there is a linear mapping from process variables to performance; for circuits, the mapping can be highly nonlinear. One could fit a non-Gaussian density model, e.g. a kernel density model. However, because there are no samples at the extreme tails (e.g. 4, 5, 6 sigma), the model is useless for making predictions at those tails.

3. Manual Model: A third approach is to manually construct analytical models relating process variation to performance and yield. However, this is highly time-consuming, is only valid for the specific circuit and process, and may have accuracy issues. A change to the circuit or process renders the model obsolete.

None of these approaches is adequate. Clearly, there is a need to quickly and accurately estimate yield for high-yield circuits. Furthermore, in the case when yield needs to be improved, there is no means to do rapid iterations on high-yield circuit designs. One might consider using Quasi-Monte Carlo or low-discrepancy sampling techniques [7] to generate Monte Carlo samples with better spread, which in turn reduces the variance of yield estimates. However, that does not solve the core problem, because one still needs roughly 1/pf samples (e.g. 1 million) to observe, on average, one failure. We need a means to handle rare-event simulation.
¹ For simplicity of description, this assumes that we have just local (not global) process variations, and there is no redundancy, error correction, etc. Note that the algorithm described herein accounts for both local and global variations.
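The Monte Carlo and Gaussian-density pitfalls can be made concrete with a small self-contained sketch. The quadratic performance mapping below is made up for illustration (not any real circuit): because the mapping is nonlinear, the performance distribution is non-Gaussian, and a Gaussian fit extrapolates a far smaller failure probability than Monte Carlo actually observes.

```python
import random
import statistics

random.seed(0)

# Sanity check of the bitcell arithmetic above: 1M bitcells, each with
# pf = 1e-8, give a chip yield of (1 - 1e-8)^1e6 ~ 0.99005, i.e. ~99.0%.
chip_yield = (1.0 - 1.0e-8) ** 1_000_000

# Hypothetical performance: a quadratic (nonlinear) mapping from one
# Gaussian process variable, so the performance itself is NOT Gaussian.
def performance(x):
    return x * x

SPEC = 9.0  # fail when performance > 9, i.e. |x| > 3 sigma (~0.27% of samples)

# Approach 1, plain Monte Carlo: needs on the order of 1/pf samples.
n = 100_000
samples = [performance(random.gauss(0.0, 1.0)) for _ in range(n)]
pf_mc = sum(s > SPEC for s in samples) / n

# Approach 2, Monte Carlo + Gaussian density model: fit mean and standard
# deviation to a small run, then extrapolate into the tail.
small = samples[:1000]
mu, sigma = statistics.mean(small), statistics.stdev(small)
pf_gauss = 1.0 - statistics.NormalDist(mu, sigma).cdf(SPEC)

print(f"Monte Carlo tail estimate:    {pf_mc:.5f}")   # near the true 0.0027
print(f"Gaussian-model tail estimate: {pf_gauss:.2e}")
```

Here the Gaussian model underestimates the failure probability by several orders of magnitude, which is exactly the under-design risk described above.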
Solido Design Automation, Inc. 111 North Market Street, Suite 300 San Jose, CA 95113 info@solidodesign.com +1 408 332 5811 http://www.solidodesign.com
Optimal Importance Sampling

A normal Monte Carlo run draws process points directly from the process variation distribution. The problem is that far too many samples are needed to observe (rare-event) failures in the design. A key insight is that we do not need to draw samples directly from that distribution. Instead, we can create samples that fail more often, so that decent information is available at the tails. Figure 4 illustrates.

Figure 4: Sampling at the rare-event tails.

In Importance Sampling (IS) [4][5], the sampling distribution is adaptively tilted towards the rare infeasible events. When estimating yield, each sample is weighted according to its density on the sampling distribution compared to its density on the true distribution. Optimal Importance Sampling (OptIS) casts the problem of finding the sampling distribution as an optimization problem, then applies an appropriate constrained-optimization solver. The OptIS approach proceeds as follows:

1. Create a new sampling distribution such that a greater proportion of samples are failures, by solving a specially formulated optimization problem.
2. Draw samples from the new distribution, simulate them, and check whether they meet specifications.
3. Estimate yield by mathematically unbiasing the samples, according to importance sampling formulae [4].
4. Compute yield accuracy, using a statistical technique called bootstrapping [3].
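The weighting in steps 1-3 can be illustrated with a one-variable toy problem. Note the shifted Gaussian below is hand-picked for illustration; OptIS itself would choose the sampling distribution by solving an optimization problem, as step 1 describes.

```python
import math
import random

random.seed(1)

# Rare event: process variable x ~ N(0,1); the "circuit" fails when
# x > 4 (true pf = 1 - Phi(4), about 3.2e-5: far too rare for plain MC).
THRESH = 4.0

def normal_pdf(x, mu):
    # Density of N(mu, 1) at x.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Sampling distribution tilted towards the failure region: N(4, 1).
# Each failing sample is unbiased by the likelihood ratio p(x)/q(x)
# between the true density p and the sampling density q.
n = 20_000
acc = 0.0
for _ in range(n):
    x = random.gauss(THRESH, 1.0)                          # draw from q
    if x > THRESH:                                         # "simulate": check spec
        acc += normal_pdf(x, 0.0) / normal_pdf(x, THRESH)  # weight by p/q
pf_is = acc / n

print(f"importance-sampling pf estimate: {pf_is:.2e}")  # close to 3.2e-5
```

With the tilted distribution, roughly half of the 20,000 samples land in the failure region, so the tail is well characterized with a sample count that plain Monte Carlo could never afford for an event this rare.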
To illustrate that Optimal Importance Sampling returns yield estimates as accurate as a standard Monte Carlo run, Table 1 compares Monte Carlo and OptIS yield estimation results across memory, digital, and analog circuits. All circuits in the table use modern geometries of 45 or 65 nm, with accurate process variation models such as [1]. We see that, in all cases, the yield estimates for OptIS and Monte Carlo agree (i.e. their yield confidence bounds overlap).
Circuit          # Process Vars.
Bitcell          55
Sense amp        125
Flip flop        195
Current mirror   22
GMC              1468
Low noise amp    234
Folded opamp     558
Comparator       639
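The yield confidence bounds compared in Table 1 (and computed in step 4 of OptIS) can be obtained by bootstrapping [3]. Below is a minimal sketch on made-up pass/fail data from a plain Monte Carlo run; the OptIS case would bootstrap the importance-weighted samples instead.

```python
import random
import statistics

random.seed(2)

# Made-up Monte Carlo record: 1000 samples, 12 failures (yield ~98.8%).
results = [0] * 12 + [1] * 988

# Bootstrap: resample the data with replacement many times, re-estimate
# yield each time, and take percentiles as a confidence interval.
boot = []
for _ in range(2000):
    resample = random.choices(results, k=len(results))
    boot.append(sum(resample) / len(resample))
boot.sort()
lo, hi = boot[50], boot[1949]  # ~95% interval (2.5th..97.5th percentile)

print(f"yield = {statistics.mean(results):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Two yield estimates "agree" in the sense of Table 1 when intervals computed this way overlap.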
We can assess the speedup of Optimal Importance Sampling. The Monte Carlo bitcell yield was computed with 1M samples, sense amp 30K samples, and flip flop 90K samples. The speedup is most dramatic in the highest-yield circuits: OptIS is 1M/5K = 200x faster on the bitcell, 6x faster on the sense amp, and 18x faster on the flip flop.
The agreement is especially close in the regions around the specifications, where Optimal Importance Sampling focused its samples. We can learn much from these curves: information about the tails that we would not normally get from a limited-sample Monte Carlo run. In the sense amp, the linearity of the curves indicates that the sense amp's offset is Gaussian-distributed even into the tails, and that the mapping from process variables to offset is linear. In contrast, the bitcell's vout curve bends towards the bottom left, indicating a strong nonlinearity. Such information is valuable for gaining insight into the nature of the tails of the distribution, and for understanding the tradeoff between specifications and yield.
Return On Investment
When design teams and managers consider which advanced technologies to incorporate in their flows, their metrics include quality of results (QoR), use model, ease of adoption, and cost. Optimal Importance Sampling (OptIS) technology
addresses each of these metrics. Designers can statistically verify their designs with SPICE accuracy in a short amount of time. Compared to plain Monte Carlo simulation, OptIS is orders of magnitude faster for estimating yields of high-sigma circuits. Compared to a Monte Carlo + Gaussian or a manual-model approach, OptIS is far more accurate, as it does not make simplifying assumptions. Additional value arises when the user extracts design-specific corners from an OptIS run. These corners highlight the circuit failure cases that must be fixed. By designing against just these corners, the designer can perform rapid design iterations to improve the design.
These tools fix the design with optimal performance, power, and area, providing closed-loop verification of changes. Compared to status-quo approaches, they are more scalable, and up to 10x+ faster with no compromise in accuracy. Solido Variation Designer uses foundry models, integrates into existing custom IC design flows, and is simulator-agnostic, supporting the Cadence Spectre, Synopsys HSPICE, Mentor Eldo, and Berkeley Design Automation AFS simulators.
References
[1] P.G. Drennan, C.C. McAndrew, "Understanding MOSFET Mismatch for Analog Design," IEEE J. Solid-State Circuits, March 2003.
[2] P.G. Drennan, M.L. Kniffin, D.R. Locascio, "Implications of Proximity Effects for Analog Design," Proc. Custom Integrated Circuits Conference, 2006 (best invited paper).
[3] B. Efron, "Bootstrap Methods: Another Look at the Jackknife," The Annals of Statistics 7(1), 1979, pp. 1-26.
[4] T.C. Hesterberg, "Advances in Importance Sampling," Ph.D. dissertation, Statistics Dept., Stanford University, 1988.
[5] D.E. Hocevar, M.R. Lightner, T.N. Trick, "A Study of Variance Reduction Techniques for Estimating Circuit Yields," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems 2(3), July 1983, pp. 180-192.
[6] D. Montgomery, Design and Analysis of Experiments, 6th Ed., Wiley, 2007.
[7] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, Society for Industrial and Applied Mathematics, 1992.
Conclusion
Semiconductor profitability hinges on high yield, competitive design performance, and rapid time to market. For the designer, this translates to the need to manage diverse variations (global and local process variations, environmental variations, etc.) and reconcile yield with performance (power, speed, area, etc.), all under intense time pressure. A flow using design-specific corners enables designers to manage variation effectively, because the corners simulate quickly yet represent the performance bounds. An example application is high-sigma design, where failures are one in a million; previous approaches to verifying such designs were either extremely expensive or inaccurate. Optimal Importance Sampling (OptIS) allows the designer to quickly and accurately verify high-sigma designs. Furthermore, it enables rapid design iterations, via design-specific high-sigma corners.
Biographies
Trent McConaghy is co-founder and Chief Scientific Officer of Solido Design Automation Inc. He was a co-founder and Chief Scientist of Analog Design Automation Inc., which was acquired by Synopsys Inc. in 2004. He has a Ph.D. in Electrical Engineering from the Katholieke Universiteit Leuven, Belgium; his doctoral thesis won the international EDAA Outstanding Dissertation Award. He is author of the book Variation-Aware Analog Structural Synthesis: A Computational Intelligence Approach (Springer, 2009). He has authored approximately 30 journal papers, book chapters, and conference papers, and has about 20 patents granted or pending. He has given invited talks and tutorials at venues such as DAC, ICCAD, and the MIT AI Lab. His research interest is in the pragmatic application of statistical machine learning and computational intelligence to variation-aware design.

Patrick Drennan is Chief Technology Officer of Solido Design Automation Inc. Prior to joining Solido, Patrick was a Distinguished Member of the Technical Staff at Freescale Semiconductor, where he was employed for over 14 years. Patrick co-created the backwards propagation of variance (BPV) method for statistical characterization. His mismatch model earned the Best Regular Paper award at CICC 2002. He was the first to describe the impact of shallow trench isolation (STI) and the well proximity effect (WPE) on design, demonstrating that the WPE produces a graded-channel MOSFET. More importantly, he showed the catastrophic impact these unforeseen phenomena can have on circuit design. For this work, he received the Best Invited Paper award at CICC 2006. Patrick received B.S. and M.S. degrees in engineering from Rochester Institute of Technology, and a Ph.D. in electrical engineering from Arizona State University.

About Solido

Solido Variation Designer is a comprehensive set of tools for variation-aware custom IC design. It allows users to handle PVT, Monte Carlo, High-Sigma, and Proximity problems. For each problem type, Solido offers the designer easy-to-use tools to analyze variation impact on design specifications, identify transistor sensitivities to variation, and fix the design to meet specifications. To view an online demo, visit www.solidodesign.com/page/request-a-demo/.