
Variation-Aware Custom IC Design


Improving PVT and Monte Carlo Analysis for Design Performance and Parametric Yield
Trent McConaghy, Co-founder and Chief Scientist, Solido Design Automation Inc.
Patrick Drennan, Chief Technology Officer, Solido Design Automation Inc.

Abstract -- Variation has become a critical concern in modern process geometries; managing it properly is key to designing custom circuits with competitive performance, power, and area. Given that local and global process variation, environmental variation, and proximity variation can differ among circuits, we can classify variation design problems into Process-Voltage-Temperature (PVT), Monte Carlo (MC) statistical, high-sigma statistical, or proximity-aware design. This paper describes how a flow using design-specific corners manages variation issues in a unified fashion. To enable corner-based design, the key ingredients are corner extraction and design verification tools. This paper describes such tools for PVT, Monte Carlo, high-sigma, and proximity design, and how appropriate technology can make these tools fast, accurate, and scalable. Finally, as an example, this paper focuses on high-sigma problems, showcasing Optimal Importance Sampling to provide the accuracy of an extremely large Monte Carlo run in a fraction of the runtime.

Introduction
In today's highly competitive semiconductor industry, profitability hinges on competitive design performance, high yield, and rapid time to market. This is becoming even more pronounced as leading-edge designs push into smaller process nodes. Due to the common use of foundries like TSMC and Global Foundries, silicon technology is no longer a differentiating factor; advantages in performance, power, and area will be key to market success. Custom integrated circuit design is key to gaining such differentiation. Beyond analog, custom IC design includes RF, high-speed I/O, standard cell digital library, and memory design. Global and local random process variation, environmental variation, and proximity variation affect custom circuit performance, and designers must manage them under tight performance, power, yield, and time constraints. Depending on the type of variation-design problem, designers face numerous challenges. We now explore these challenges for each problem type.

PVT. In some cases, a PVT-corners approach to variation may be enough, in which global F/S corners model process variation. Even that, however, can be cumbersome: running a comprehensive range of PVT corners can take days. Designers may try to overcome this by guessing which corner combinations are most appropriate, which increases design risk. They may try to reduce risk by adding margins, at the expense of performance, power, or area.

Monte Carlo Statistical. In many design problems, global F/S corners are not accurate enough; statistical models of the global and local process distribution are necessary. Designers could use Monte Carlo sampling, but design iterations against Monte Carlo sampling are time-consuming (days, not hours). Alternatively, designers could use PVT corners, which increases design risk.

High-Sigma Statistical. Circuits like bitcells, sense amps, and standard cell digital library circuits are replicated thousands or millions of times, and therefore must have a very low failure rate (e.g. 1 in a million). Designers could use a big Monte Carlo run, but that is extremely time-consuming (e.g. 1 million samples just to get 1 failure). Alternatively, designers could use a small Monte Carlo run and extrapolate with density estimation, but that will be inaccurate because there will be no information at the tails. Or, designers could develop an analytical model, but it is time-consuming to create and verify, and is topology-specific.


Proximity. Designers may cope with proximity effects [2] by deferring concern until the layout is done, then measuring proximity effects and iterating with the design until satisfied. This is time-consuming because it forces long iterations between front-end and back-end design. Alternatively, they could ignore proximity and hope the effects do not cause catastrophic failure (which risks expensive foundry iterations and lost time to market). Finally, the designer could simply add a guardband to each device, with a heavy area penalty.

There is a pattern here: regardless of variation problem type, designers face similar dilemmas. They could use an accurate model but have slow design iterations. Alternatively, they could loosen the model for fast design iterations, but that risks design failures (under-design) or compromises performance / power / area (over-design). Figure 1 illustrates.

Figure 1: If variation is poorly handled, the design may compromise performance / power / area due to excessive safety margins (over-design), or yield due to variation (under-design).

This paper asks: can this dilemma be resolved, so that designers can have rapid design iterations, yet use an accurate model of variation so that over- and under-design are avoided? The answer lies in using appropriately chosen design-specific corners: a small, representative set that simulates quickly, yet captures the bounds of the performance distribution.

Design-Specific Corners
Figure 2 shows a methodology for variation-aware design using design-specific corners. The general idea is to do rapid design iterations against a small set of design-specific corners, yet use verification to confirm with confidence that the design is good. In the first step of the flow, the circuit is verified using a verification tool. If the design is acceptable, the flow is complete. If not, representative corners are extracted, and the loop is repeated. For each type of variation problem, there is a specific tool for verification and for corner extraction: PVT, Monte Carlo Statistical, and High-Sigma Statistical. At the end of the flow, the designer can use a proximity-aware tool to compute appropriate guardbands.

Figure 2: Corner-based design flow. This flow applies to PVT, Monte Carlo, or High-Sigma variation problems. The user can combine the flows for different variation problems. For example, the user could do a first-cut design using just a single nominal corner, then add some PVT corners and design against them, and finally add Monte Carlo Statistical corners.
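The loop in Figure 2 can be summarized with a short Python sketch. All of the function names here (verify, extract_corners, improve_against_corners) are hypothetical placeholders for the verification, corner-extraction, and design steps; the sketch only captures the shape of the flow, not any particular tool.

def corner_based_flow(design, verify, extract_corners, improve_against_corners,
                      max_iterations=10):
    """Iterate: verify; if failing, extract corners, redesign against them, re-verify."""
    corners = []                          # design-specific corners accumulated so far
    for _ in range(max_iterations):
        report = verify(design)           # PVT, Monte Carlo, or high-sigma verification
        if report.passes:
            return design, corners        # design meets its specs with confidence
        # Extract a small, representative set of worst-case points to design against.
        corners.extend(extract_corners(report))
        # Inner design iterations simulate only these corners, so they stay fast.
        design = improve_against_corners(design, corners)
    raise RuntimeError("design did not converge within the allowed iterations")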

Enabling Technologies for Design-Specific Corners


Naïve implementations of verification and corner extraction tools are not adequate; these tools must be fast, accurate, and scalable. Fortunately, appropriate technologies now exist to achieve these goals. The key to fast, accurate, and scalable PVT verification and corner extraction is evolutionary Design of Experiments with response surface modeling [6], enabling speedups of 2x to 50x over naïve approaches. Monte Carlo Statistical verification can exploit Optimal Spread Sampling and density estimation for a 1.5x to 10x speedup over naïve Monte Carlo, at no loss in accuracy. Monte Carlo Statistical corner extraction extracts corners at pre-specified target yields, leveraging density estimation, response surface modeling, and nonlinear programming for corners that are 5x-10x more accurate than naïve Monte Carlo corners. For High-Sigma Statistical problems, Optimal Importance Sampling (OptIS) gives orders-of-magnitude speedup at no loss in accuracy; the next section elaborates. For Proximity problems, the key is adaptive sampling combined with sensitivity analysis. By placing guardbands only where they are needed, area is kept in check (a 50%+ reduction in area) and over-design is avoided; by giving visibility to proximity variation earlier in the flow, under-design is avoided (no more catastrophic failures due to proximity).
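Solido's Optimal Spread Sampling itself is proprietary, but the general idea of well-spread sample points can be illustrated with a minimal Python sketch that maps a low-discrepancy Halton sequence through the inverse normal CDF to obtain evenly spread "process points". This is purely illustrative and is not the algorithm used in the tools described here.

from statistics import NormalDist

def halton(index, base):
    """Radical-inverse (Halton) value in [0, 1) for a given index and prime base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def spread_normal_samples(n, bases=(2, 3)):
    """Low-discrepancy points mapped to independent standard-normal process variables."""
    inv = NormalDist().inv_cdf
    # Start at index 1: index 0 would give u = 0, where inv_cdf is undefined.
    return [[inv(halton(i, b)) for b in bases] for i in range(1, n + 1)]

samples = spread_normal_samples(1000)      # 1000 well-spread 2-D sample points

Because such points cover the distribution more evenly than independent random draws, estimates converge with fewer simulations, which is the spirit of the speedups quoted above.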




Case Study: High-Sigma Circuits


This section presents a case study of one of the variation problem types: high-sigma circuits. High-sigma circuits include memory elements and highly replicated digital blocks. High-sigma circuit problems not only need accurate statistical modeling, but also deal with statistically rare events, because the circuit blocks are repeated thousands, millions, or even billions of times on a die. Each block must therefore have a very low probability of failure (e.g. 1 in a million), which would require extremely large Monte Carlo runs to verify (e.g. 1 million samples just to get 1 failure).

Status Quo Verification Approaches in High-Sigma
For the overall chip to have good yield, the repeated high-sigma block must have extremely high yield (low probability of failure, pf). Let us consider a chip with a target yield of 99.0% (pf = 0.01) containing 1 million bitcells, as in Figure 3. To achieve the target chip yield, each bitcell needs a yield of 99.999999% (pf = 1.0e-8)¹.
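As a quick sanity check, the yield arithmetic above can be reproduced in a few lines of Python. This is only an illustrative calculation using the figures from the text (1 million identical, independent bitcells and a 99% chip yield target); it is not part of any tool described here.

# Reproduce the yield arithmetic above for 1 million independent, identical bitcells.
n_cells = 1_000_000
chip_yield_target = 0.99                           # chip pf = 0.01

# Per-cell yield y must satisfy y ** n_cells >= chip_yield_target.
cell_yield_needed = chip_yield_target ** (1.0 / n_cells)
cell_pf = 1.0 - cell_yield_needed
print(f"per-bitcell yield needed: {cell_yield_needed:.8%} (pf about {cell_pf:.1e})")

# Expected number of plain Monte Carlo samples per observed failure is roughly 1 / pf.
print(f"samples per failure at 99.9999% yield: {1.0 / 1e-6:,.0f}")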

Let us consider how one might compute the yield of such a circuit.

1. Plain Monte Carlo: One approach would be to use Monte Carlo sampling. However, this would require far too many simulations: a circuit with 99.9999% yield would need, on average, 1 million samples from the true distribution just to observe a single failure against the circuit specifications. This is clearly not feasible.

2. Monte Carlo + Density Model: Another approach is to do a moderate number of simulations (say 100 or 1000), fit a density model to the data, and compute yield as the area under the density model where specifications are met. In common practice, a Gaussian density model is built by computing the mean and standard deviation from the data. The issue is that distributions of circuit performances are often not Gaussian in practice: a Gaussian distribution only arises when there is a linear mapping from process variables to performance, and for circuits the mapping can be highly nonlinear. One could instead fit a non-Gaussian density model, e.g. a kernel density model. However, because there are no samples at the extreme tails (e.g. 4, 5, or 6 sigma), the model is useless for making predictions at those extreme tails (the sketch after this list illustrates the problem).

3. Manual Model: A third approach is to manually construct analytical models relating process variation to performance and yield. However, such a model is highly time-consuming to construct, is only valid for the specific circuit and process, and may have accuracy issues. A change to the circuit or process renders the model obsolete.

Figure 3: Failures with a rare probability of occurrence.
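To make the tail-extrapolation problem of approach 2 concrete, the following minimal Python sketch fits a Gaussian to 1,000 samples of a deliberately nonlinear toy performance (the square of a single standard-normal process variable, an assumption made purely for illustration) and compares the fitted tail probability with the exact value. It is not a model of any real circuit.

import random
from statistics import NormalDist, fmean, stdev

random.seed(0)
# Toy nonlinear mapping from one "process variable" x to a performance metric x^2.
perf = [x * x for x in (random.gauss(0.0, 1.0) for _ in range(1000))]

spec = 9.0                                    # fail if performance > 9, i.e. |x| > 3
gauss_fit = NormalDist(fmean(perf), stdev(perf))
pf_gaussian = 1.0 - gauss_fit.cdf(spec)       # tail estimate from the Gaussian fit
pf_true = 2.0 * (1.0 - NormalDist().cdf(3.0)) # exact: P(|x| > 3) for a standard normal

print(f"Gaussian-fit pf: {pf_gaussian:.1e}   true pf: {pf_true:.1e}")
# The two values differ by several orders of magnitude: extrapolating a Gaussian
# fitted to the bulk of the data into the tails of a nonlinear performance fails badly.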

None of the previous approaches are adequate. Clearly, there is a need to quickly and accurately estimate yield for high-yield circuits. Furthermore, when yield needs to be improved, there is no means to do rapid iterations on high-yield circuit designs. One might consider using Quasi-Monte Carlo or low-discrepancy sampling techniques [7] to generate Monte Carlo samples with better spread, which in turn reduces the variance of yield estimates. However, that does not solve the core problem, because one still needs about 1/pf samples (e.g. 1 million) to get, on average, 1 failure. We need a means to handle rare-event simulation.


¹ For simplicity of description, this assumes that we have just local (not global) process variations, and that there is no redundancy, error correction, etc. Note that the algorithm described herein accounts for both local and global variations.



Rare Event Simulation and Optimal Importance Sampling
A normal Monte Carlo run draws process points directly from the process variation distribution. The problem is that far too many samples are needed to observe any (rare-event) failures in the design. A key insight is that we do not need to draw samples directly from that distribution. Instead, we can generate samples that land in the failure region more often, so that useful information is available at the tails. Figure 4 illustrates.

Figure 4: Sampling at the rare-event tails.
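The idea can be demonstrated with a minimal, self-contained Python sketch that is independent of Solido's OptIS algorithm itself: it estimates the rare tail probability P(x > 4) of a standard normal by drawing from a distribution shifted toward the failure region and re-weighting each failing sample by the ratio of true density to sampling density (the unbiasing step described in the next paragraphs).

import random
from statistics import NormalDist

random.seed(0)
true_dist = NormalDist(0.0, 1.0)      # the real process distribution
tilted = NormalDist(4.0, 1.0)         # sampling distribution shifted toward failures
threshold = 4.0                       # "failure" when x > 4; exact pf is about 3.2e-5

n = 5000
estimate = 0.0
for _ in range(n):
    x = random.gauss(tilted.mean, tilted.stdev)        # draw from the tilted distribution
    if x > threshold:                                  # failure indicator
        estimate += true_dist.pdf(x) / tilted.pdf(x)   # likelihood-ratio weight unbiases it
estimate /= n

print(f"importance-sampling estimate: {estimate:.2e}")
print(f"exact tail probability:       {1.0 - true_dist.cdf(threshold):.2e}")

With only 5,000 simulations, roughly half the samples land in the failure region, so the tail is well resolved; a plain Monte Carlo run would need on the order of 30,000 samples just to see a single failure at this probability.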

In Importance Sampling (IS) [4][5], the sampling distribution is adaptively tilted towards the rare infeasible events. When estimating yield, each sample is weighted by the ratio of its density under the true distribution to its density under the sampling distribution. Optimal Importance Sampling (OptIS) casts the problem of finding the sampling distribution as an optimization problem, then applies an appropriate constrained-optimization solver. The OptIS approach proceeds as follows:

1. Create a new sampling distribution such that a greater proportion of samples are failures, by solving a specially formulated optimization problem.
2. Draw samples from the new distribution, simulate them, and see if they meet specifications.
3. Estimate yield by mathematically unbiasing the samples, according to importance sampling formulae [4].
4. Compute yield accuracy, using a statistical technique called bootstrapping [3].
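Step 4 can be illustrated with a short, generic percentile-bootstrap sketch. The weights input is hypothetical: one importance-sampling weight per simulated sample, set to the likelihood ratio when the sample failed its specification and to 0.0 when it passed, so that the mean of the list estimates the failure probability. This is one standard way to bootstrap such an estimate, not necessarily the exact procedure used in OptIS.

import random

def bootstrap_pf_interval(weights, n_boot=2000, seed=0, lo=0.025, hi=0.975):
    """Percentile-bootstrap confidence interval for a failure-probability estimate."""
    rng = random.Random(seed)
    n = len(weights)
    estimates = []
    for _ in range(n_boot):
        # Resample the per-sample weights with replacement and recompute the estimate.
        resample = [weights[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum(resample) / n)
    estimates.sort()
    return estimates[int(lo * n_boot)], estimates[int(hi * n_boot)]

The corresponding yield interval is simply (1 - pf_upper, 1 - pf_lower), which is how confidence bounds like those in Table 1 can be reported.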

To illustrate that Optimal Importance Sampling returns yield estimates as accurate as a standard Monte Carlo run, Table 1 compares Monte Carlo and OptIS yield estimation results across memory, digital, and analog circuits. In the table, all circuits use modern geometries of 45 or 65 nm, with accurate process variation models such as [1]. We see that, for all cases, the yield estimates from OptIS and Monte Carlo agree (i.e. their yield confidence bounds overlap).

Table 1: Validation of Optimal Importance Sampling for Rare-Event Simulation

Circuit        | # Process Vars | MC Yield (10K-1M samples)        | OptIS Yield (approx. 5000 samples)
Bitcell        | 55             | 99.9988% (99.9979% - 99.99934%)  | 99.99892% (99.9970% - 99.99957%)
Sense amp      | 125            | 99.972% (99.9465% - 99.985%)     | 99.9550% (99.892% - 99.9817%)
Flip flop      | 195            | 99.9556% (99.9395% - 99.9674%)   | 99.9255% (99.773% - 99.9842%)
Current mirror | 22             | 99.580% (99.433% - 99.689%)      | 99.709% (99.569% - 99.808%)
GMC            | 1468           | 99.836% (99.519% - 99.944%)      | 99.831% (99.752% - 99.885%)
Low noise amp  | 234            | 99.950% (99.883% - 99.979%)      | 99.888% (99.760% - 99.9662%)
Folded opamp   | 558            | 99.221% (98.027% - 99.699%)      | 99.490% (98.639% - 99.370%)
Comparator     | 639            | 99.597% (99.410% - 99.725%)      | 99.522% (99.291% - 99.682%)



We can assess the speedup of Optimal Importance Sampling. The Monte Carlo bitcell yield was computed with 1M samples, sense amp 30K samples, and flip flop 90K samples. The speedup is most dramatic in the highest-yield circuits: OptIS is 1M/5K = 200x faster on the bitcell, 6x faster on the sense amp, and 18x faster on the flip flop.

From the Optimal Importance Sampling data, we can also compute the cumulative density function (cdf) by leveraging the importance sampling weights. From the cdf, we can compute q-q plots by applying the inverse normal transform to the cdf values. Unlike cdf plots, q-q plots allow us to examine the tails of the distribution. Figure 5 shows the q-q plot for the bitcell, and Figure 6 for the sense amp, showing both the golden Monte Carlo data and the data computed from OptIS. Note how well the OptIS curves agree with the golden data, especially in the regions around the specifications where Optimal Importance Sampling had focused its samples. We can learn much from these curves: information about the tails that we would not normally get from a limited-sample Monte Carlo run. For the sense amp, the linearity of the curves indicates that the sense amp's offset is Gaussian-distributed even into the tails, and that the mapping from process variables to offset is linear. In contrast, the bitcell's vout curve bends towards the bottom left, indicating a strong nonlinearity. Such information is valuable for gaining insight into the nature of the tails of the distribution, and for understanding the tradeoff between specifications and yield.

Figure 5: Q-Q plot for bitcell (vout >= 0.017)

Figure 6: Q-Q plot for sense amp (-0.022 <= offset <= 0.022)
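The weighted-cdf construction behind these q-q plots can be sketched in a few lines of Python. The inputs are hypothetical arrays from an importance-sampling run: values holds the simulated performance numbers and weights the matching likelihood-ratio weights. The exact plotting pipeline in Variation Designer is not shown; this only illustrates the inverse-normal transform described above.

from statistics import NormalDist

def weighted_qq_points(values, weights):
    """Return (performance, normal quantile) pairs from a weighted empirical CDF."""
    inv = NormalDist().inv_cdf
    order = sorted(range(len(values)), key=lambda i: values[i])   # sort by performance
    total = sum(weights)
    points, running = [], 0.0
    for i in order:
        running += weights[i]                  # weighted empirical CDF value
        cdf = running / total
        if 0.0 < cdf < 1.0:                    # inv_cdf is undefined at exactly 0 or 1
            points.append((values[i], inv(cdf)))
    return points

When the resulting points fall on a straight line, the performance is Gaussian out to the plotted sigma level, as observed for the sense amp's offset; curvature, as in the bitcell's vout, signals a nonlinear mapping from process variables to performance.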

High-Sigma Corner Extraction


If a circuit fails the verification step (yield is not acceptable), the circuit must be improved. It is time-consuming and tedious to make changes to the circuit, then do a full verification, and iterate until the circuit is acceptable. A faster (yet accurate) method is to find representative failure cases to design against, i.e. design-specific corners. The results of an Optimal Importance Sampling run can be exploited to extract such corners. This is the Corner Extraction step of the corner-based design flow in Figure 2. The flow is: when the yield is not acceptable, the user examines the OptIS visualizations (e.g. scatterplots) and selects process point(s) that cause infeasibility. These can be the points near the feasibility boundary (the most likely failure points), or ones a little deeper into the infeasibility region (which, if solved against, will give higher margin). The user adds these to Variation Designer's set of corners. Once the corners are extracted, the designer can design against those corners to improve the circuit. Once satisfied with the design's performance against the corners, the designer invokes a new OptIS run to verify the circuit. If needed, the designer performs more design iterations with additional corners.
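As a rough illustration of the corner-selection step, the Python sketch below picks, from a hypothetical list of failing process points (expressed in normalized sigma units), the few that are most probable under the true distribution, i.e. closest to the origin. This is just one plausible heuristic consistent with the description above, not Solido's actual selection algorithm.

def pick_corners(failing_points, n_corners=3):
    """Pick the most probable failing process points to use as design corners."""
    def sigma_distance(point):
        # Distance from nominal in normalized process-variable space.
        return sum(v * v for v in point) ** 0.5
    return sorted(failing_points, key=sigma_distance)[:n_corners]

Choosing points slightly farther from the origin (deeper into the failure region) trades some likelihood for extra margin, matching the option described above.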

Return On Investment
When design teams and managers consider which advanced technologies to incorporate into their flows, their metrics include quality of results (QoR), use model, ease of adoption, and cost.


Optimal Importance Sampling (OptIS) technology addresses each of these metrics. Designers can statistically verify their designs with SPICE accuracy in a short amount of time. Compared to a plain Monte Carlo simulation, OptIS is orders of magnitude faster for estimating yields of high-sigma circuits. Compared to a Monte Carlo + Gaussian or a Manual Model approach, OptIS is far more accurate, as it does not make simplifying assumptions. Additional value arises when the user extracts design-specific corners from an OptIS run. These corners highlight the circuit failure cases that must be fixed. By designing against just these corners, the designer can perform rapid design iterations to improve the design.


Conclusion
Semiconductor profitability hinges on high yield, competitive design performance, and rapid time to market. For the designer, this translates into the need to manage diverse variations (global and local process variation, environmental variation, etc.) and to reconcile yield with performance (power, speed, area, etc.), all under intense time pressure. A flow using design-specific corners enables designers to manage variation effectively, because the corners simulate quickly yet represent the performance bounds. An example application is high-sigma design, where failures are one in a million; previous approaches to verifying such designs were either extremely expensive or inaccurate. Optimal Importance Sampling (OptIS) allows the designer to quickly and accurately verify high-sigma designs. Furthermore, it enables rapid design iterations via design-specific high-sigma corners.

References
[1] P.G. Drennan, C.C. McAndrew, "Understanding MOSFET Mismatch for Analog Design," IEEE J. Solid-State Circuits, March 2003.
[2] P.G. Drennan, M.L. Kniffin, D.R. Locascio, "Implications of Proximity Effects for Analog Design," Proc. Custom Integrated Circuits Conference, 2006 (Best Invited Paper).
[3] B. Efron, "Bootstrap Methods: Another Look at the Jackknife," The Annals of Statistics 7(1), 1979, pp. 1-26.
[4] T.C. Hesterberg, "Advances in Importance Sampling," Ph.D. Dissertation, Statistics Dept., Stanford University, 1988.
[5] D.E. Hocevar, M.R. Lightner, T.N. Trick, "A Study of Variance Reduction Techniques for Estimating Circuit Yields," IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems 2(3), July 1983, pp. 180-192.
[6] D. Montgomery, Design and Analysis of Experiments, 6th Ed., Wiley, 2007.
[7] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, Society for Industrial and Applied Mathematics, 1992.

Biographies
Trent McConaghy is co-founder and Chief Scientific Officer of Solido Design Automation Inc. He was a co-founder and Chief Scientist of Analog Design Automation Inc., which was acquired by Synopsys Inc. in 2004. He has a Ph.D. in Electrical Engineering from the Katholieke Universiteit Leuven, Belgium; his doctoral thesis won the international EDAA Outstanding Dissertation Award. He is the author of the book Variation-Aware Analog Structural Synthesis: A Computational Intelligence Approach (Springer, 2009). He has authored approximately 30 journal papers, book chapters, and conference papers, and has about 20 patents granted or pending. He has given invited talks and tutorials at venues such as DAC, ICCAD, and the MIT AI Lab. His research interest is in the pragmatic application of statistical machine learning and computational intelligence to variation-aware design.

Patrick Drennan is Chief Technology Officer of Solido Design Automation Inc. Prior to joining Solido, Patrick was a Distinguished Member of the Technical Staff at Freescale Semiconductor, where he was employed for over 14 years. Patrick co-created the backwards propagation of variance (BPV) method for statistical characterization. His mismatch model earned the Best Regular Paper award at CICC 2002. He was the first to describe the impact of shallow trench isolation (STI) and the well proximity effect (WPE) on design, demonstrating that the WPE produces a graded-channel MOSFET. More importantly, he showed the catastrophic impact these unforeseen phenomena can have on circuit design. For this work, he received the Best Invited Paper award at CICC 2006. Patrick received B.S. and M.S. degrees in engineering from Rochester Institute of Technology, and a Ph.D. in electrical engineering from Arizona State University.

About Solido
Solido Variation Designer is a comprehensive set of tools for variation-aware custom IC design. It allows users to handle PVT, Monte Carlo, High-Sigma, and Proximity problems. To view an online demo, visit www.solidodesign.com/page/request-a-demo/. For each problem type, Solido offers the designer easy-to-use tools to analyze variation impact on design specifications, identify transistor sensitivities to variation, and fix the design to meet specifications with optimal performance, power, and area, providing closed-loop verification of changes. Compared to status quo approaches, these tools are more scalable and up to 10x+ faster with no compromise in accuracy. Solido Variation Designer uses foundry models, integrates into existing custom IC design flows, and is simulator-agnostic, supporting the Cadence Spectre, Synopsys HSPICE, Mentor Eldo, and Berkeley Design Automation AFS simulators.



