Parasitic-Aware RF Circuit Design and Optimization
Jinho Park, Kiyong Choi, and David J. Allstot, Fellow, IEEE
I. INTRODUCTION
Fig. 2. Final four-stage CMOS RF distributed amplifier after parasitic-aware synthesis [8], [9].
Fig. 4. (a) Hill climbing in conventional simulated annealing. (b) Tunneling in ASAT for a cost function versus design variable X.
Fig. 5. Flowchart for the tunneling heuristic used in the ASAT algorithm.
B. Local Optimization Method

As simulated annealing optimization progresses, Temp is decreased in compliance with the cooling schedule to emphasize local, rather than global, search capabilities. However, even when Temp is small, it determines the values of the design variables randomly at the next simulation point. Consequently, it is notoriously inefficient as it approaches an acceptable solution.

An improved local search strategy uses a history vector defined between the previous and present simulation points (Fig. 7). In simulated annealing, a fraction of a random vector is added to the present design vector to determine the next state; herein, a weighted sum of the random vector r and the history vector h defines the next point. A temperature-dependent weighting factor determines the balance between local and global search capabilities: if Temp is large, the random component dominates, which emphasizes the global search capability and vice-versa. The next optimization point is accepted as

x_new = x + stepsize * [w*r + (1 - w)*h]   (2)

where the weighting factor w is proportional to Temp.

C. Adaptive Temperature Coefficient Heuristic

Temp is critical in simulated annealing in minimizing the number of iterations. Fig. 8 relates Temp and the cost function slope. If Temp is too large, global search dominates and iterations are wasted, and if Temp is too small, local search dominates and the optimizer gets stuck in a local minimum. The optimum value of Temp depends on RF circuit topology; i.e., high Temp is required for circuits with steep cost functions and vice-versa [10], [17]. Greater computational efficiency is achieved by adapting Temp using an estimate of the cost function slope [18]. Moreover, the adaptive method eliminates the need to guess an initial value of Temp. An implicit relationship between Temp and the cost function slope is given in (3a). Temp is adapted by replacing the random number with a fixed parameter P, solving for Temp, and averaging as in (3b):

random < exp[-(cost - cost_old)/Temp]   (3a)

Temp = avg[(cost - cost_old) / (-ln P)]   (3b)

Fig. 9 charts a flow for adaptively determining Temp. For a given topology, experience shows that an estimate of Temp obtained from iterations within the first optimization loop is sufficiently accurate for further optimization cycles. All positive cost-function differences between iteration points within the first loop are weighted and averaged to estimate Temp. (Of course, Temp can be estimated using additional loops.)

Whereas the adaptive Temp algorithm eliminates the need for an initial value of Temp, an empirical parameter, P, is introduced. It represents the initial probability of hill climbing.
PARK et al.: PARASITIC-AWARE RF CIRCUIT DESIGN AND OPTIMIZATION 1957
Fig. 9. Flowchart of the adaptive temperature coefficient heuristic used in the ASAT algorithm.
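As a concrete illustration, the adaptive Temp estimate of (3b) and the history-vector update of (2) can be sketched in Python. This is a minimal sketch, not the authors' implementation: the function names, the fallback value when no uphill moves are observed, and the clamping of the weight w are illustrative assumptions.

```python
import math
import random

def estimate_temp(cost_deltas, p0=0.5):
    """Adaptive Temp per (3b): average the Temp values implied by the
    positive cost differences seen in the first optimization loop,
    assuming each uphill move is accepted with probability p0."""
    uphill = [d for d in cost_deltas if d > 0]
    if not uphill:
        return 1.0  # illustrative fallback: no uphill moves observed
    return sum(d / -math.log(p0) for d in uphill) / len(uphill)

def next_point(x, h, temp, temp_max, stepsize=0.1):
    """ASAT local step per (2): blend a random vector r with the
    history vector h using a weight w proportional to Temp."""
    w = min(1.0, temp / temp_max)  # large Temp -> random (global) search
    return [xi + stepsize * (w * random.uniform(-1.0, 1.0) + (1.0 - w) * hi)
            for xi, hi in zip(x, h)]
```

When Temp is near zero, the random term vanishes and the step follows the history vector, which is the local-search behavior the text describes.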
IV. PSO

PSO works with a large population of potential solution candidates called particles. Hence, the inherently parallel approach of PSO is potentially faster than simulated annealing. PSO also gains advantages over alternative population-based optimization algorithms owing to its unique swarming capabilities. Moreover, its describing equations are simple and easily implemented in the core optimization block.

Fig. 10. Describing equations for PSO. The position of a particle at the next iteration step is given by the sum of its current position and velocity vectors. The next velocity vector is a sum of weighted inertia, and randomly weighted competition and cooperation vectors.

A. PSO Theory

Social scientists have observed that swarming in search of food differs from other animal behaviors in that individuals benefit from the discoveries and experiences of all other group members [6]. Swarming behavior is observed in flocks of birds, schools of fish, swarms of bees, etc. PSO mimics the swarming behavior [7]. An obvious advantage is its simplicity; it is easily implemented in the core optimizer using the describing equations in Fig. 10.

The motion of an object in PSO is represented as the vector sum of present position and velocity vectors (Fig. 10). The second equation, which details the algorithm for updating the particle velocity, comprises three vectors: inertia, competition, and cooperation. The cooperation vector links the current position of a particle to the position of the particle that enjoys the best global position, Gbest; it is weighted using a uniformly distributed random function. Each member of the group gains knowledge of the globally best position by cooperating and communicating with all other particles. The competition vector links the current position of a particle to its personal best position, Pbest; it is weighted using a second uniformly distributed random function. The competition factor describes the tendency of a particle to explore the vicinity of its own personal best position. Finally, the inertia vector represents the tendency of a particle to maintain its current velocity; it is weighted by a constant, w. PSO cleverly combines inertia, competition, and cooperation in an optimum fashion so that the particles swarm to the best solution.

Whereas it is clear that cooperation among particles is essential for finding the global optimum solution, the need for the inertia and competition is not obvious; the main reason for including them is to avoid trapping in local minima. To appreciate this point, consider a four-particle example (Fig. 11) in which cooperation (Fig. 10) is activated but inertia and competition are disabled.

1958 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 51, NO. 10, OCTOBER 2004

Fig. 11. Four-particle PSO example with the cooperation factor enabled and the inertia and competition factors disabled. Performance depends on the initial particle positions. For the initial positions shown, the particles miss the global optimum and converge on a local minimum.

The probability of finding the global optimum without being trapped in a local minimum depends on the initial positions from which the particles fly straight toward the known best position. In the example, the particles converge on a local minimum without ever experiencing the global optimum. This behavior is reminiscent of gradient-descent algorithms.

Trapping in local minima is avoided using the complete PSO formulation (Fig. 10). In Fig. 12(a), four particles are positioned with the rightmost particle initially occupying the best position. The particles move according to a randomly weighted sum of inertia, competition, and cooperation vectors. Notice that PSO encourages the particles to travel in different directions to explore different regions of the design space. After the first iteration, the leftmost particle experiences the lowest cost, and the positions after a second iteration are shown in Fig. 12(b); PSO allowed the rightmost particle to escape the local minimum. All particles quickly swarm toward the optimum solution.

The weighting of the inertia, competition, and cooperation factors is important in determining the efficiency and robustness of PSO. Because it is intuitively appealing to assign the same weight to each vector, the factors are chosen to have average values of one. In some cases, the inertia weight w is adjusted to be less than 1 for faster convergence. This issue is revisited later.

B. PSO Procedure

PSO begins with the population of particles assigned random initial positions in the design space. Each particle is also given an initial random velocity (Fig. 13). As PSO progresses, each particle keeps track of its best solution, Pbest, while the whole group keeps track of the overall best solution experienced by any particle, Gbest. Particle swarming continues until a sufficiently low-cost solution is found, or a maximum number of iterations are executed. PSO for parasitic-aware synthesis is charted in Fig. 14.

C. PSO Parameters

The inertia weighting factor is important in determining the balance between global and local search capabilities. If it is too large, PSO emphasizes global searching and is slow, and if it is too small, it emphasizes local searching and gets trapped in local minima. Another important optimization parameter not shown in Fig. 10 is v_max, which limits the maximum particle velocity and effectively limits its range of movement between iterations. The size of the population in PSO is less critical than in other population-based algorithms: two to three times the number of design variables is generally effective [6].

Nonlinear power amplifier (22 particles) and RF distributed amplifier (55 particles) designs are used to investigate the impact of the parameters on PSO optimization. Twelve synthesis runs were performed for each circuit for nine combinations of v_max and w. The failure rate, the percentage of runs that did not find an acceptable solution within the maximum number of iterations (5000 for the distributed amplifier and 20 000 for the power amplifier), is plotted in Fig. 15. In this comparison, an iteration in PSO is an update of only one particle. Statistics of the number of iterations versus parameter values are shown in Fig. 16; in these examples, v_max = 0.1 and w = 0.8 are optimum. Additional details are presented in Section VI.

V. COMPACT MODELS FOR PASSIVE COMPONENTS

On-chip inductors, transformers, micro-strip transmission lines, and coplanar wave-guides are ubiquitous in impedance matching networks, resonant circuits, etc. The parasitic-aware synthesis paradigm ensures that parasitic components do not limit circuit performance. Hence, parasitic modeling is a key component as indicated in Fig. 1.

The use of on-chip spiral inductors provides increased integration. However, their performance is inferior to their off-chip counterparts owing to parasitic elements. Fig. 17 shows cross-sectional and top views of a square spiral inductor [10]–[12]. Process and design parameter information is needed
Fig. 12. PSO example. (a) At the initial positions the rightmost particle has the lowest cost. For the first iteration, the particles move based on the weighted inertia,
cooperation, and competition factors. The leftmost particle now occupies the best position. (b) Particle positions after the second iteration. PSO encourages the
rightmost particle to escape the local minimum. Note particle swarming toward the optimum.
Fig. 13. Initial conditions for PSO. In this example, the four particles are assigned random initial positions and velocities.
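The PSO procedure of Figs. 13 and 14 can be sketched as follows. This is a generic sketch, not the paper's synthesis tool: the function signature is illustrative, the cost function stands in for the embedded circuit simulator, and the defaults (w = 0.8, c1 = c2 = 2 so the random weights average one, v_max = 0.1 of each variable's range) follow the parameter discussion above.

```python
import random

def pso_minimize(cost, bounds, n_particles, iters,
                 w=0.8, c1=2.0, c2=2.0, v_max=0.1):
    """Minimal PSO per Fig. 10: new velocity = weighted inertia plus
    randomly weighted competition (Pbest) and cooperation (Gbest)
    vectors; new position = current position + velocity."""
    dim = len(bounds)
    span = [hi - lo for lo, hi in bounds]
    # Random initial positions and velocities (Fig. 13).
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[random.uniform(-v_max, v_max) * s for s in span]
           for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v = (w * vel[i][d]                                   # inertia
                     + c1 * random.random() * (pbest[i][d] - pos[i][d])  # competition
                     + c2 * random.random() * (gbest[d] - pos[i][d]))    # cooperation
                limit = v_max * span[d]        # v_max bounds movement per step
                vel[i][d] = max(-limit, min(limit, v))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:              # update personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:             # update global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

On a simple convex cost such as sum of squares, the swarm contracts toward the minimum within a few hundred single-particle updates, mirroring the swarming behavior described in Section IV.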
to develop accurate compact circuit models. Process information includes oxide thickness between the metal layers and the substrate, metal thickness, substrate resistivity, etc. Design parameters are metal width (W), number of turns (N), center spacing (D), and metal line spacing (S). The relationships between the parasitic component values and the design and process parameters are complex and difficult to model accurately. Fig. 18 displays the measured impedance versus frequency for a metal-3 spiral inductor in 0.35-μm CMOS; the spiral provides 9 nH of inductance with a peak Q of 4 at 3.2 GHz.

Fig. 14. Flowchart of PSO as used in the core optimization block of a parasitic-aware RF circuit synthesis tool.

Fig. 15. Performance results for 12 parasitic-aware PSO runs versus v_max and w. (a) Four-stage distributed amplifier. (b) Three-stage power amplifier. The failure rate is the percentage of synthesis runs for which an acceptable solution is not found within the maximum number of iterations. v_max = 0.1 and w = 0.8 work well for both examples.
Fig. 17. Cross-sectional (left) and top (right) views of a parasitic-laden 2-turn monolithic square spiral inductor. A compact π-model circuit is also shown.
Fig. 19. Parasitic component values for the compact π-model of Fig. 17 versus square spiral inductance in 0.35-μm CMOS. Design parameters for the metal-3 spirals are: W = 15 μm, S = 101.4 μm, and D = 1.2 μm. 'x' denotes a measured or MOMENTUM simulation value; '—' indicates the describing equation obtained using the polyfit function of MATLAB. Representative polynomials are shown.
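The describing equations of Fig. 19 are generated with MATLAB's polyfit; an equivalent sketch using NumPy is shown below. The sample data are hypothetical stand-ins for the measured/MOMENTUM points, not values from the paper.

```python
import numpy as np

# Hypothetical (inductance, series-resistance) samples standing in for
# the 'x' (measured or MOMENTUM-simulated) points of Fig. 19.
L_nH = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
Rs_ohm = np.array([1.2, 4.0, 7.6, 12.0, 17.2])

# Fit a representative 2nd-order polynomial Rs(L), as MATLAB's
# polyfit does for each parasitic component value.
coeffs = np.polyfit(L_nH, Rs_ohm, 2)
rs_model = np.poly1d(coeffs)

# The describing equation can now be evaluated at any inductance,
# giving the optimizer a cheap compact model in place of EM simulation.
print(rs_model(4.0))
```

Fitting a low-order polynomial per parasitic element keeps model evaluation fast enough to sit inside the synthesis loop.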
Fig. 21. Four-stage CMOS distributed amplifier with artificial LC gate and
drain delay lines.
Fig. 20. Bond wire inductor connected between two on-chip bonding pads.
Key parasitics that need to be modeled include the N-well and substrate
resistances and capacitances.
Fig. 23. Averages (AVG) and standard deviations (STD) of the number of iterations for PSO, ASAT, and SA for (a) four-stage distributed amplifier, and (b)
three-stage power amplifier.
Fig. 24. Average costs versus iterations for (a) the distributed amplifier and (b) the power amplifier.

Fig. 25. (a) Forward gain magnitude (S21 in decibels) results. (b) Forward gain phase results.
C. Comparative Results

For comparison purposes, the best parameter settings are used for all optimization runs. Since Temp is critical for both the simulated annealing and ASAT algorithms, the adaptive Temp coefficient algorithm is used in both. For particle swarm, v_max = 0.1 and w = 0.8 are chosen for both amplifiers as explained earlier. Fig. 23 compares three different optimization approaches for 12 synthesis runs. The distributed amplifier is optimized using 11 design variables and the power amplifier has 22. For the distributed amplifier, all three optimization techniques find an acceptable set of design parameters to achieve a flat-gain response within a reasonable number of iterations. Both PSO and ASAT converge more than twice as fast as simulated annealing. The standard deviations of the number of iterations indicate similar robustness for all three approaches. For the power amplifier, simulated annealing is poorer in both average and standard deviation; the search space is larger for this example and it has difficulty converging to the optimum.

Fig. 24 plots costs versus the number of iterations for the various methods. For the power amplifier, PSO and ASAT initially perform poorer than simulated annealing. One reason is that PSO in our examples generates the particles in a serial fashion, even though PSO is an inherently parallel approach, to provide a worst-case comparison to the other approaches. It updates the velocity and position of each particle, and determines the cost after the whole group of particles is created. Thus, PSO appears slow at the beginning of the optimization process. In ASAT, the adaptive tunneling process emphasizes the global search at the beginning, resulting in a slow pace. As optimization progresses, both techniques achieve superiority over simulated annealing, which wastes iterations hill climbing as described earlier. As the constraints become tighter and as iterations increase, the discrepancy between PSO or ASAT and simulated annealing widens as detailed in Fig. 24. PSO is powerful in reducing the computation cost due to cooperation among its particles. ASAT is also more effective owing to its novel local search algorithm. Both PSO and ASAT exhibit a very good balance between global and local search capabilities. Overall, PSO and ASAT outperform simulated annealing by more than 2X. PSO is 15% faster than ASAT in both examples.
VII. CONCLUSION

Methods for parasitic-aware RF circuit synthesis using PSO and ASAT are presented and compared to simulated annealing. PSO and ASAT provide greater computational efficiency and robustness in the presence of on-chip and package parasitics. A parasitic-aware synthesis system has been described that comprises three parts: an optimization core, an embedded RF circuit simulator, and a compact model generator. A four-stage distributed amplifier with 8-dB forward gain and 1-dB gain flatness over an 8-GHz bandwidth, and a three-stage 900-MHz nonlinear power amplifier with 30-dBm output power and 55% drain efficiency have been synthesized in 0.35-μm digital CMOS.

REFERENCES

[1] R. Gupta and D. J. Allstot, "Parasitic-aware design and optimization of CMOS RF integrated circuits," in Proc. IEEE Radio Frequency Integrated Circuits Symp., June 1998, pp. 325–328.

Jinho Park was born in Seoul, Korea, in 1972. He received the B.S. degree from Seoul National University, Seoul, Korea, in 1996, the M.S. degree from Oregon Graduate Institute, Portland, in 1999, and the Ph.D. degree in electrical engineering from the University of Washington, Seattle, in 2003. During his Ph.D. program, he studied CMOS ultra-wideband LNAs and RF synthesis techniques using particle swarm optimization.

In 2003, he joined Marvell Semiconductors, Sunnyvale, CA, where he is currently engaged in the design of RF synthesizers and dc–dc converters. He is the coauthor of a book and a book chapter in electronics. From 1999 to 2003, he served as President of the Korean Graduate School Association of Electrical Engineering at the University of Washington and Director of the Korean–American Scientists & Engineers Association, Pacific Northwest Chapter.

Dr. Park received awards for outstanding analog design from the National Science Foundation Center for the Design of Analog/Digital Integrated Circuits (CDADIC) in 2002 and from Analog Devices, Inc. in 2003.
Kiyong Choi was born in Seoul, Korea. He received the B.S.E.E. and M.S.E.E. degrees in electrical engineering from Arizona State University, Tempe, AZ, in 1998 and 1999, respectively, and the Ph.D. degree from the University of Washington, Seattle, in 2003. His interests include high-speed and high-power analog integrated circuit design and computer-aided design optimization. He is currently with Marvell Semiconductors, Sunnyvale, CA.

David J. Allstot (S'72–M'72–SM'83–F'92) received the B.S. degree from the University of Portland, Portland, OR, the M.S. degree from Oregon State University, Corvallis, and the Ph.D. degree from the University of California, Berkeley. He has held several industrial and academic positions and has been the Boeing-Egtvedt Chair Professor of Engineering at the University of Washington since 1999. He is currently the Acting Chair of Electrical Engineering. He has advised approximately 75 M.S. and Ph.D. graduates and published about 225 papers.
Dr. Allstot is the recipient of several outstanding teaching and advising awards. His awards include the 1978 IEEE W.R.G. Baker Prize Paper Award,
the 1995 IEEE Circuits and Systems (CAS) Society Darlington Best Paper
Award, the 1998 IEEE International Solid-State Circuits (SSC) Conference
Beatrice Winner Award, the 1999 IEEE CAS Society Golden Jubilee Medal, and the 2004 Technical Achievement Award of the IEEE CAS Society. He was an
Associate Editor of IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II:
ANALOG AND DIGITAL SIGNAL PROCESSING from 1990 to 1993, and its Editor
from 1993 to 1995. He has served on the Technical Program Committee, IEEE
Custom Integrated Circuits Conference, from 1990 to 1993, on the Education
Award Committee, IEEE CAS Society, from 1990 to 1993, on the Board of
Governors, IEEE CAS Society, from 1992 to 1995, on the Technical Program
Committee, IEEE International Symposium on Low-Power Electronics and
Design from 1994 to 1997, on the Mac Van Valkenberg Award Committee,
IEEE CAS Society, from 1994 to 1996, and since 1994 has been serving on the
Technical Program Committee, IEEE International SSC Conference. He has
been the 1995 Special Sessions Chair, IEEE International Symposium on CAS,
the Executive Committee Member and Short Course Chair, IEEE International
SSC Conference, from 1996 to 2000, the Co-Chair, IEEE SSC and Technology
Committee, from 1996 to 1998, Distinguished Lecturer, IEEE CAS Society,
from 2000 to 2001, and the Co-General Chair, IEEE International Symposium
on CAS in 2002. He is a Member of Eta Kappa Nu and Sigma Xi.