VLSI - Physical Design
INTRODUCTION:
The transformation of a circuit description into a geometric description is known as a
layout. A layout consists of a set of planar geometric shapes in several layers.
The process of converting the specifications of an electrical circuit into a layout is called
physical design.
Due to the large number of components and the fine details required by the fabrication
process, physical design is practically impossible without the help of computers. As
a result, almost all phases of physical design extensively use computer-aided design
(CAD) tools and many phases are either partially or fully automated. This automation of
the physical design process has increased the level of integration, reduced the turnaround
time, and enhanced chip performance.
Various CAD tools are available in the market, and each of them has its own
strengths and weaknesses. Electronic Design Automation (EDA) companies such as
Cadence, Synopsys, Magma, and Mentor Graphics provide these CAD tools.
VLSI physical design automation mainly deals with the study of algorithms related to
the physical design process. The objective is to study optimal arrangements of devices on
a plane (or in a three-dimensional space) and various interconnection schemes between
these devices to obtain the desired functionality. Because space on a wafer is very
expensive, algorithms must use the space very efficiently to decrease the costs and
improve the yield. In addition, the arrangement of devices (placement) plays a key role in
determining the performance of a chip. Algorithms for physical design must also ensure
that all the rules required by the fabrication are followed and that the layout is within the
tolerance limits of the fabrication process. Finally, algorithms must be efficient and
should be able to handle very large designs. Efficient algorithms not only lead to fast
turnaround time, but also permit designers to iteratively improve the layouts.
VLSI DESIGN CYCLE:
The design process of physically producing a packaged VLSI chip follows various
steps, which are popularly known as the VLSI design cycle. This design cycle is normally
represented by a flow chart, shown below. The various steps involved in the design cycle
are elaborated below.
(i). System specification: The specifications of the system to be designed are precisely
stated in this step. It considers performance, functionality, and the physical
dimensions of the design. The choice of fabrication technology and design techniques
are also considered. The end results are specifications for the size, speed, power, and
functionality of the VLSI system to be designed.
(ii) Functional design: In this step, behavioral aspects of the system are considered. The
outcome is usually a timing diagram or other relationships between sub-units. This
information is used to improve the overall design process and to reduce the complexity of
the subsequent phases.
(iii). Logic design: In this step, the functional design is converted into a logical design
using Boolean expressions. These expressions are minimized to achieve the smallest
logic design which conforms to the functional design. This logic design of the system is
simulated and tested to verify its correctness.
(iv). Circuit design: This step involves conversion of the Boolean expressions into a
circuit representation, taking into consideration the speed and power requirements of
the original design. The electrical behavior of the various components is also
considered in this phase. The circuit design is usually expressed as a detailed circuit
diagram.
Different approaches have been applied to the floor planning problem. Wimer et al. describe a
branch and bound approach for the floor plan sizing problem, i.e. finding an optimal
combination of all possible layout alternatives for all modules after placement. While
their algorithm is able to find the best solution for this problem, it is very time
consuming, especially for real problem instances. Cohoon et al. implemented a genetic
algorithm for the whole floor planning problem. Their algorithm makes use of estimates
for the required routing space to ensure completion of the interconnections. Another
often-used heuristic solution method for placement is simulated annealing.
(c) Routing: The main objective in this step is to complete the interconnections between
blocks according to the specified netlist. First, the space not occupied by the blocks
(called the routing space) is partitioned into rectangular regions called channels and
switchboxes. The goal of a router is to complete all circuit connections using the shortest
possible wire length and using only the channels and switchboxes. This is usually done in
two phases, referred to as the global routing and detailed routing phases.
In global routing, connections are completed between the proper blocks of the circuit
disregarding the exact geometric details of each wire and pin. For each wire, the global
router finds a list of channels which are to be used as a passage way for that wire. In other
words, global routing specifies the loose route of a wire through different regions in
the routing space.
Global routing is followed by detailed routing, which completes point-to-point
connections between pins on the blocks. Loose routing is converted into exact routing by
specifying geometric information such as width of wires and their layer assignments.
Detailed routing includes channel routing and switchbox routing.
As all problems in routing are computationally hard, researchers have focused on
heuristic algorithms. As a result, experimental evaluation has become an integral part of
all algorithms, and several benchmarks have been standardized. Due to the nature of the
routing algorithms, complete routing of all the connections cannot be guaranteed in many
cases.
(d). Compaction: The operation of minimizing the layout area without violating the design
rules and without altering the original functionality of the layout is called compaction.
Both the input and the output of compaction are layouts; the output layout occupies a smaller area.
Compaction is done in three ways:
(i) By reducing the space between blocks without violating the design-rule spacing.
(ii) By reducing the size of each block without violating the design-rule sizes.
(iii) By reshaping blocks without violating their electrical characteristics.
Compaction is therefore a very complex process, because it requires knowledge of all the
design rules. Based on the strategy used, compaction algorithms are divided into
one-dimensional algorithms (either in the x-dimension or the y-dimension), two-dimensional
algorithms (both the x-dimension and the y-dimension), and topological algorithms
(moving separate cells according to routing constraints).
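As an illustration of the one-dimensional strategy, the Python sketch below treats x-compaction as a longest-path computation on a constraint graph. The model is deliberately simplified, and the block names, widths, and spacing value are hypothetical.

```python
# Sketch of one-dimensional (x-direction) compaction, assuming a
# simplified model: each block has a width, and a single minimum-spacing
# design rule applies between horizontally ordered blocks. The minimum
# x coordinate of every block is a longest path in the constraint graph.
# Block names, widths, and the spacing value are hypothetical.

def compact_x(blocks, constraints, min_space):
    # blocks: {name: width}; constraints: list of (left, right) pairs
    # meaning `left` must stay to the left of `right`.
    x = {name: 0 for name in blocks}            # start all blocks at x = 0
    for _ in range(len(blocks)):                # relax edges (Bellman-Ford style)
        for left, right in constraints:
            needed = x[left] + blocks[left] + min_space
            if x[right] < needed:               # push `right` just far enough
                x[right] = needed
    return x

blocks = {"A": 4, "B": 2, "C": 3}
constraints = [("A", "B"), ("B", "C"), ("A", "C")]
print(compact_x(blocks, constraints, min_space=1))
# {'A': 0, 'B': 5, 'C': 8}
```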
Types of compaction techniques:
(i) 1-Dimensional compaction:
This algorithm uses recursive depth-first search to explore the search space. Depth-first
search keeps going down a path as long as it can. If it reaches a node with no
children (a dead end), then it backtracks to its parent and tries another child node that it
hasn't already explored. If it has explored all child nodes, then it backtracks up one more
level and continues. If the average branching factor is b and the depth of the search tree is
k, then backtrack search requires O(b^k) time, which is exponential. Many classic
combinatorial problems require backtrack search.
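The backtracking behavior described above can be sketched in Python on a classic example, the 4-queens problem (a standard instance chosen for illustration, not one from the text):

```python
# Minimal recursive backtracking (depth-first) search, illustrated on
# the 4-queens problem: place one queen per row, and backtrack as soon
# as a partial placement becomes infeasible (a "dead end").

def solve_queens(n, cols=()):
    if len(cols) == n:                      # all rows filled: one solution
        return 1
    count = 0
    row = len(cols)
    for col in range(n):                    # try every child of this node
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            count += solve_queens(n, cols + (col,))  # go one level deeper
        # infeasible columns are skipped: this is the backtracking step
    return count

print(solve_queens(4))  # 2 solutions exist for 4 queens
```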
Branch and Bound algorithm:
Branch and bound is a variant of backtracking search that takes advantage of information
about the optimality of partial solutions to avoid considering solutions that cannot be
optimal. So we are still doing an exhaustive search but potentially avoiding exploring
large parts of the search space that are not going to give us a solution. Given an initial
problem and some objective function f to be minimized, the branch and bound technique
works as follows.
If the problem is small enough, then solve it directly.
Otherwise, the problem is decomposed into two or more subproblems. Each subproblem
is characterized by the inclusion of one or more constraints.
For each subproblem, we compute a lower bounding function g. This lower bound
represents the smallest possible cost of a solution to the subproblem, given the constraints
on that subproblem.
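These steps can be sketched in Python. The example below applies branch and bound to a small, hypothetical minimum-cost set cover instance, with the cost already committed serving as the lower bounding function g (valid because remaining costs are nonnegative):

```python
# Sketch of branch and bound for a small minimization problem: choose a
# minimum-cost subset of sets that covers all elements. The lower bound
# g is the cost committed so far, which can only grow as sets are added,
# so any partial solution with g >= best cost can be pruned.

import math

def branch_and_bound(sets, costs, universe):
    best = [math.inf]                  # incumbent (best cost found so far)

    def search(i, covered, cost):
        if cost >= best[0]:            # bound: g already exceeds incumbent
            return
        if covered == universe:        # feasible solution: update incumbent
            best[0] = cost
            return
        if i == len(sets):             # dead end: elements remain uncovered
            return
        # branch: one subproblem includes set i, the other excludes it
        search(i + 1, covered | sets[i], cost + costs[i])
        search(i + 1, covered, cost)

    search(0, frozenset(), 0)
    return best[0]

sets = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
costs = [3, 3, 3]
print(branch_and_bound(sets, costs, frozenset({1, 2, 3})))  # 6
```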
Simulation:
The objective behind any simulation tool is to create a computer-based model of the
design, for verification and for analyzing the behavior of the circuit under construction
at the current level of abstraction.
Types of Simulation:
Device level simulation
Circuit level simulation
Timing level & Macro level simulation
Switch level simulation
Gate level simulation
RTL simulation
System level simulation
Device level simulation: This model deals with a semiconductor device, such as a MOS
transistor, and is used to test the effect of fabrication parameters. Simulation techniques
based on the finite-element method are used for this purpose.
Circuit level simulation: It deals with small groups of transistors modeled in the analog
domain. The variables computed are currents and voltages, and the computations are
based on numerical methods.
Switch level simulation: This simulation method models the MOS transistors as
switches that pass signals. The values of signals are discrete, but it also includes certain
analog features to account for components like resistance and capacitance.
Gate level simulation: In this model a circuit is composed of several logic gates
connected by unidirectional, memoryless wires. The logic gates themselves are
collections of transistors and other circuit elements which perform a logic function. A
logic gate may be a simple inverter, NAND gate, or NOR gate, or a more complex
functional unit like a flip-flop or register.
Register Transfer Level (RTL) simulation: This model is used for synchronous circuits
where all registers are controlled by a system clock signal. The registers store the state of
the system, while the combinational logic computes the next state and the output based
on the current state and the input. Here the important consideration is the state transitions;
the precise timing of intermediate signals in the computation of the next state is not
considered.
System level simulation: It deals with hardware described in terms of primitives
that need not correspond to hardware building blocks. VHDL is the most popular
hardware description language used for system level simulation. When used in the initial
stages of a design, it can describe the behavior of a circuit, such as a processor, as a set of
communicating processes.
Gate Level Modeling and Simulation:
The gate level model forms the theoretical basis for logic design. In this model a
circuit is composed of several logic gates connected by unidirectional, memoryless
wires. The logic gates themselves are collections of transistors and other circuit elements
which perform a logic function. A logic gate may be a simple inverter, NAND gate, or
NOR gate, or a more complex functional unit like a flip-flop or register. The logic gates
compute the Boolean functions corresponding to their input signals and transmit the values
along wires to the inputs of the other gates to which they are connected. Each input of a gate
has a unique signal source. Information is stored only in the feedback paths of sequential
circuits.
Gate level modeling and simulation is classified into the following four types.
Signal Modeling: Signal modeling deals with the signals applied to a logic gate.
Normally the Boolean signals are denoted by either 0 or 1. A signal which is neither 0 nor
1 is denoted by X; this indicates a transition from one state to the other, or an unknown
value. The more values that are used for a signal, the more complex the modeling
of a gate becomes. If a gate has n input signals, with each signal having N values, the
output must be specified for all N^n input combinations. The logic involved in dealing
with a circuit modeled using multiple-valued discrete signals is called multiple-valued logic.
Gate Modeling: This models the behavior of a single gate. The model should be
such that the signal values at the gate's outputs are efficiently computed as a
function of the gate's inputs. The outputs of a gate are represented by either a truth table
representation or a sub-routine representation.
Delay Modeling:
At the gate level, time is modeled in a discrete way and all delays in the circuit are
expressed as an integer multiple of a time unit. The output of any physical gate will take
some time to switch after the moment that the input is switched. The delay incurred
here can affect the correct functionality of the circuit, especially when the circuit is
asynchronous. So, a correct modeling of the delays is needed. The important delay models
are:
Propagation Delay model: It is associated with a fixed delay at the gate's output. So, any
effect of switching inputs is observed at the output only after a certain delay.
Rise/fall Delay model: This model is related to the rise and fall of the output of a gate. The
output always takes some time to come back to its normal state after the rise or fall of a signal.
Inertial Delay model: The input pulse should have a minimum width in order to have any
effect at the output. Inertial delays occur due to the capacitive elements in the gate. The
inertial delay can be combined with the propagation and rise/fall delay models.
Connectivity Modeling: This model is related to the suitable connections of all gates in
the network. For this the simulator should have suitable data structures to represent the
connectivity.
The unilateral nature of logic gates is the basis of the operation of gate level simulators.
For each binary vector at the input nodes of a logic gate, the binary value (0 or 1) at the
output is computed and propagated to the inputs of the other gates that are connected to it.
During the propagation of the signal a certain time delay occurs due to the inertial
elements, like node capacitances, present in the circuit. Simulators which do not
consider this delay can analyze only combinational circuits. So, simulators which
handle sequential circuits must estimate the propagation delay through a logic gate, in
one of several ways. Some simulators operate in the unit-delay mode, where all logic gates
are assumed to have the same delay. These unit-delay simulators can verify only the
steady-state behavior, or the logic functionality, of the digital circuit.
The difference in the propagation delays through different signal paths in a network of
logic gates sometimes causes undesirable situations like static hazards and dynamic
hazards. Hazards are situations where spurious glitches or spikes occur in an otherwise
smooth analog waveform at the output of a logic gate.
Compiler Driven Simulation:
There are two basic mechanisms to simulate a circuit at the gate level. They are (i)
compiler-driven simulation and (ii) event-driven simulation.
Compiler-driven simulation is used for synchronous circuits. The core of such circuits
consists of registers that store the state of the system and combinational logic that
computes the next state.
Event-driven simulation exploits the fact that, under normal conditions, very few
gates switch simultaneously, and that computing signal propagation through all gates in
the network over and over again at each time instant leads to unnecessary
computations. So, it is economical to compute only those signals that are actually
changing their states. A change in signal state is called an event; hence this simulation is
called event-driven simulation.
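The event-driven mechanism can be sketched as follows in Python. The netlist format, the unit gate delay, and the NAND example are assumptions made for illustration:

```python
# Sketch of event-driven gate-level simulation: events (signal changes)
# are kept in a time-ordered queue, and only gates whose inputs changed
# are re-evaluated, each with an assumed unit propagation delay.

import heapq

def simulate(gates, fanout, values, events, t_end):
    # gates: {name: (function, [input nets], output net)}
    # fanout: {net: [names of gates driven by that net]}
    # events: list of (time, net, new value)
    heapq.heapify(events)
    while events and events[0][0] <= t_end:
        t, net, val = heapq.heappop(events)
        if values[net] == val:
            continue                      # no change: no new events
        values[net] = val
        for g in fanout.get(net, []):     # re-evaluate affected gates only
            fn, ins, out = gates[g]
            new = fn(*(values[i] for i in ins))
            if new != values[out]:
                heapq.heappush(events, (t + 1, out, new))  # unit delay
    return values

gates = {"g1": (lambda a, b: 1 - (a & b), ["a", "b"], "y")}  # a NAND gate
values = {"a": 0, "b": 1, "y": 1}
simulate(gates, {"a": ["g1"], "b": ["g1"]}, values, [(0, "a", 1)], t_end=10)
print(values["y"])  # 0: NAND(1,1) appears after one unit delay
```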
In a sequential circuit, the occurrence of a glitch could cause the circuit to malfunction.
Therefore, the detection of hazards and race conditions is very important and, as a
result, most digital simulators generate an alert to the user when they occur. The
detection of hazards is possible by introducing a third state, denoted by X, which
denotes a signal transition.
Many simulators use a third value to represent an unknown or undefined logic level,
denoted by X. This X state indicates an uninitialized signal, a signal held between the two
logic thresholds, or a signal in a 0-to-1 or 1-to-0 transition. The X state is handled
algebraically by extending the binary Boolean algebra to a ternary (three-valued) De
Morgan algebra, which preserves most of the desired properties of the gate model.
Some other simulators implement the X state by an enumeration technique, in which
the simulation is repeated with the nodes in the X state set to all possible combinations of
0s and 1s. Nodes that remain in a unique binary state for all combinations are set to that
state, while others are set to X.
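The enumeration technique can be sketched in Python; the example functions below are hypothetical:

```python
# Sketch of the enumeration technique for resolving X states: the
# circuit is evaluated with every X node set to all combinations of 0
# and 1. A node that settles to the same binary value in every case is
# assigned that value; otherwise it stays X.

from itertools import product

def resolve_x(evaluate, x_nodes):
    # evaluate: maps an assignment {node: 0/1} to the output value
    outcomes = {evaluate(dict(zip(x_nodes, bits)))
                for bits in product((0, 1), repeat=len(x_nodes))}
    return outcomes.pop() if len(outcomes) == 1 else 'X'

# y = a OR (NOT a): constant 1 regardless of the unknown input a
print(resolve_x(lambda env: env['a'] | (1 - env['a']), ['a']))  # 1
# y = a AND b with both inputs unknown: genuinely undetermined
print(resolve_x(lambda env: env['a'] & env['b'], ['a', 'b']))   # X
```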
To simulate tri-state gates and logic buses, some simulators use a fourth state, called the
high-impedance state and normally denoted by H (or Z). This H state is also sometimes
used to model dynamic memory, by allowing a node to retain its previous logic
state if the outputs of all logic gates connected to the node are at the H level.
Gate level simulators are not completely suitable for the logic simulation of MOS
circuits, because there is a mismatch between the Boolean gate model and the behavior
of MOS logic circuits. Hence, a different approach to the digital modeling and simulation
of MOS circuits is needed, which is switch-level logic simulation.
Switch level modeling and simulation: Switch-level simulators were developed for the
simulation of MOS circuits. One of the first switch-level simulators to be implemented
was MOSSIM.
In contrast to gate-level modeling and simulation, switch-level techniques operate
directly on the transistor circuit structure and capture many circuit properties that cannot
be represented in the gate-level model, for example the bidirectionality of signal flow,
charge-sharing effects, and transistor sizes. In contrast to circuit-level simulation, in
switch-level modeling node voltages are represented by discrete logic levels and
transistors by bidirectional resistive switches.
The resistive switch model of a transistor is controlled by the voltage level at its gate
terminal. An n-type transistor is conducting when its gate voltage is 1 and a p-type
transistor is conducting when its gate voltage is 0. Transistors are allowed to have
discrete strength values, depending on the values of their conductances when fully
ON. This is done to model the behavior of ratioed logic.
As an example, a depletion load transistor used in n-channel MOS circuit design has its
gate logic level set to 1 and its strength is weaker than that of an enhancement type
transistor. Transistors in series are equivalent to a single transistor of strength equal to the
weakest one, while transistors in parallel are equivalent to a single transistor of strength
equal to the strongest one (maximum conductance).
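The series/parallel strength rules reduce to min and max over the discrete strength values, as in this small sketch (the strength levels are hypothetical):

```python
# Sketch of the series/parallel strength rules: conducting transistors
# in series behave like a single switch with the weakest strength,
# while transistors in parallel take the strongest. Strengths are
# hypothetical discrete levels (a higher value means more conductive).

def series_strength(strengths):
    return min(strengths)   # the weakest device limits the path

def parallel_strength(strengths):
    return max(strengths)   # the strongest device dominates

# a weak depletion load (strength 1) against an enhancement
# pull-down (strength 2), in series and in parallel
print(series_strength([1, 2]))    # 1
print(parallel_strength([1, 2]))  # 2
```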
In most switch-level simulators, the circuit is partitioned into channel-connected
sub-circuits. This partitioning can be done once at the outset, where every transistor is
included, or dynamically at every iteration, where only conducting transistors are
included. Dynamic partitioning adds some additional overhead cost.
The simulation of the entire circuit follows an event scheduler similar in many ways to
gate-level logic simulators, except that now the "gates" consist of channel-connected
transistors.
LOGIC SYNTHESIS VERIFICATION
INTRODUCTION:
Logic synthesis is the process of converting a high-level description of a design into an
optimized gate-level representation. Logic synthesis uses a standard cell library, which
contains simple cells, such as basic logic gates like AND, OR, and NOR, or macro cells,
such as adders, muxes, memories, and flip-flops. The standard cells put together are
called a technology library. Normally the technology library is known by the transistor
size (0.18u, 90nm).
Common representations of Boolean functions are not standard, unique, or canonical. To
synthesize, optimize, verify, or manipulate large Boolean functions, they must be
represented efficiently by suitable methods. One such method to represent complex
Boolean functions is the Binary Decision Diagram (BDD). The BDD representation,
which is canonical, is the most popular among them.
Binary Decision Diagram (BDD)
A binary decision diagram (BDD) is a graphical representation of a Boolean function,
which is derivable from Shannon's expansion theorem. It is similar to a binary tree. So, a
binary decision diagram (BDD) is a finite DAG (Directed Acyclic Graph) with the
following features:
It has a unique initial node,
all non-terminals are labelled with a Boolean variable,
all terminals are labelled with 0 or 1,
all edges are labelled with 0 (dashed edge) or 1 (solid edge),
each non-terminal has exactly one out-edge labelled 0 and one out-edge labelled 1.
Shannon's Expansion Theorem:
Shannon's expansion theorem is used iteratively to build the BDD for a given
Boolean function.
Shannon's expansion theorem states that any switching function of n variables can be
expressed as a sum of products of n literals, one for each variable.
Let us assume that f(x1, x2, ..., xn) is a switching function of n variables. According to
Shannon, one way of expressing this function is
f(x1, x2, ..., xn) = x1 f(1, x2, ..., xn) + x1' f(0, x2, ..., xn)
On the right side, the function is the sum of two terms, one of them relevant when x1 is
equal to 1 and the other when x1 is equal to 0. The first term is x1 times what remains of
f when x1 is set to the value 1, and the second term is x1' times what remains of f when
x1 is set to 0.
Shannon's expansion theorem in the general case is
f = a_0 x1'x2'...xn' + a_1 x1'x2'...x(n-1)' xn + a_2 x1'x2'...x(n-1) xn' + ... + a_(2^n - 2) x1 x2 ... xn' + a_(2^n - 1) x1 x2 ... xn
Each a_i is a constant in which the subscript i is the decimal equivalent of the multiplier of
a_i viewed as a binary number. Thus, for three variables, a_5 (binary 101) is the coefficient
of x1 x2' x3.
In a similar way, any switching function of n variables can be
expressed as a product of sums of n literals, one for each variable.
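Shannon's expansion can be checked mechanically on a small example. The function f below is an arbitrary illustration, not one taken from the text:

```python
# Sketch verifying Shannon's expansion on a concrete function of three
# variables, f = x1*x2 + x3 (chosen arbitrarily). The expansion about
# x1 must reproduce f for every input combination.

from itertools import product

def f(x1, x2, x3):
    return (x1 & x2) | x3

def shannon_about_x1(x1, x2, x3):
    # f = x1 * f(1, x2, x3) + x1' * f(0, x2, x3)
    return (x1 & f(1, x2, x3)) | ((1 - x1) & f(0, x2, x3))

assert all(f(*v) == shannon_about_x1(*v)
           for v in product((0, 1), repeat=3))
print("expansion verified")
```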
Binary Decision Diagram (BDD) - Example:
Let us consider an example of constructing a BDD. The output of a Boolean function S is
given by the truth table, and the function is realized by the MUX based circuit shown in
the diagram.
In the BDD, a line with a bubble on it denotes the value 0 and a line without a bubble
denotes the value 1.
Let us consider S(0,0,0) in Figure (d) and S(1,1,1) in Figure (e).
There are several conventions to denote the value 1 and the value 0, for example:
Bubble vs. non-bubble line
Dashed line vs. solid line
T (then) vs. E (else) labels
For variables a, b, c, d the ordering should be such that a < b < c < d, as shown in the
diagram below.
(b)
The diagram (b) above shows an optimal ordering because there is exactly one node
for each variable. The order is b < c < a < d.
Reduction operations (ROBDD):
1. Removal of duplicate terminals. If a BDD contains more than one terminal 0-node,
then redirect all edges which point to such a 0-node to just one of them. Proceed in the
same way with terminal nodes labelled with 1.
2. Removal of redundant tests. If both outgoing edges of a node n point to the same node
m, then eliminate that node n, sending all its incoming edges to m.
3. Removal of duplicate non-terminals. If two distinct nodes n and m in the BDD are the
roots of structurally identical sub-BDDs, then eliminate one of them, say m, and redirect
all its incoming edges to the other one.
A BDD is reduced if it has been simplified as much as possible using these reduction
operations.
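The three reduction operations can be sketched with a unique-table (hash-consing) scheme in Python; the node representation used here is an assumption made for illustration:

```python
# Sketch of the ROBDD reduction operations, assuming non-terminal nodes
# are tuples (variable, low_child, high_child) and terminals are the
# integers 0 and 1. A unique table merges duplicate terminals and
# duplicate non-terminals (rules 1 and 3); the redundant-test rule
# (rule 2) drops any node whose two children are identical.

unique = {}   # the unique table: one shared object per distinct node

def make_node(var, low, high):
    if low == high:                  # rule 2: redundant test, skip the node
        return low
    key = (var, id(low), id(high))   # rules 1 & 3: share identical nodes
    if key not in unique:
        unique[key] = (var, low, high)
    return unique[key]

# x2 is tested but both branches agree, so the node collapses to 1
print(make_node('x2', 1, 1))   # 1

a = make_node('x2', 0, 1)
b = make_node('x2', 0, 1)
print(a is b)                  # True: the duplicate non-terminal is shared
```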
Examples: (i) Remove duplicate terminals. (ii) Remove redundant tests.
BDD Canonical Form: Binary decision diagrams are canonical (unique) for a given
ordering if all internal nodes are descendants of some node, there are no isomorphic
sub-graphs, and for every node fT ≠ fE.
The equivalence of two functions f and g can easily be checked by comparing the
structure of their ROBDDs. The various manipulations on BDDs can be performed
directly when the function is denoted in canonical form.
HIGH-LEVEL SYNTHESIS
INTRODUCTION:
HARDWARE MODELS :
All HLS systems need to restrict the target hardware. Most systems generate
synchronous hardware and build it with the following parts:
Functional units: They can perform one or more computations, e.g. addition,
multiplication, comparison, ALU operations.
Registers: They store inputs, intermediate results, and outputs; sometimes several
registers are taken together to form a register file.
Multiplexers: From several inputs, one is passed to the output.
Buses: A connection shared between several hardware elements, such that only one
element can write data at a specific time.
The data path: A network of functional units, registers, multiplexers, and buses. The
actual computation takes place in the data path.
Control: The part of the hardware that takes care of having the data present at the right
place at a specific time, of presenting the right instructions to a programmable unit, etc.
Often high-level synthesis concentrates on data-path synthesis. The control part is then
realized as a finite state machine or in microcode.
Synthesis tasks
High-level synthesis maps a behavioral description into the FSMD (FSM with a data
path) model, so that the data path executes the variable assignments and the control unit
implements the control constructs. Since the FSMD model determines the amount of
computation in each state, one must first define the number and type of resources
(storage units, functional units, and interconnection units) to be used in the data path.
Allocation is the task of defining necessary resources for a given design constraint.
The next task in mapping a behavioral description into an FSMD model is to partition the
behavioral description into states (or control steps) so that the allocated resources can
compute all the variable assignments in each state. This partitioning of behavior into
time intervals is called scheduling.
Although scheduling assigns each operation to a particular state, it does not assign it to a
particular component. To obtain the proper implementation, we assign each variable to a
storage unit, each operation to a functional unit, and each transfer from I/O ports to units
and among units to an interconnection unit. This task is called binding (or resource
sharing).
Binding defines the structure of the data path but not the structure of the control unit. The
final task, control synthesis, consists of reducing and encoding states and deriving the
logic network for next-state and control signals in the control unit. Control synthesis
employs well-known logic synthesis.
Allocation: The allocation task determines the type and quantity of resources used in the
chip architecture. It also determines the clocking scheme, memory hierarchy, and
pipelining style. The goal of allocation is to make appropriate trade-offs between the
design's cost and performance. If the original description contains inherent parallelism,
allocating more hardware resources increases area and cost, but it also creates more
opportunities for parallel operations or storage accesses, resulting in better performance.
On the other hand, allocating fewer resources decreases area and cost, but it also forces
operations to execute sequentially, resulting in poorer performance. To perform the
required trade-offs, allocation must determine the exact area and performance values.
Scheduling:
The next step schedules operations and memory accesses into clock cycles. Scheduling
algorithms are of two types, based on the optimization goal and the specified constraints.
If the user has completely specified all the available resources and the clock cycle length
during allocation, the scheduling algorithm's goal is to produce a design with the best
possible performance, i.e., the fewest clock cycles. In other words, scheduling must
maximize usage of the allocated resources. We call this approach resource-constrained
scheduling. If a list of resources is not available prior to scheduling, but a desired overall
performance is specified, the scheduling algorithm's goal is to produce a design with the
lowest possible cost, i.e., the fewest functional units. This is the time-constrained
scheduling approach. Resource-constrained scheduling usually constructs the schedule
one state at a time. It schedules operations so as not to exceed the resource constraints or
violate data dependencies. It ensures that, at the instant it schedules an operation Oi into
control step Sj, a resource capable of executing Oi is available and all the predecessors of
node Oi have been scheduled.
Binding : The binding task assigns the operations and memory accesses within each
clock cycle to available hardware units. A resource such as a functional, storage, or
interconnection unit can be shared by different operations, data accesses, or data transfers
if they are mutually exclusive. For example, two operations assigned to two different
control steps are mutually exclusive since they will never execute simultaneously; hence
they can be bound to the same hardware unit. Binding consists of three subtasks based on
the unit type.
There are two important scheduling algorithms: ASAP (as soon as possible) and
ALAP (as late as possible).
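Both algorithms can be sketched in Python on a small, hypothetical data-flow graph:

```python
# Sketch of ASAP and ALAP scheduling on a small data-flow graph given
# as {operation: [predecessors]}. ASAP places each operation in the
# earliest control step its data dependencies permit; ALAP places it in
# the latest step that still meets an overall deadline.

def asap(deps):
    step = {}
    while len(step) < len(deps):       # assumes deps is an acyclic graph
        for op, preds in deps.items():
            if op not in step and all(p in step for p in preds):
                step[op] = 1 + max((step[p] for p in preds), default=0)
    return step

def alap(deps, deadline):
    succs = {op: [s for s, ps in deps.items() if op in ps] for op in deps}
    step = {}
    while len(step) < len(deps):       # schedule backwards from the sinks
        for op in deps:
            if op not in step and all(s in step for s in succs[op]):
                step[op] = min((step[s] - 1 for s in succs[op]),
                               default=deadline)
    return step

deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
print(asap(deps))              # a and b in step 1, c in 2, d in 3
print(alap(deps, deadline=4))  # d in step 4, c in 3, a and b in 2
```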
PHYSICAL DESIGN AUTOMATION OF FPGAs
INTRODUCTION:
FPGA is a new approach to ASIC design that can dramatically reduce manufacturing
turnaround time and cost. In its simplest form, an FPGA consists of a regular array of
programmable logic blocks interconnected by a programmable routing network. A
programmable logic block is based on RAM and can be programmed by the user to act
as a small logic module. Given a circuit, the user can program the programmable logic
modules using an FPGA programming tool. The key advantage of FPGAs is
re-programmability. The RAM nature of the FPGAs allows for in-circuit flexibility that
is most useful when the specifications are likely to change in the final application. In
some applications, such as remote sensors, it is necessary to make system updates via
software. In an FPGA, a data channel is provided, which allows easy transfer of the new
logic function and reprogramming of the FPGA.
The physical design automation of FPGAs involves mainly three steps. They are
partitioning, placement and routing.
The partitioning problem in FPGAs is significantly different from the partitioning
problems in other design styles. It mainly depends on the architecture in which the
circuit has to be implemented.
The placement problem in FPGAs is very similar to the gate array placement problem.
The routing problem in FPGAs is to find a connection path and program the appropriate
interconnection points.
FPGA Technologies:
An FPGA architecture mainly consists of two parts: the logic blocks and the routing
network. A logic block has a fixed number of inputs and one output. A wide range of
functions can be implemented using a logic block. Given a circuit to be implemented
using FPGAs, it is first decomposed into smaller sub-circuits such that each sub-circuit
can be implemented using a single logic block. There are two types of logic blocks: the
first type is based on look-up tables (LUTs), while the second type is based on
multiplexers.
Look-up table based logic blocks:
A LUT based logic block is just a segment of RAM. A function can be implemented by
simply loading its LUT into the logic block at power-up: if a function needs to be
implemented, its truth table is loaded into the logic block. In this way, on receiving a
certain set of inputs, the logic block simply looks up the appropriate output and sets the
output line accordingly. Because of the reconfigurable nature of LUT based logic
blocks, they are also called Configurable Logic Blocks (CLBs). It is clear that 2^n bits
are required in a logic block to represent an n-bit input, 1-bit output combinational logic
function.
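The look-up mechanism can be sketched in Python as follows; the majority-function example used to "program" the block is hypothetical:

```python
# Sketch of a look-up-table (LUT) based logic block: a function of n
# inputs is stored as its 2^n-entry truth table, and evaluation is a
# simple table look-up with the input bits forming the address.

class LUT:
    def __init__(self, n_inputs, truth_table):
        assert len(truth_table) == 2 ** n_inputs   # 2^n bits per block
        self.table = truth_table                   # "programming" the block

    def __call__(self, *inputs):
        address = 0
        for bit in inputs:          # the input bits form the RAM address
            address = (address << 1) | bit
        return self.table[address]

# program a 3-input block as a majority function
# (truth-table entries for inputs 000 .. 111)
majority = LUT(3, [0, 0, 0, 1, 0, 1, 1, 1])
print(majority(1, 0, 1))  # 1
print(majority(0, 1, 0))  # 0
```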
Multiplexer based logic blocks: Typically a multiplexer based logic block consists of
three 2-to-1 multiplexers and one two-input OR gate, as shown in the figure below.
The number of inputs is eight. The circuit within the logic block can be used to
implement a wide range of functions. One such function, shown in Figure (a), can be
mapped to a logic block as shown in Figure (b). Thus, the programming of a multiplexer
based logic block is achieved by routing different inputs into the block.
There are two models of routing network: the segmented and the non-segmented routing
network.
Physical Design Cycle for FPGAs: The physical design cycle for FPGAs consists of the
following three important steps:
Partitioning: The circuit to be mapped onto the FPGA has to be partitioned into smaller
sub-circuits, such that each sub-circuit can be mapped to a programmable logic block.
Unlike the partitioning in other design styles, there are no constraints on the size of a
partition. However, there are constraints on the inputs and outputs of a partition. This is
due to the unique architecture of FPGAs.
Placement: In this step of the design cycle, the sub-circuits formed in the
partitioning phase are allocated physical locations on the FPGA, i.e., a logic block on
the FPGA is programmed to behave like the sub-circuit that is mapped to it. This
placement must be carried out in such a manner that the routers can complete the
interconnections. This is very critical, as the routing resources of the FPGA are limited.
Routing: In this phase, all the sub-circuits which have been programmed onto the FPGA
blocks are interconnected by blowing the fuses between the routing segments to achieve
the interconnections.
The figure above shows the complete physical design cycle of FPGAs. The system design
is available as a directed graph, which is partitioned in the second step. Placement
involves mapping of the sub-circuits onto CLBs; shaded rectangles represent CLBs which
have been programmed. The final step is the routing of the channels.
--------------xxxxxxxxxxx---------------